5.) Can “innocent until proven guilty” survive the next generation of predictive models?
I started thinking about this post while reading Dean Haycock's book Murderous Minds, a book about the neuroscience of psychopathy. In it, the author evokes "The Minority Report," the Philip K. Dick story turned into a Tom Cruise movie about a police agency that uses three individuals who can see the future in order to prevent violent crimes before they happen. Haycock isn't suggesting that precognition will ever be a tool for predicting crime, but what if a combination of genetics, epigenetics, brain imaging, and other technology reached the point where the tendency toward violent psychopathy (not redundant; most psychopaths function fine in society and don't commit crimes) could be predicted with a high degree of accuracy? [Note: unlike the Tom Cruise movie, no one is suggesting all violent crime could be anticipated, because a lot of it is committed by people with no risk factors whatsoever.] One is likely to first reach for the old refrain, Blackstone's formulation: it's better that ten guilty men escape justice than that one innocent man be punished. Now imagine a loved one were killed by a person known to have a 99% likelihood of committing a violent crime.
Of course, one doesn’t have to lock the high-risk individuals away in prison. What about laws forcing one to take either non-invasive or invasive actions (from meditation retreats to genetic editing) to reduce one’s risk factors? That’s still a presumption of guilt based on a model that — given the vagaries of the human condition — could never be perfectly accurate.
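The worry that such a model "could never be perfectly accurate" can be made concrete with a bit of arithmetic. Even a model that is right 99% of the time produces mostly false alarms when the condition it screens for is rare. The sketch below uses Bayes' rule with hypothetical numbers (the sensitivity, specificity, and prevalence figures are assumptions for illustration, not claims about any real model):

```python
# Hypothetical illustration of the base-rate problem behind any
# "99% accurate" crime-prediction model. All numbers are assumptions.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a flagged person is truly high-risk (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Suppose the model catches 99% of truly violent psychopaths (sensitivity),
# clears 99% of everyone else (specificity), and only 1 person in 1,000
# actually belongs to the high-risk group (prevalence).
ppv = positive_predictive_value(0.99, 0.99, 0.001)
print(f"{ppv:.1%}")  # roughly 9% — about nine in ten flagged people are innocent
```

Under those assumed numbers, punishing everyone the model flags would invert Blackstone's ratio: roughly ten innocent people would be restrained for every guilty one.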
4.) What does “trusted news source” mean in a world in which media outlets tailor their messages to support confirmation bias and avoid ugly cognitive dissonance? (i.e. to give viewers the warm-fuzzy [read: superior] feeling that keeps them watching rather than the messy, uneasy feelings that make them prefer to bury their heads in the sand and ignore any realities that conflict with their beliefs.) Arguably, this isn’t so much a problem for the next generation as for the present one. The aforementioned sci-fi legend, Philip K. Dick, addressed the idea of media manipulation in his stories as far back as the 1950s. However, it’s a problem that could get much worse as computers get more sophisticated at targeting individuals with messages tailored to their personal beliefs and past experiences. What if it goes beyond tweaking the message to encourage readership and into manipulating the reader for more nefarious ends? I started to think about this when I got the iPhone news feed, which is full of provocative headlines designed to make one click; if one doesn’t click, one will probably come away with a completely false understanding of the story. As an example, I recently saw a headline to the effect of “AI can predict your death with 95% accuracy.” It turns out that it can only make this prediction after one has shown up in an emergency room and had one’s vital statistics taken and recorded. [Not to mention that “95% accuracy” is completely meaningless without a time frame: minute of death, day, year, decade? I could name the century of death with 95% accuracy myself, given a large enough group.]
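The "century of death" quip can be demonstrated in a few lines: a prediction with no specified time window can hit any accuracy target trivially. The simulation below is a sketch with made-up numbers (the assumed mean and spread of lifespans are illustrative only):

```python
# Hypothetical sketch: "accuracy" is meaningless without a time frame
# and a baseline. A model that predicts every person will die "within
# a century" is almost always right, yet tells us nothing useful.
import random

random.seed(42)

# Simulated ages at death for 10,000 people (assumed distribution:
# mean 78 years, standard deviation 12, floored at zero).
ages_at_death = [max(0.0, random.gauss(78, 12)) for _ in range(10_000)]

def accuracy(predicate, outcomes):
    """Fraction of outcomes for which the prediction holds."""
    return sum(predicate(x) for x in outcomes) / len(outcomes)

# "You will die within 100 years of birth" — trivially accurate,
# with no emergency-room vitals required.
print(accuracy(lambda age: age < 100, ages_at_death))
```

A headline quoting such a figure without the time window, the population, and the baseline rate is closer to marketing than measurement.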
3.) When is it acceptable to shut down a self-aware Artificial Intelligence (AI), and — more importantly — will it let you? This is the most obvious and straightforward of the issues in this post. When is something that not only thinks but is aware of its thoughts considered equivalent to a human being for moral purposes, if ever?
2.) When is invisible surveillance acceptable / preferable? This idea came from a talk I heard by a Department of Homeland Security employee, back when I worked for Georgia Tech. He told us that the goal is eventually to get rid of the security screening checkpoints at the airport and have technology that would screen one as one walked down a corridor toward one’s gate. At first this sounds cool and awesome. No taking belts and shoes off. No running bags through metal detectors. No having to pitch your water bottle. No lines. No dropping your laptop because you’re precariously balancing multiple plastic bins and your carry-on luggage. [I don’t know if they would tackle one to the ground for having a toenail clipper in one’s bag or not, but — on the whole — this scheme seems awesome.] But then you realize that you’re being scanned to the most minute detail without your awareness.
One also has to consider the apathy effect: if an activity is made painless, people stop being cognizant of it. Consider taxation. If you pull a well-defined chunk out of people's paychecks, they keep an eye on how much you're taking. But if you can bury that tax, e.g. in the price of goods or services, people become far less likely to recognize rate changes or the like.
1.) If society could reduce pedophilic sexual abuse by allowing the production and dissemination of virtual-reality child pornography (computer-generated imagery only, no live models used; think computer games), should it? This idea is discussed in Jesse Bering's book Perv. It's not a completely hypothetical question: there is some scholarly evidence that such computer-made pornography can assuage some pedophiles' urges. However, the gut reaction of many [probably most] people is "hell no!" It's a prime example of emotion trumping reason. If you can reduce the amount of abuse by even a marginal amount, shouldn't you do so, given a lack of real costs or cons? (i.e. presuming the cost of the material would be paid by the viewer, the only real cost to the public would be the icky feeling of knowing that such material exists in the world.)