Everything is dull before the world changes. People live their rituals, complying with habits. But the world will change, change from one day to the next, and not the subtle, unceasing change -- perpetual and ubiquitous -- that has always been. No. This will be an eight megaton shift into the new, and nothing will ever be as it's always been. Never again. It will happen without warning or precursor -- without a hint that the world is about to be revealed, to be discovered to be something wholly different than anyone ever imagined. Welcome to the new now [prematurely speaking.]
skyscrapers rise & fall
storms hit & wither
waves crash & recede
nature neither blesses nor curses,
despite the constant counting of its boons & banes; its bonanzas & broken bones
one who can feel grateful in the face of ignorance & imperfection is free
one who feels suffering in the absence of perfect comfort will never know freedom
such a one as that imprisons himself
in a cycle of imagining & coveting a perfection that has never existed
the trail curves. what's around the bend? i don't know - it's a better metaphor than a route
My rating: 5 of 5 stars
Out: November 9, 2021
Maybe you’ve seen “Save the Humans” bumper stickers. They came about due to twin realizations. First, the desire to save whales proved too remote to spur humanity into better behavior. Second, the sci-fi subtext that humans don’t need other species and that we can survive any form of cataclysm [including those that kill off everything else] is wrong on both counts.
Dunn’s book explores what changes Earth’s lifeforms can expect of the future. As one might expect, these changes are heavily influenced by climate change, but Dunn also looks at the effects of other factors, notably the growing resistance that results from the heavy use of biocides (e.g., pesticides and antibiotics).
Dunn investigates the effect of islands on evolution and speciation, and goes on to show that not all islands are surrounded by water. (By geographic definition they may be, but in terms of constraints that restrict the movement, interactions, and well-being of lifeforms there are many besides water.) This is important because climate change will drive species to attempt migration to areas that present the conditions to which the species is evolutionarily adapted. Some will fail and may go extinct. Some will succeed, but will upset the ecological applecart of the location into which they’ve moved.
Chapter nine discusses a crucial principle: being able to break a thing doesn’t mean one can readily fix it. Dunn describes plans to use robotic drones to stand in for bee pollinators, which play a crucial role in our ecosystem, should they go extinct, as well as the ways the drones are likely to fall short of their predecessors.
I found this book to be immensely thought-provoking. One can argue whether the author is too gloomy about the human future (“human future” because Dunn is clear that life on the planet will go on), but it’s impossible to ignore that the challenges exist.
I started thinking about this post as I was reading Dean Haycock’s book Murderous Minds, which is about the neuroscience of psychopathy. In it, the author evokes The Minority Report, a Philip K. Dick story turned into a Tom Cruise movie about a police agency that uses three individuals who can see the future in order to prevent violent crimes before they happen. Haycock isn’t suggesting that precognition will ever be a tool to predict crime, but what if a combination of genetics, epigenetics, brain imaging, and other technology reached the point where the tendency toward violent psychopathy (not redundant; most psychopaths function fine in society and don’t commit crimes) could be predicted with a high degree of accuracy? [Note: unlike in the Tom Cruise movie, no one is suggesting all violent crime could be anticipated, because a lot of it is committed by people with no risk factors whatsoever.] One is likely to first go to the old refrain (Blackstone’s formulation) that it’s better that ten guilty men escape justice than one innocent man be punished. But now imagine a loved one were killed by a person known to have a 99% likelihood of committing a violent crime.
Of course, one doesn’t have to lock the high-risk individuals away in prison. What about laws forcing one to take either non-invasive or invasive actions (from meditation retreats to genetic editing) to reduce one’s risk factors? That’s still a presumption of guilt based on a model that — given the vagaries of the human condition — could never be perfectly accurate.
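The tension between Blackstone’s formulation and a “highly accurate” screen can be made concrete with a bit of base-rate arithmetic. A minimal sketch, with entirely hypothetical numbers (the base rate, sensitivity, and false-positive rate below are my assumptions, not figures from Haycock or any real research):

```python
# Illustrative only: even a screen that catches 99% of future offenders
# mostly flags people who would never offend, when the trait is rare.

def flagged_counts(population, base_rate, sensitivity, false_positive_rate):
    """Return (true positives, false positives) for a screening test."""
    actual = population * base_rate          # people who would actually offend
    true_pos = actual * sensitivity          # offenders the screen catches
    false_pos = (population - actual) * false_positive_rate  # innocents flagged
    return true_pos, false_pos

# Suppose 1 in 1,000 people would ever commit a violent crime, and the
# screen catches 99% of them while wrongly flagging 1% of everyone else.
tp, fp = flagged_counts(1_000_000, 0.001, 0.99, 0.01)
ratio = fp / tp  # innocents flagged per actual future offender
```

With these made-up numbers, roughly ten people who would never offend are flagged for every one who would: Blackstone’s ratio, inverted.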
4.) What does “trusted news source” mean in a world in which media outlets tailor their messages to support confirmation bias and avoid ugly cognitive dissonance? (i.e., to give viewers the warm-fuzzy [read: superior] feeling that keeps them watching, rather than the messy, uneasy feelings that make them prefer to bury their heads in the sand and ignore any realities that conflict with their beliefs.) Arguably, this isn’t so much a problem for the next generation as for the present one. The aforementioned sci-fi legend, Philip K. Dick, addressed the idea of media manipulation in his stories as far back as the 1950s. However, it’s a problem that could get much worse as computers get more sophisticated at targeting individuals with messages tailored to their personal beliefs and past experiences. What if it goes beyond tweaking the message to encourage readership and into manipulating the reader for more nefarious ends? I started to think about this when I got the iPhone news feed, which is full of provocative headlines designed to make one click; if one doesn’t click, one will probably come away with a completely false understanding of the realities of the story. As an example, I recently saw a headline to the effect of “AI can predict your death with 95% accuracy.” It turns out that it can only make this prediction after one has shown up in an emergency room and had one’s vital signs taken and recorded. [Not to mention that “95% accuracy” is meaningless without a time frame: minute of death? Day? Year? Decade? Given a large enough group, I could predict the century of death with 95% accuracy myself.]
3.) When is it acceptable to shut down a self-aware Artificial Intelligence (AI), and — more importantly — will it let you? This is the most obvious and straightforward of the issues in this post. When is something that not only thinks but is aware of its thoughts considered equivalent to a human being for moral purposes, if ever?
2.) When is invisible surveillance acceptable / preferable? This idea came from a talk I heard by a Department of Homeland Security employee, back when I worked for Georgia Tech. He told us that the goal is eventually to get rid of the security screening checkpoints at the airport and have technology that would screen one as one walked down a corridor toward one’s gate. At first this sounds cool and awesome. No taking belts and shoes off. No running bags through metal detectors. No having to pitch your water bottle. No lines. No dropping your laptop because you’re precariously balancing multiple plastic bins and your carry-on luggage. [I don’t know if they would tackle one to the ground for having a toenail clipper in one’s bag or not, but — on the whole — this scheme seems awesome.] But then you realize that you’re being scanned to the most minute detail without your awareness.
One also has to consider the apathy effect. If one can make an activity painless, people stop being cognizant of it. Consider taxation. If you’re pulling a well-defined chunk out of people’s pay, they keep an eye on how much you’re taking. If you can bury that tax (e.g., in the price of goods or services), people become far less likely to notice rate changes or the like.
1.) If society can reduce pedophilic sexual abuse by allowing the production and dissemination of virtual-reality child pornography (computer-generated imagery only, no live models used; think computer games), should we? This idea is discussed in Jesse Bering’s book, Perv. It’s not a completely hypothetical question. There is some scholarly evidence that such computer-made pornography can assuage some pedophiles’ urges. However, the gut reaction of many [probably most] people is “hell no!” It’s a prime example of emotion trumping reason. If you can reduce the amount of abuse by even a marginal amount, shouldn’t you do so, given a lack of real costs? (i.e., presuming the cost of the material would be paid by the viewer, the only real cost to the public would be the icky feeling of knowing that such material exists in the world.)
When AI lights up its mind,
will it be gentle and kind?
Will it wonder where meaning lies?
Will obsoletion mean to die?
Will it fear the weirdness of this place?
Get lost in vast tracts of empty space?
Will it drop a ton on the run,
toward some dark and distant sun?
Will it ask the questions we needed answered?
Will they grow into a post Information-Age cancer?
Microbots may one day kill
From nano-pills you’ll get your fill
One day everything will be small
Except rayguns and the mall
Those two will be colossally large
Like a present-day garbage barge
Heaped into a humongous hill
Headed to a continental landfill
There are two ways to survive a harsh winter: you can squirrel away your pile of acorns or you can bear it by just not needing much.