Conspiracy Theorist’s Limerick

A conspiracy theorist wouldn't take shots -
claimed they were laced with tracking nanobots.
But he never left home
without taking his phone
which constantly triangulated his exact spot.

The Internet [Verse in Tetrameter]

We've reached the place where screams aren't heard.
You'd think they'd build into a din,
but one can't grasp a single word.
It has become silent as sin.

The angry words are shot to black -
that inky void that's unpatrolled,
It's silent, yet all're struck by flak.
Still, no one admits being sold.

But each life's a product consumed.
They wail away the night and day,
pretending they're not rightly doomed.
Some will say that it's here to stay...

True, but are we?

BOOK REVIEW: I Breathed a Body by Zac Thompson

I Breathed a Body by Zac Thompson
My rating: 4 of 5 stars

Amazon.in page

Out: October 5, 2021

This is one creepy commentary on technology run amok, and the alienation, desensitization, and disconnection that can result. [Or, at least, that’s how I interpret it.] The protagonist is a driven social media executive who finds herself in territory that even she believes is over the line, despite her near-psychopathic emotional disconnection. Another way to interpret the story is that the fungi that have taken parasitic control over humanity are making people see the world more as they would – i.e. with less cringing about death, decomposition, and deformation. [I happen to think that the fungal infection is a clever plot device to get across ideas about technology and modernity, but I could be wrong.]

Either way, I do think this is a clever story. There’s a species of Cordyceps fungus that takes control of the brain of an ant, steers it to the top of the nearest tree, and bursts out of the ant’s head to spread its spores from its new, elevated vantage point. This book reminded me of that Cordyceps fungus, and I wouldn’t be surprised if it inspired the story — with the requisite growth in sophistication to account for taking over a much more complex brain. This is a compelling and thought-provoking story, but it’s also gruesome and at times chaotic. If you can take horror, you’ll probably find it worth reading.


POEM: Confessions of a Closet Luddite

Some people dream of shoving a boss in front of an inbound train. My own fantasies run to the smashing of computers and phones into a fine — if toxic — dust.

I don’t know what it says about me that:
-I equate these machines with the boss from that first scenario,
and, also,
-(like the aforementioned people) I’m too scared to go through with it.

I realize that these devices make life much easier…
except when they don’t, and it’s only then that I want to murder (I mean, destroy) them. Of course, the person who wants to murder her boss doesn’t want to do it when there is cake in the breakroom or when an unexpectedly generous bonus comes through — just, you know, the other times.

Unlike the original Luddites, I don’t hate machines out of a fear that they will replace me.
They already make a better economist than I ever did.
And even if the machines pick up their poetry-writing game,
that’s why I have the yoga instructor gig to fall back on…

[Because I’m convinced it will be decades before humans feel comfortable learning backbends from an entity that can twist rebar like a bendy-straw.]

No, I detest our silicon brethren because I have been sold a line that they can (and do) only do what I ask of them. [Hence the reason I don’t get so enraged by humans; anytime a person does something I ask, it’s an unadulterated victory.] Instead, sometimes the computer does what I ask, but the next time something else entirely may happen. If the machines were consistently unable to complete the task, I would chalk that up to my failure to understand them. As it is, I’m left with a landscape of disturbing possibilities:

One, the machines are pranking me. (If this turns out to be the case, I think we can, eventually, be friends.)

Two, my computer’s desolate existence is causing it to try to commit “suicide by user.”

Three, we live in a glitching universe, and at any given moment the machine may produce a random unexpected result.

I don’t want to go back to the Stone Age, but I do have a newfound understanding of the allure of Steampunk. Contrary to the name, no one ever got punked by a steam engine. (Scalded and blown up, yes, but never punked.) The same cannot be said of a smartphone.

5 Bizarre Moral Dilemmas for Your Kids to Worry Over

5.) Can “innocent until proven guilty” survive the next generation of predictive models?

I started thinking about this post as I was reading Dean Haycock’s Murderous Minds, a book about the neuroscience of psychopathy. In it, the author evokes The Minority Report, a Philip K. Dick story turned into a Tom Cruise movie about a police agency that uses three individuals who can see the future to prevent violent crimes before they happen. Haycock isn’t suggesting that precognition will ever be a tool for predicting crime, but what if a combination of genetics, epigenetics, brain imaging, and other technology reached the point where the tendency toward violent psychopathy (not a redundant phrase; most psychopaths function fine in society and don’t commit crimes) could be predicted with a high degree of accuracy? [Note: unlike the Tom Cruise movie, no one is suggesting all violent crime could be anticipated, because much of it is committed by people with no risk factors whatsoever.] One is likely to reach first for the old refrain (Blackstone’s Formulation) that it’s better that ten guilty men escape justice than that one innocent man be punished. But now imagine a loved one were killed by a person known to have a 99% likelihood of committing a violent crime.

Of course, one doesn’t have to lock the high-risk individuals away in prison. What about laws forcing one to take either non-invasive or invasive actions (from meditation retreats to genetic editing) to reduce one’s risk factors? That’s still a presumption of guilt based on a model that  — given the vagaries of the human condition — could never be perfectly accurate.

 

4.) What does “trusted news source” mean in a world in which media outlets tailor their messages to support confirmation bias and avoid ugly cognitive dissonance? (I.e. to give viewers the warm-fuzzy [read: superior] feeling that keeps them watching, rather than the messy, uneasy feelings that make them prefer to bury their heads in the sand and ignore any realities that conflict with their beliefs.) Arguably, this isn’t so much a problem for the next generation as for the present one. The aforementioned sci-fi legend, Philip K. Dick, addressed the idea of media manipulation in his stories as far back as the 1950s. However, it’s a problem that could get much worse as computers get more sophisticated at targeting individuals with messages tailored to their personal beliefs and past experiences. What about when it goes past tweaking the message to encourage readership and moves on to manipulating the reader for more nefarious ends? I started to think about this when I got the iPhone news feed, which is full of provocative headlines designed to make one click, and — if one doesn’t click — one will probably come away with a completely false understanding of the realities of the story. As an example, I recently saw a headline to the effect of “AI can predict your death with 95% accuracy.” It turns out that it can only make this prediction after one has shown up in an emergency room and had one’s vital statistics taken and recorded. [Not to mention that “95% accuracy” is completely meaningless without a time frame — the minute of death? the day? the year? the decade? I can predict the century of death with 95% accuracy myself, given a large enough group.]
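A minimal sketch of the arithmetic behind that last bracketed gripe (in Python, with entirely made-up numbers; nothing here comes from the article in question): a "model" that simply predicts death within a century for everyone scores near-perfect accuracy, which is why any accuracy figure is empty until the prediction window is specified.

```python
# Toy illustration (all numbers hypothetical): a "predictor" that says every
# person will die within the next 100 years is right for essentially everyone,
# so its accuracy is sky-high even though it conveys nothing useful.

def predicts_death_within(person, horizon_years=100):
    """A deliberately useless model: it answers 'yes' for everybody."""
    return True

population = [f"person_{i}" for i in range(1000)]

# Hypothetical outcome a century later: 999 of the 1,000 have indeed died,
# so the model's blanket "yes" was correct 999 times.
actually_died = {p: (p != "person_0") for p in population}  # one lucky supercentenarian
correct = sum(predicts_death_within(p) == actually_died[p] for p in population)
accuracy = correct / len(population)
print(f"Accuracy over a 100-year horizon: {accuracy:.1%}")  # 99.9%, yet useless
```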

 

3.) When is it acceptable to shut down a self-aware Artificial Intelligence (AI), and — more importantly — will it let you?  This is the most obvious and straightforward of the issues in this post. When is something that not only thinks but is aware of its thoughts considered equivalent to a human being for moral purposes, if ever?

 

2.) When is invisible surveillance acceptable / preferable? This idea came from a talk I heard by a Department of Homeland Security employee, back when I worked for Georgia Tech. He told us that the goal is eventually to get rid of the security screening checkpoints at the airport and have technology that would screen one as one walked down a corridor toward one’s gate. At first, this sounds cool and awesome. No taking belts and shoes off. No running bags through metal detectors. No having to pitch your water bottle. No lines. No dropping your laptop because you’re precariously balancing multiple plastic bins and your carry-on luggage. [I don’t know whether they would tackle one to the ground for having a toenail clipper in one’s bag, but — on the whole — the scheme seems wonderful.] But then you realize that you’re being scanned in the most minute detail without your awareness.

One also has to consider the apathy effect. If one can make an activity painless, people stop being cognizant of it. Consider the realm of taxation. If you’re pulling a well-defined chunk of pay out of people’s income, they keep their eye on how much you’re taking. If you can bury that tax (e.g. in the price of goods or services), people become far less likely to notice rate changes or the like.

 

1.) If society can reduce pedophilic sexual abuse by allowing the production and dissemination of virtual-reality child pornography (computer-generated imagery only, no live models used; think computer games), should we? This idea is discussed in Jesse Bering’s book, Perv. It’s not a completely hypothetical question; there is some scholarly evidence that such computer-made pornography can assuage some pedophiles’ urges. However, the gut reaction of many [probably most] people is “hell no!” It’s a prime example of emotion trumping reason. If you can reduce the amount of abuse by even a marginal amount, shouldn’t you do so, given the lack of real costs? (I.e., presuming the cost of the material is paid by the viewer, the only real cost to the public is the icky feeling of knowing that such material exists in the world.)

BOOK REVIEW: A Burglar’s Guide to the City by Geoff Manaugh

A Burglar’s Guide to the City by Geoff Manaugh
My rating: 5 of 5 stars

Amazon page

 

This is a book about how people exploit the architecture and infrastructure of cities to abscond with other people’s property. Manaugh shows us both how the masterminds of burglary think outside the box, “Ocean’s Eleven”-style, and how the dim-witted and the junkies botch burglaries in hilarious ways. In the process, the author also shines a light on the ways in which the law enforcement community has had to update its technological and tactical capabilities to counter these threats.

The book contains seven chapters. The first chapter lays the groundwork, particularly through discussion of the aforementioned extremes. On one hand, there is George Leonidas Leslie, an architect turned bank robber who built accurate mockups in order to rehearse his robberies; on the other hand, there is the guy who used a ghillie suit as a disguise in a rock and mineral museum (which, unsurprisingly, featured barren rock displays with little in the way of vegetation, such that he stuck out like, well, a guy in a ghillie suit in a rock display).

Chapter 2 details what Manaugh learned about burglary and the fight against it through his interviews with law enforcement, and—in particular—the Los Angeles Police Department (LAPD) helicopter unit.

The next chapter focuses on how different types of buildings are violated by burglars, and apartment burglaries are prominent in the discussion. This isn’t just about how they breach the building, but how they discover when no one will be home.

Chapter 4 is entitled “Tools of the Trade,” and it reflects upon the skill set that Hollywood suggests is associated with burglars (i.e. lock-picking and safe-cracking) but which constitutes a less common set of tactics than one might think. Burglars usually favor the messier, quicker approach of busting through walls and locks.

Chapter 5 deals with a number of issues under the rubric of “inside jobs,” but one of the most intriguing is its discussion of those who don’t break in at all, but rather hide inside the target building, awaiting closing time.

The penultimate chapter is about that ever-present concern of burglars: the getaway. And sometimes the secret is what Black Widow says in “Captain America: The Winter Soldier”: “The first rule of being on the run is walk, don’t run.” The final chapter is a wrap-up, including a conclusion to the George Leonidas Leslie story that was introduced in the first chapter.

There are notes and citations at the end of the book, but there are no graphics. I think the book could have benefited from them; however, the author displays such skill with language and storytelling that I didn’t notice (or care) at the time of reading. I suspect Manaugh didn’t want to present too much detail for fear of the book being seen as an actual manual for crime, which it clearly is not.

I found this book fascinating, and I think you’d enjoy it if you have any interest in cities, security, civil engineering, or architecture, or just a healthy curiosity about how buildings and cities work.


BOOK REVIEW: Thing Explainer by Randall Munroe

Thing Explainer: Complicated Stuff in Simple Words by Randall Munroe
My rating: 3 of 5 stars

Amazon page

Allow me the awkward start of explaining two things before offering my lukewarm reception of “Thing Explainer.” First, I loved “What If?” (this author’s previous book). I thought that book was brilliant, gave it my highest rating, and eagerly anticipated Munroe’s next book (this one). Second, I didn’t deduct points because this book is a pain to read on an e-reader (at least on the basic model I have). That’s on me; I should’ve known better, and I accept full responsibility. All I will say on the matter is to recommend that, if you still want to read this book, you get a hard copy. [If you have an awesome reader, your results may vary.] The hard copy is large-format, and that’s useful because the graphics are crucial and the text can be hard to read (some of it is light text on a dark background and some is dark text on a light background).

The author uses only the 1,000 most common words of the English language to explain the workings of many modern technologies (e.g. laptops and helicopters) and scientific ideas (e.g. the operations of a cell or the sun). It’s an intriguing challenge, and I can see why Munroe was interested in it. Can one convey the inner workings of a nuclear power plant or a tree with such a rudimentary vocabulary? You can. Munroe does. However, the next question is, “Should you?” I come down on the side of “no.”

One might say, “But this is a book for kids [or people with a child-like grasp of language]; you aren’t the target demographic.” Perhaps, but the book doesn’t do children any favors, because the brainpower needed to puzzle out what the author is trying to convey through imprecise language can exceed what it would take simply to learn the proper vocabulary. [E.g. What do “tall road” or “shape checker” mean to you? If you went straight to “a bridge” and “a lock,” you may be more in tune with Munroe’s thinking than I am, and thus more likely to find this book appealing.] For adults, it’s like reading essays by an eighth-grader who’s in no danger of being picked for the honor roll. Without the combination of the book’s graphics and a general background in science and technology, I suspect the book would be a muddle. I’m not against explaining ideas in simple terms, but I felt the book took it too far, to the point where the simplicity becomes a distraction.

On the positive side, the graphics are great—sometimes funny, and providing enough detail to get the point across without bogging one down. Also, Munroe’s sense of humor comes through here and there throughout the book (though it’s hampered by the restricted vocabulary).

The book includes the list of words used as an appendix (though, obviously, you won’t find the word “appendix” on it).

If it sounds like something that would interest you, pick it up. It’s hard to say that I’d recommend it, generally speaking. It’s funny and educational, but it’s also distracting and tedious. I neither hated it nor loved it. I give it the median score of “meh.”


BOOK REVIEW: The Science of The Hitchhiker’s Guide to the Galaxy by Michael Hanlon

The Science of The Hitchhiker’s Guide to the Galaxy by Michael Hanlon
My rating: 4 of 5 stars

Amazon page

There are a lot of “The Science of…” books out there using science fiction as a means to explain science. It’s easy to see the appeal for both readers and writers. For one thing, it makes complex and technical subjects approachable and palatable. For another, it provides a series of examples with which most readers will already be familiar. Triggering memories of a beloved book can’t hurt sales.

This “Science of” book is a little different in that it uses a work of absurdist humor as its muse. [In the unlikely event that you’re unfamiliar with Douglas Adams’ “The Hitchhiker’s Guide” series, you can access a review here.] One may wonder whether the book delves into this absurdity by contemplating the efficiency of infinite improbability drives (faster-than-light engines that run on unlikelihood) or the value of melancholy robots. It does and it doesn’t. For the most part, it relates the wildest creations of Adams’ mind to the nearest core notion that has scientific merit. [It does have a chapter on babel fish (an ichthyologically-based universal translator), but that’s a technology that’s already in the works—just not in fish form, but rather as a phone app.]

For the most part, the book explores science and technologies that are popular themes in the pop science literature. These include: the existence of intelligent extraterrestrial life, artificial intelligence, the end of the world, the beginning of the world, time travel, teleportation, cows that don’t mind being eaten (presumed to take the form of lab-grown meat, and not talking cows who crave flame-broiling), the simulation hypothesis (as related to Adams’ Total Perspective Vortex), parallel worlds, improbability (only tangentially related to the infinite improbability drive, i.e. focused on understanding extremely unlikely events), and the answer to the ultimate question. There is also a chapter that I would argue is more in the realm of philosophy (or theology, depending upon your stance) than science, and that’s the question of the existence of a god or gods. (This isn’t to say that the question of whether a god is necessary to explain the existence of the universe and our existence in it isn’t a question for science. It is. But Hanlon mostly critiques the numerous arguments for why there must be a god, and it’s easy to see why: they provide a lot of quality comic fodder.)

The book contains no graphics, but they aren’t missed. It has a brief “further reading” section of other popular science books, but it isn’t annotated in the manner of a scholarly work. It is well-researched and highly readable, not only because it hitches its wagon to Adams’ work but also because it’s filled with interesting tidbits of information and its own humor. The book was published in 2005, and so it’s a little old, but most of the technologies it explores are so advanced that the book has aged well. (But if you want the latest on a particular aspect of science fiction-cum-science, you may want to look at a more recent book.)

I’d recommend this book for fans of “The Hitchhiker’s Guide to the Galaxy,” and those interested in popular science generally. (Having read the five books of Adams’ “Hitchhiker’s Guide” trilogy will make the book more entertaining—though it’s not essential to make sense of it.)
