BOOK REVIEW: Invention and Innovation by Vaclav Smil

Inventions and Innovations: A Brief History of Infatuation, Overpromise, and Disappointment by Vaclav Smil
My rating: 5 of 5 stars

Amazon.in Page

Release Date: February 14, 2023

This book is about technological failures, the various ways in which technologies fail, and what lessons can be learned from these failures when hearing about new “world-changing breakthroughs.” The author explores nine technologies in depth, three for each of three varieties of technology failure.

The first group consists of technologies that came online as promised, fixing a major problem, only to later be discovered to have side effects deemed disastrous. The examples used are leaded gasoline, DDT pesticide, and CFC (chlorofluorocarbon) refrigerants. These technologies have come to be associated with health problems, air pollution, ecological collapse, and ozone depletion.

The second group (like the first) came online, but never became competitive with existing technologies. The examples presented are airships, nuclear fission for power production, and supersonic flight. Airships died out not only because of the Hindenburg disaster, but also because people preferred airplanes to a craft with the combined slowness of a boat and the crash potential of a plane. Nuclear fission became untenable for new commercial power plants due to a risk premium on build costs, even though it doesn’t contribute to global warming and (once the plants are paid for) is exceedingly cheap per kilowatt-hour. Supersonic flight was simply too costly and short-ranged to compete with subsonic flight.

The final group consists of technologies that failed to come online at all, despite intense efforts. These include travel by vacuum tube (i.e. the Hyperloop; yes, like at the bank, but with people inside), nitrogen-fixing grains (which would negate the need for fertilizer), and nuclear fusion. Despite the celebrity-billionaire enthusiasm of Elon Musk and Richard Branson, the Hyperloop isn’t advancing because of the challenges of maintaining a vacuum over large distances. Making cereal grains with the nitrogen-fixing capabilities of legumes has also proven more difficult than expected. Nuclear fusion recently had a moment in the sun when, for the first time, researchers got more energy out of a fusion reaction than was needed to trigger it. (This wasn’t covered in the review copy I read, but I suspect it will be mentioned in the finished book. At any rate, it doesn’t negate the author’s point, as it’s still just one of several breakthroughs that would be needed for the technology to be commercially viable.)

In the last chapter, the author gets into a number of other technologies with shorter discussions that are meant to illustrate specific issues with excessive technological optimism. He also investigates some technologies that he believes need to come down the pike, given our present and expected future challenges.

I found this book fascinating. The author seems to love being contrarian (he contests not only the popular optimism of those who overestimate technological progress but also the pessimism regarding the first group of failed technologies, so it appears he enjoys pointing out how mass opinion [or the opinion of another smart person] is wrong.) That said, there’s a great deal of thought-provoking information in the book, and I think it can help readers consider claims about up-and-coming technologies more critically.


View all my reviews

The Internet [Verse in Tetrameter]

We've reached the place where screams aren't heard.
You'd think they'd build into a din,
but one can't grasp a single word.
It has become silent as sin.

The angry words are shot to black -
that inky void that's unpatrolled,
It's silent, yet all're struck by flak.
Still, no one admits being sold.

But each life's a product consumed.
They wail away the night and day,
pretending they're not rightly doomed.
Some will say that it's here to stay...

True, but are we?

BOOK REVIEW: I Breathed a Body by Zac Thompson

I Breathed a Body by Zac Thompson
My rating: 4 of 5 stars

Amazon.in page

Out: October 5, 2021

This is one creepy commentary on technology run amok, and the alienation, desensitization, and disconnection that can result. [Or, at least, that’s how I interpret it.] The protagonist is a driven social media executive who finds herself in territory that even she believes is over the line, despite her near-psychopathic emotional disconnection. Another way to interpret the story is that the fungus that has taken parasitic control over humanity is making people see the world more as it would – i.e. with less cringing about death, decomposition, and deformation. [I happen to think the fungal infection is a clever plot device to get across ideas about technology and modernity, but I could be wrong.]

Either way, I do think this is a clever story. There’s a species of Cordyceps fungus that takes control of an ant’s brain, steers it to the top of the nearest tree, and bursts out of the ant’s head to spread its spores from its new, elevated vantage point. This book reminded me of that fungus, and I wouldn’t be surprised if it inspired the story — with the requisite growth in sophistication to account for taking over a much more complex brain. This is a compelling and thought-provoking story, but it’s also gruesome and at times chaotic. If you can take horror, you’ll probably find it worth reading.

View all my reviews

POEM: Misinformed GPS [PoMo Day 26 – Pantoum]

In one hundred meters, turn right!
In five hundred meters, U-turn!
Turn left now... Recalculating.
In three hundred meters, turn left!

In five hundred meters, U-turn!
U-turn, now... Recalculating.
In three hundred meters, turn left!
In one hundred meters, turn left!

U-turn, now... Recalculating...
Recalculating... Recalculating.
In two kilometers, turn right!
¿Debería hablar español?

Recalculating... Recalculating.
Do you think this is funny, Hal?
¿Debería hablar español?
Hal, I swear I'll have an Amber Alert put out on this car. You don't think I have computer friends? You don't think two can play at this game? You want to play thermonuclear war? It's on!

POEM: Confessions of a Closet Luddite

Some people dream of shoving a boss in front of an inbound train. My own fantasies run to the smashing of computers and phones into a fine — if toxic — dust.

I don’t know what it says about me that:
-I equate these machines with the boss from that first scenario,
and, also,
-(like the aforementioned people) I’m too scared to go through with it.

I realize that these devices make life much easier…
except when they don’t, and it’s only then that I want to murder (well, destroy) them. Of course, the person who wants to murder her boss doesn’t want to do it when there is cake in the breakroom or when an unexpectedly generous bonus comes through — just, you know, the other times.

Unlike the original Luddites, I don’t hate machines out of a fear that they will replace me.
They already make a better economist than I ever did.
And even if the machines pick up their poetry-writing game,
that’s why I have the yoga instructor gig to fall back on…

[Because I’m convinced it will be decades before humans feel comfortable learning backbends from an entity that can twist rebar like a bendy-straw.]

No, I detest our silicon brethren because I have been sold a line that they can (and do) only do what I ask of them. [Hence the reason I don’t get so enraged by humans; any time a person does something I ask, it’s an unadulterated victory.] Instead, sometimes the computer does what I ask, but the next time something else entirely may happen. If the machines were consistently unable to complete the task, I would chalk that up to my failure to understand them. As it is, I’m left with a landscape of disturbing possibilities:

One, the machines are pranking me. (If this turns out to be the case, I think we can, eventually, be friends.)

Two, my computer’s desolate existence is causing it to try to commit “suicide by user.”

Three, we live in a glitching universe, and at any given moment the machine may produce a random unexpected result.

I don’t want to go back to the Stone Age, but I do have a newfound understanding of the allure of Steampunk. Contrary to the name, no one ever got punked by a steam engine. (Scalded and blown up, yes, but never punked.) The same cannot be said of a smartphone.

BOOK REVIEW: The End of Killing by Rick Smith

The End of Killing: How Our Newest Technologies Can Solve Humanity’s Oldest Problem by Rick Smith
My rating: 5 of 5 stars

Amazon page

 

Before one dismisses this book based on its seemingly Pollyannaish title, I’d suggest thinking of it as an opening volley in what promises to be a series of crucial debates that will play out — one way or another — in the years to come. I believe Smith, founder and CEO of TASER and Axon, did a great job of presenting an argument for the pursuit of a range of technologies and policies intended to curb violence, as well as anticipating, presenting, and debating many of the opposing arguments. The book’s tone is more pragmatic than its bold and controversial title might suggest. That said, I don’t agree with all of the author’s conclusions by any means, though I do agree these questions need to be thoughtfully considered and debated.

I’d put the technologies and policies Smith advocates for into three basic categories. First, those that are nearly inevitable given societal winds of change and the nature of technological development (e.g. nonlethals becoming the primary weapons of the law enforcement community, automated systems being deployed to curb violence in schools, and ending the war on drugs.) Second, those which may be laudable, but which are hard to imagine coming to fruition in the world we live in [or are likely to see in the foreseeable future] (e.g. nonlethals becoming the primary [or exclusive] weapons of the military.) Third, those which are so full of the peril of unintended consequences as to be, frankly, terrifying – if not dystopian (i.e. the use of surveillance and profiling technologies to ACTIVELY attempt to prevent crimes that haven’t yet happened.)

Instead of describing the contents of the book chapter by chapter, I’ll discuss its ideas through the lens presented in the preceding paragraph – starting with the seemingly inevitable technologies. The central thrust of this book is that nonlethal technology needs to be developed / improved such that nonlethals can take up a progressively greater portion of weapons deployment and usage, with the aim of ultimately replacing firearms (and other lethal weapons) with nonlethal weapons. It’s important to note that Smith doesn’t suggest such a replacement could happen at present. He acknowledges that nonlethals are currently not as effective and reliable at incapacitating a threat as are firearms, and he isn’t advocating that people be put at risk by having to defend themselves with an inferior weapon. However, it seems reasonable, given the tremendous technological advances that have occurred, that nonlethal weaponry could become as or more effective than firearms.

If that doesn’t seem reasonable, I would remind one that firearms aren’t – as a rule – as instantaneously and definitively incapacitating as Hollywood portrays. One can find numerous cases of individuals still moving with a magazine’s worth of bullets in — or having passed through — them. (And that’s not to mention the lack of precision that tends to come with throwing a projectile via a controlled explosion.) The point being, one isn’t competing with perfection – so one doesn’t need to be perfect, only better than an existing [flawed] system.

Smith addresses the dividends of nonlethal weapon usage over that of the lethal counterparts, and there are many. For one thing, killing isn’t easy on anyone (anyone who’s right in the head, anyway.) Even when a killing is legally justifiable and morally defensible (or even state-sanctioned), it often still results in traumatic stress. For another, there is the reduced cost of getting it wrong, and the adverse societal impacts (e.g. revenge killings) that result from wrongful deaths. Long story short, if one can produce a nonlethal that’s consistently as effective at incapacitating a threat, it’s hard to make a rational argument for not fielding said weapon. The example of an automated system to respond to school shootings is an extension of the nonlethal weapons argument, as it’s ultimately based on nonlethals deployed by drone (or robotic system.) The chapter on the war on drugs (Ch. 15) bears little discussion, as it’s no news that that “war” has been a failure and a phenomenally ineffective way of addressing a societal problem.

That brings us to the laudable but unlikely category in which I put military use of nonlethals as primary (or exclusive) weapons. I’m not saying that military nonlethal weapon systems won’t continue to be developed, improved, and deployed. Given the degree to which war of late features non-state actors and unconventional warfare, it’s possible to imagine such weapons playing a dominant role in specific operations. After all, military members aren’t exempt from the psychological costs of killing. However, military forces deploying into a war zone with nonlethals as their primary weapons is almost impossible to imagine, especially considering the diversity of conditions and opponents for which a military needs to be ready.

In warfare, there is something called the “force multiplier” effect of wounding an enemy rather than killing him. That is, if you wound someone, it takes two people to carry him or her, plus a chunk of a medic’s time. So, one can imagine four people being out of the fight because one person is severely wounded, versus the single person who would be out of commission (the dead person) if that individual were killed. To be fair, Smith imagines technology (drones and robots) doing the heavy lifting. Still, it’s hard to imagine how one side in a conflict wins if it has to transport, warehouse, feed, and care for every enemy that is incapacitated while the other side is just killing away. Even if that one side is much more automated, it seems tremendously expensive – even for a relatively small-scale war.

That brings me to chapter five, which I found chilling. That chapter considers how artificial intelligence and surveillance programs (albeit with judicial oversight and other protections) could be used to anticipate crimes so that law enforcement could actively go forth to try to prevent them. (If this sounds a lot like “Minority Report,” the Tom Cruise movie loosely based on a PKD story, it’s because it essentially replaces the three pasty precognitives with computers and offers a bit more oversight. While Smith cautions against taking fictional stories too seriously, he employs some fictional scenarios that I believe might be as Pollyannaish as the Spielberg film is dark.) At any rate, the word “actively” is crucial to my concern. I’m all in favor of what has historically been known as “preventive law enforcement” — activities such as putting more patrols in high-crime areas, youth mentoring programs, and programs that inform people and businesses about how to be harder targets. However, the idea of police going out and engaging people as though a crime has been committed when none has conjures images of cities on fire.

First, such an approach is predicated on watching everybody – at least everybody’s online activity – all the time. Which seems both dystopian and of limited effectiveness. [What percentage of people who post on FB that they want to shoot someone are likely to do so?] What about the judicial oversight and related protections? When is a warrant issued to surveil or arrest a person? The warrant is issued based on something an artificial intelligence system already flagged, meaning a government entity is watching everybody’s behavior on a constant fishing expedition. I’m not fond of that idea at all.

Second, we aren’t nearly as good at forecasting the future as we think. Violent crimes are rare and often spontaneous events, which puts them in the classes of behavior we are particularly bad at predicting. And we haven’t eliminated the trade-off between type I and type II errors. Imagine there is a question about whether individual X should be detained based on what the AI spit out. X either was or wasn’t going to commit a crime. We can imagine a four-way matrix in which two of the outcomes are correct (i.e. 1.) X was detained and was going to commit a crime; 2.) X wasn’t detained and wasn’t going to commit a crime.) However, since we can’t know the future [like, at all], the potential remains for mis-estimating whether X was going to commit a crime. So, we have two potential errors (i.e. 1.) X wasn’t detained but was going to commit a crime [and thus did]; 2.) X was detained but wasn’t going to commit a crime [wrongful detention].) Suppose we want to minimize the first error because any violent crime is unacceptable. We go out and shake down more high-risk individuals. While we succeed in preventing some crimes, we also end up with more wrongful detentions. Our legal system’s evidentiary requirements suggest that, as a society, we are averse to wrongful disruption of a person’s freedom. Hence, while a “preponderance of evidence” is sufficient where one might only lose money in a civil case, if one might be imprisoned, the standard becomes “beyond a reasonable doubt.” Wrongfully detaining an individual when a crime was committed may be sad, but doing it when there is only a suspicion that a crime might be committed is tragic.
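To make that trade-off concrete, here is a minimal sketch (with invented numbers; nothing here comes from Smith’s book) of an imperfect risk score applied to a large population. Lowering the detention threshold catches more would-be offenders only at the price of wrongly detaining more innocent people drawn from the vastly larger non-offending population:

# Hypothetical illustration only: the base rate, score distributions, and
# thresholds below are assumptions made up for this sketch.
import random

random.seed(0)

POPULATION = 1_000_000
BASE_RATE = 0.001  # assume 0.1% of people would actually commit a violent crime

def risk_score(is_offender: bool) -> float:
    # Offenders tend to score higher, but the distributions overlap,
    # so some error is unavoidable no matter where the threshold sits.
    return random.gauss(0.7 if is_offender else 0.3, 0.15)

people = [random.random() < BASE_RATE for _ in range(POPULATION)]
scores = [risk_score(p) for p in people]

for threshold in (0.9, 0.7, 0.5):
    detained = [s >= threshold for s in scores]
    wrongful = sum(d and not p for d, p in zip(detained, people))
    missed = sum(p and not d for d, p in zip(detained, people))
    print(f"threshold {threshold}: wrongful detentions = {wrongful:,}, "
          f"offenders missed = {missed:,}")

The exact numbers don’t matter; the point is that pushing the threshold down to catch more of the rare true positives inflates the count of wrongful detentions, which is precisely the trade-off described above.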

Of course, under present standards one can’t detain a person for very long. So you let them go, and maybe they do the crime – whether or not they intended to in the first place. (Ever heard someone say, “if you’re going to treat me like _______, I’m going to act like _______”? I’ll admit it’s a bit far-fetched, but if, for every million people detained, the system spurs even one crime that otherwise wasn’t going to happen, is that acceptable?) Alternatively, one could place surveillance on the individual, in which case one is essentially living in Stalin’s Soviet Union. Congratulations. It seems to me this approach offers either huge costs for a marginal gain, or you go full dystopia and knock out crime at a horrifying cost. Neither way seems appealing, but – then again – I am not willing to pay any price to keep anything bad from ever happening to anyone.

I found this book to have some fascinating ideas and to spur my thinking on subjects I might not otherwise have considered. While there was a significant bit that I found unsavory, I also discovered some ideas that were intriguing and worth pursuing. I would highly recommend this book for those interested in issues of technology and policy.

View all my reviews

5 Bizarre Moral Dilemmas for Your Kids to Worry Over

5.) Can “innocent until proven guilty” survive the next generation of predictive models?

I started thinking about this post as I was reading Dean Haycock’s book Murderous Minds, which is about the neuroscience of psychopathy. In that book, the author evokes “The Minority Report,” a Philip K. Dick story turned into a Tom Cruise movie about a police agency that uses three individuals who can see the future in order to prevent violent crimes before they happen. Haycock isn’t suggesting that precognition will ever be a tool to predict crime, but what if a combination of genetics, epigenetics, brain imaging, and other technology reached the point where the tendency toward violent psychopathy (not redundant; most psychopaths function fine in society and don’t commit crimes) could be predicted with a high degree of accuracy? [Note: unlike the Tom Cruise movie, no one is suggesting all violent crime could be anticipated, because a lot of it is committed by people with no risk factors whatsoever.] One is likely to first go to the old refrain (Blackstone’s Formulation) that it’s better that ten guilty men escape justice than that one innocent man be punished. Now imagine a loved one being killed by a person who was known to have a 99% likelihood of committing a violent crime.
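As a back-of-the-envelope illustration (the numbers below are invented and are not from Haycock’s book), even a screen that is right 99% of the time about a rare trait will flag far more innocent people than future offenders:

# Hypothetical numbers for illustration only.
population = 1_000_000
prevalence = 0.001       # assume 1 in 1,000 people would commit a violent crime
sensitivity = 0.99       # assume 99% of future offenders are flagged
specificity = 0.99       # assume 99% of non-offenders are correctly cleared

offenders = population * prevalence
non_offenders = population - offenders

true_positives = offenders * sensitivity
false_positives = non_offenders * (1 - specificity)

print(f"future offenders flagged: {true_positives:,.0f}")
print(f"innocent people flagged:  {false_positives:,.0f}")
print(f"P(offender | flagged) = {true_positives / (true_positives + false_positives):.1%}")

Under those assumptions, roughly ten innocent people get flagged for every actual future offender, which sits uneasily beside Blackstone’s Formulation, at least if being flagged carries real consequences.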

Of course, one doesn’t have to lock the high-risk individuals away in prison. What about laws forcing one to take either non-invasive or invasive actions (from meditation retreats to genetic editing) to reduce one’s risk factors? That’s still a presumption of guilt based on a model that — given the vagaries of the human condition — could never be perfectly accurate.

 

4.) What does “trusted news source” mean in a world in which media outlets tailor their messages to support confirmation bias and avoid ugly cognitive dissonance? (i.e. to give viewers the warm-fuzzy [read: superior] feeling that keeps them watching rather than the messy, uneasy feelings that make them prefer to bury their heads in the sand and ignore any realities that conflict with their beliefs.) Arguably, this isn’t so much a problem for the next generation as for the present one. The aforementioned sci-fi legend, Philip K. Dick, addressed the idea of media manipulation in his stories as far back as the 1950s. However, it’s a problem that could get much worse as computers get more sophisticated at targeting individuals with messages tailored to their personal beliefs and past experiences. What if it goes past tweaking the message to encourage readership and into manipulating the reader for more nefarious ends? I started to think about this when I got the iPhone news feed, which is full of provocative headlines designed to make one click; if one doesn’t click, one will probably come away with a completely false understanding of the story. As an example, I recently saw a headline to the effect of “AI can predict your death with 95% accuracy.” It turns out that it can only make this prediction after one has shown up in an emergency room and had one’s vital statistics taken and recorded. [Not to mention that “95% accuracy” is completely meaningless — in what time frame: minute of death, day, year, decade? I can come up with the century of death with 95% accuracy myself, given a large enough group.]

 

3.) When is it acceptable to shut down a self-aware Artificial Intelligence (AI), and — more importantly — will it let you?  This is the most obvious and straightforward of the issues in this post. When is something that not only thinks but is aware of its thoughts considered equivalent to a human being for moral purposes, if ever?

 

2.) When is invisible surveillance acceptable / preferable? This idea came from a talk I heard by a Department of Homeland Security employee, back when I worked for Georgia Tech. He told us that the goal is eventually to get rid of the security screening checkpoints at the airport and have technology that would screen one as one walked down a corridor toward one’s gate. At first this sounds cool and awesome. No taking belts and shoes off. No running bags through metal detectors. No having to pitch your water bottle. No lines. No dropping your laptop because you’re precariously balancing multiple plastic bins and your carry-on luggage. [I don’t know if they would tackle one to the ground for having a toenail clipper in one’s bag or not, but — on the whole — this scheme seems awesome.] But then you realize that you’re being scanned to the most minute detail without your awareness.

One also has to consider the apathy effect. If one can make an activity painless, people stop being cognizant of it. Consider the realm of taxation. If you’re pulling a well-defined chunk of pay out of people’s income, they keep their eye on how much you’re taking. If you can bury that tax (e.g. in the price of goods or services), people become far less likely to recognize rate changes or the like.

 

1.) If society can reduce pedophilic sexual abuse by allowing the production and dissemination of virtual reality child pornography (computer-generated imagery only, no live models used; think computer games), should we? This idea is discussed in Jesse Bering’s book, Perv. It’s not a completely hypothetical question. There is some scholarly evidence that such computer-made pornography can assuage some pedophiles’ urges. However, the gut reaction of many [probably most] people is “hell no!” It’s a prime example of emotion trumping reason. If you can reduce the amount of abuse by even a marginal amount, shouldn’t you do so, given the lack of real costs (i.e. presuming the cost of the material would be paid by the viewer, the only real cost to the public would be the icky feeling of knowing that such material exists in the world)?

BOOK REVIEW: Head in the Game by Brandon Sneed

Head in the Game: The Mental Engineering of the World’s Greatest Athletes by Brandon Sneed
My rating: 5 of 5 stars

Amazon page

 

There are many factors that influence whether an athlete can reach an elite level. Physical factors such as VO2 max (maximum oxygen consumption) and musculature have long been at the fore in the minds of coaches and trainers, but they’ve never told the full story. There are athletes who have the muscles, lungs, and general physiology to dominate their sports who fall apart under pressure. One also sees the occasional athlete who is consistently good even though he seems puny by comparison to his peers. It used to be that mental performance was considered an endowed X-factor: you either had it or you didn’t. Coaches didn’t know how to coach for issues of the mind and often exacerbated problems with old school attitudes and approaches.

We’ve now entered a new era in which a bevy of techniques and technologies are being exploited to strengthen the mind and improve psychological deficiencies, just as gyms have always been used to build the body and combat physical deficiencies. These range from techniques of meditation and visualization that have been known to yogis and Buddhists for centuries to advanced technologies that have only become available in recent decades and which are constantly improving and being made obsolete. Sneed examines the gamut of these approaches as they are applied to improving performance in sports: from the meditative or therapeutic to the electronic or pharmacological. One no longer need give up on athletes who are great at their best, but who get the yips at the worst possible times. The performance of mediocre athletes can be improved and that of the best can be made more consistent.

Sneed has a unique qualification to write this book. He counts himself among the athletes who couldn’t reach their potential because of inconsistency rooted in psychological challenges. His willingness to be forthright about his own problems makes the book more engaging. His own stories are thrown into the mix with those of athletes from football, basketball, soccer, baseball, adventure sports, and mixed martial arts (MMA.)

The book’s 19 chapters are divided among four parts. The first part lays the groundwork, helping the reader understand the rudiments of how the brain works, doesn’t work, or works too hard for a competitor’s own good. A central theme is that the ability to analyze and train through the lens of neuroscience has removed some of the stigma that has always been attached to psychological issues in sports (not to mention the days when they were written off as weakness.) Much of Part I’s six chapters deals with assessing the athlete’s baseline mental performance. The last chapter (Ch. 6) covers a range of long-standing topics as they’ve been reevaluated through modern scientific research. These include religion, faith, superstition, meditation, visualization, and the immortal question of whether sex is good or bad for athletic performance.

The second part consists of five chapters taking on one fundamental truth: mind and body are not two disparate and independent entities. This section starts at the most logical point: breath. Practitioners of yoga (i.e. pranayama) and chi gong have known for centuries that breath can be used to influence one’s emotional state and level of mental clarity. Sneed evaluates the technology that is being used to help athletes master the same age-old lessons. Having laid the groundwork through breath, the section advances into biofeedback technology. There are two chapters in the book that deal with pharmacological approaches. One is in this section and it deals with legal (at least in some locales) substances such as caffeine, alcohol, nicotine, nootropics (alleged mind enhancing drugs), and marijuana. (The other is in the final part and it deals with hallucinogens.) There are also a couple of chapters on technologies used to produce or enhance desired mental states.

For most readers, the third part will be seen as the heart of the book. Having considered how to evaluate an athlete’s mental performance (Part I) and how to influence mind states by way of the body (Part II), this part explores the range of technologies that are used to exercise the mind in a manner analogous to working out the body. These technologies focus on a range of areas, including improving the nervous system’s ability to take in information, process that information, and respond appropriately. Much of this part focuses on video games, albeit video games using state-of-the-art virtual reality and customized to improving athletic performance. Some of the games are used to train general cognitive performance (e.g. Ch. 13), but others are specifically tailored to the game in question (i.e. Ch. 14.) Just as simulators are used in aviation, part of the advantage of these games is the ability to put players in progressively more challenging conditions.

The last part of the book was the most interesting to me, personally. [It’s also the part that will be the most relevant and readable a few years down the road, because it isn’t as technology-centric as most of the book—especially Part III—is.] It’s entitled “The Spirit,” and it explores X-factors in performance, not under the assumption that these are endowments, but rather under the assumption that they are trainable. The part has an important introduction that presents the research about how “soft” factors like gratitude play into outlook and performance. Then there are the part’s three chapters. The first describes an experiment involving taking elite athletes into physically arduous conditions of the kind normally experienced by military special operations forces in survival training. The second tells the story of MMA fighter Kyle Kingsbury’s use of hallucinogenic substances (most intriguingly, ayahuasca, a powerful drug long used by Peruvian shamans.) Finally, the last chapter deals with sensory deprivation—a technology some will associate with the movie “Altered States” but which many athletes swear by.

The book has an extensive section of notes and sources organized by chapter. There are no graphics.

I enjoyed this book and found it informative. There are a number of books that explore the techniques and technologies of optimal mental performance, but this one develops a niche by focusing on the realm of sports and on some of the technologies that are only available with the kind of deep pockets seen in professional sports. The book is heavily weighted toward the technology part of the equation, which is both good and bad. If you’re reading it now (2017), that’s great, because you’re getting an up-to-date discussion of the subject from the perspective of entities that are awash in money for tech. The downside is that this book won’t age well, at least not as well as it would if there were more emphasis on approaches that aren’t based on cutting-edge technology.

I’d recommend this book if you are interested in optimal human performance, and if you have an interest in sports, all the better.

View all my reviews