
Fearing not fear, but its exploitation

How has politics affected humanity’s power to conquer fear? On the centenary of arguably the greatest failure of political will and imagination that British politics has ever known, we can speculate on lessons learned (if any), project another 100 years into the future, and ask whether a breakthrough in SuperIntelligence will make a difference to the way humanity resolves its conflicts.

We look back to 1 July 1916, a day on which the British Army suffered nearly 20,000 dead at the start of the Battle of the Somme, and wonder if SuperAI might have made a difference. What if it had been available . . . and deployed in the benefit/risk analysis of deciding yea or nay to sending the flower of a generation into the teeth of the machine guns?

Would there have been articles in the newspapers in the years preceding the “war to end all wars”, speculating on the possible dangers of letting the SAI genie out of its bottle? Would those features have been fundamentally different from the articles that have appeared in the last few days: this one in the Huffington Post speculating on AI and geopolitics, or this one on the TechRepublic Innovation website, canvassing the views of several AI experts on the Microsoft CEO’s recent proposal of “10 new rules for Artificial Intelligence”? Implicit in all these speculations is the warning that we must be careful not to let loose a monster that might dislodge us from the Eden we have made of our little planet.

Another recent piece, “The Neuropsychological Impact of Fear-Based Politics” in Psychology Today, references the distinct cognitive systems that inspired Daniel Kahneman’s famous book, “Thinking, Fast and Slow”. Humans really are “in two minds”: one driven by instinct, fear and snap judgements; the other slower, more deliberative and a more recent development in our cognitive history.

One behaviour is as remarkable for its currency among the political classes as for its absence from the deliberative outputs of “intelligent” machines: the deceptive pandering to fear in the face of wiser counsel from people who actually know what they are talking about. The rewards for fear and ignorance were dreadful 100 years ago, and a happier future will depend as much upon our ability to tame our own base natures as on the admitted necessity of instilling ethics and empathy in SAI.

