Tag Archives: Risk

Clear thinking needed in the war on quackery

Predictions about the likely judgements of posterity are always grounded on the assumption that there will be a posterity, peopled with enough wise people to articulate a judgement that amounts to more than the idle contemplation of rainbows. So on this basis, let’s assume that there is a president to follow Trump, and that this wise sufficiency has recovered its wits enough to understand the difference between a narcissistic vulgarian and the evolved political culture that enabled his ascendancy, albeit temporarily, to the position of most powerful person on the planet.

As more than just an aside on the proper definition of power, this still potent posterity will acknowledge that the greatest prince is not he who sits at the centre of the widest error. Any sustainable definition of power cannot dwell too long in reflecting on history’s sad parade of grubby psychopaths and sweaty conmen who have humbled nations with their appetites and capacity to wreak havoc, while blighting the hopes of the very multitudes that have been beguiled into supporting them.

Real power lies in the patient dedication to building good that will last, and still more in the gift of nurturing cultures that will enable that good to flourish and endure.

On that definition, Trump’s verdict before posterity would be worse than unfavourable if determined solely on what has been achieved in the first six weeks of his presidency: “worse” simply because he has grasped at every opportunity to posture as a dissembling bully and pantomime villain, rendering risible any articulation of a cultural phenomenon – society, economics, style, reasoning – to which the term “Trumpian” might be applied as a descriptor. In fact, the man’s preternatural promotion of style over substance would render the essence of any Trumpian belief system as being far more concerned with the manipulation of perception than with the discernment of reality.

At the very heart of the Trumpian con is his promise of rendering great again something that was already functioning credibly, before setting out firmly on a course of systemic degradation and desperate brinksmanship, orchestrated with bullying blusters, rants, and whines. The man seen simply as a man, as distinct from a wider belief system and enabling culture, is essentially a clown confected by that culture as a joke upon itself: in short, the deification of Everyman as Loser.

The real pantomime will be the spectacle of watching those currently colluding with the flimflammer as they feel inspired, in time, to distance themselves before posterity renders its verdict so plainly that everyone gets the point. And those of us who adhere to notions of humanity’s continuing enhancement can bolster our cognitive and political systems against the recurrence of demagogic quackery.

Choose: your country back or the future now

It has been a summer of unworthy frenzies, with the forces of conservatism and pessimism crying to have their country back or made great again. On the other side, characterised by the throwbacks as the champions of “Project Fear”, were those who deny that mankind is on a doomed course. More positively, more thoughtfully: they remain adherents to a belief in the powers of education, clear thinking and focused choices. Of moving forward, and not back to our future.

One of the more frequently referenced books in recent weeks has been Progress: Ten Reasons to Look Forward to the Future, by Cato Institute senior fellow Johan Norberg. Favourably cited by Simon Jenkins in The Guardian, and by an editorial in The Economist, Norberg’s book sets out the case for how much, and how rapidly, the world is improving – at least from the perspective of its human masters; how much the case for atavistic pessimism is fed by ignorance and greed (much of it encouraged by a complicit media); and most inspiringly, how the brightness of our future is defined by the potential in our accumulating intelligence. The Economist piece concludes:

“This book is a blast of good sense. The main reason why things tend to get better is that knowledge is cumulative and easily shared. As Mr Norberg puts it, ‘The most important resource is the human brain…which is pleasantly reproducible.’”

By timely coincidence, intelligence both human and artificial has weighed in over the past week with positive expectations for our future. An article in MIT Technology Review is entitled “AI Wants to Be Your Bro, Not Your Foe” – possibly unsettling for those who might see either alternative as equally unsavoury, but its heart is in the right place. It reports on a Stanford University study on the social and economic implications of artificial intelligence, and on the newly launched Center for Human-Compatible AI at UC Berkeley. Both initiatives are cognisant of the dangers of enhanced intelligence, but inspired by the vast potential in applying it properly.

For a shot of pure-grade optimism to finish: among the five inspiring applications of exponential technologies that lit up the recently concluded Singularity University Global Summit was one called “udexter”. Artificial intelligence is being deployed to address the challenges of unemployment arising from . . . the advances of artificial intelligence. It promises to counterbalance the decline in “meaningless jobs” by unleashing the world’s creativity.

Emotion or reason: amygdala or frontal cortex?

Nobody reading the news can fail to notice the stresses being imposed upon rational thinking. On the day that the American Republican Convention concludes, a torrent of articles spews forth from the assembled journalists, of which this piece in Salon is one of the better examples. All are variations on pretty much a single theme: if there is a collective modern mind, are we losing it?

The Salon piece references a second article, described as “truly terrifying”: an electric and highly quotable stream of articulate acuity by British journalist Laurie Penny in The Guardian. Penny reports another journalist breaking down in tears and exclaiming: “ . . . there’s so much hate . . . What is happening to this country”. Her diagnosis, one of her better lines among the many good ones, is that what we have is the natural result when “weaponised insincerity is applied to structured ignorance”.

Penny’s subject is the brittle cynicism of the heartless Twittersphere, and the manipulation of the fearful, angry and dispossessed by people who must know better but don’t care. Her immediate context is the highly charged atmosphere of an American convention bear pit, never the most salubrious reflection of humanity in its cognitive finery. But she could as well have referenced the bluff demagogueries of politicians the world over, all contemptibly cashing in on terror, ignorance and want in the service of their grubby whimsies and self-imagined entitlements.

We must be wary of assuming too much, too far, and too soon about a future of SuperIntelligence or, more ludicrously, the dawning of a new age of convergent intelligence where the power of the human brain is augmented by so-called Artificial General Intelligence. Given the power of reason rendered truly Super by a necessarily reflective consciousness, we might expect any AGI worth its salt to ask of us:

With which human intelligence do you propose that we converge? Are we to be amazed at the brute twitchings of the human amygdala, driven by its primal urges and perpetually lurking tigers? Or is the frontal cortex that lifted you clear of the swamp of all those base appetites now capable at last of getting at higher truths without deception? If neuroscience is understanding more about the effects of aggression on the brain, can we relay this knowledge to the Twitter trolls, the market grifters, and all those venal politicians?

Fearing not fear, but its exploitation

How has politics affected humanity’s power to conquer fear? On the centenary day of arguably the greatest failure of political will and imagination that British politics has ever known, we can speculate on lessons learned (if any) and project another 100 years into the future and ask if a breakthrough in SuperIntelligence is going to make a difference to the way in which humanity resolves its conflicts.

We look back to 1 July 1916, a day on which the British Army suffered some 20,000 dead at the start of the Battle of the Somme, and wonder if SuperAI might have made a difference. What if it had been available . . . and deployed in the benefit/risk analysis of deciding yea or nay to send the flower of a generation into the teeth of the machine guns?

Would there have been articles in the newspapers in the years preceding the “war to end all wars”, speculating on the possible dangers of letting the SAI genie out of its bottle? Would those features have been fundamentally different from the articles that have appeared in the last few days: this one in the Huffington Post speculating on AI and Geopolitics, or this one on the TechRepublic Innovation website, canvassing the views of several AI experts on the Microsoft CEO’s recent proposal of “10 new rules for Artificial Intelligence”? Implicit in all these speculations is the warning that we must be careful not to let loose a monster that might dislodge us from the Eden we have made of our little planet.

Another recent piece, “The Neuropsychological Impact of Fear-Based Politics” in Psychology Today, references the distinct cognitive systems that inspired Daniel Kahneman’s famous book, “Thinking, Fast and Slow”. Humans really are “in two minds”: one driven by instinct, fear and snap judgements; the other slower, more deliberative and a more recent development in our cognitive history.

A behaviour as remarkable for its currency among the political classes as for its absence from the deliberative outputs of “intelligent” machines is the deceptive pandering to fear in the face of wiser counsel from people who actually know what they are talking about. The rewards for fear and ignorance were dreadful 100 years ago, and a happier future will depend as much upon our ability to tame our own base natures as on the admitted necessity of instilling ethics and empathy in SAI.

AI and human values: but which humans?

Can artificial intelligence develop without bias – that is, evolve without a tendency to favour any one set of values over another? An Op-Ed piece in the New York Times makes its view clear enough in its choice of title: “Artificial Intelligence’s White Guy Problem”. From face recognition software that mistakes black people for gorillas to predictive software that operates on assumptions about recidivism that would provoke outrage if articulated by human agents, these egregious mistakes have at least done us all a service in alerting us to the fact that algorithms are not by nature free of moral considerations. They are the distinct outputs of the people who craft them, reflecting the choices and betraying the values of their creators.

Bias, however unintentional, proceeds from privileged perspectives that go beyond assumptions of colour and race. The Times Op-Ed instances the gender-driven discrimination revealed in last year’s academic study of Google job advertisements, which found that men were more likely than women to be shown adverts for the more highly paid jobs. Elsewhere, there was the instance of political bias reported in the Financial Times, concerning Facebook’s alleged bias in favouring liberal over conservative views. That led to a story in Salon asking whether “Google results could change an election.”

Work is being done on what a thoughtful blog on the London School of Economics website refers to as “algorithmic fairness – a systematic way to formalise hidden assumptions about fairness and justice so that we can evaluate an algorithm for how well it complies with these assumptions.” But vigilant circumspection must ensure that humanity never leaves it to algorithms to protect human values. In spinning the wheels of privilege, it may seem to the (mostly) wealthy, white, older, male population that is driving AI’s development that humanity’s existential risks lie some way off in the future. The reality for the less privileged is that the impacts of implicit AI bias are being felt now.
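The LSE blog does not supply code, but as a rough illustration of what formalising one such assumption might look like, the sketch below measures a “demographic parity” gap – the difference between groups in how often a model hands out its favourable outcome. The function, data and group labels here are hypothetical, and real fairness evaluation involves far more than this single metric.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    totals = defaultdict(int)      # number of cases seen per group
    positives = defaultdict(int)   # number of favourable (1) predictions per group
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy data: group "A" receives the favourable prediction 3 times out of 4,
# group "B" only once out of 4, so the gap is 0.5 (50 percentage points).
gap, rates = demographic_parity_gap(
    predictions=[1, 0, 1, 1, 0, 0, 1, 0],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
print(rates, gap)
```

A check like this makes a hidden assumption visible and measurable; it does not, of course, settle which notion of fairness the algorithm ought to satisfy in the first place.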

Brexit and brains: blink and miss the logic

Britain’s decision whether to remain in or exit from the European Union has surprised us all with the extent of the passions it has stirred up. In the context of Brains and Minds and the exercise of applied intelligence, what is interesting is the extent to which the dialogue between “Remain” and “Brexit”, as exemplified in last night’s BBC extravaganza of a “debate”, betrayed our species’ trademark rush to judgement in the blink of an eye, the exercise of a wide range of cognitive biases in defence of unabashed passion, and the disrespectful trashing of the motives of those who disagree.

Neither in last night’s debate nor in the dialogue that has dominated the UK’s attention for weeks now has there been any shortage of good questions and answers on both sides. Neither side has had a monopoly on reason or passion. Nor have we been short on “mongering”: one side charges the other with spreading hate on immigration; the charge in return is of running Project Fear on the economy. The debate’s closing statements featured a defence of expertise from Remain, followed by a visceral plea from a politician who has never met a rabble he could not rouse, deploying an imperfect analogy in defence of a Brexit case that was holed before the debate began by one of its own advocates dismissing the claims of experts.

What does it mean that a former education secretary with a reputation for the deft application of intellect should scorn people with expertise? With all the clamouring for good evidence and information about a question as big as the challenge of Brexit, and given our slow ascent from the swamp to the summits of human achievement, do we not want to take note of the people who have made a living from considering evidence and thinking things through on some relevant topic, earning their stripes and tee shirts along the way?

People whose minds have not yet been made up can feel what they like in responding to populist spasms, but they cannot fail to recognise in Remain the greater benefits of sound circumspection. Appeals merely to hope and faith and “getting our country back” are modes of thinking that are declining in the face of reason, science, and a belief in the need to work together for a better world.

Control is certainly a human problem

One of the highlights of the Code Conference that concludes today in California has been the concern expressed by Bill Gates that, although these are exciting times for innovation and technological accomplishment, there are at least two considerable challenges. One is the threat that evolving AI represents to a wide spectrum of human jobs. The second is summed up in the sub-heading of the article linked to above: “The real challenge is ensuring humans stay in control.”

Is this really so? Maybe humans should retain control, but it is more than just a question for parlour, pub, or philosophy colloquium. A reflexive “Yes, of course they should,” is all too easy, but a slower and reflective “Yes, although . . .” would give us a chance to consider the assumptions we live with, and the direction that Life on Earth is taking, possibly independently of whatever it is we think we want.

How are we doing with the control we’ve got? In the minuscule fraction of 1% of the history of Planet Earth during which humanity has been the governor, how have we done? And who are “we”? By humans do we mean a thoughtful, informed, judicious and circumspect collective of incorruptible and accountable grown-ups? Or are we talking about the United Nations? Or a chaotic inferno of babbling rabbles? If, alternatively, we are talking about smaller, indeed singular subsets of humanity, are we happy with the definitions of power that history has distinguished among the self-entitled 1%, or our kings, priests, plutocrats, and other monomaniacal psychopaths?

Must the limit of our ambition for Life on Earth be set by the limitations of our own humanity, or do we ourselves adapt in pursuit of a higher intelligence? Is it better that we retain control and fail, or can we give full rein to an emerging intelligence that assumes more executive control as we evolve a higher wisdom? It may be that “control”, as the watchword for the 100 millennia during which Homo sapiens has struggled for planetary dominance, is exchanged for “co-operation” as the key to a happier future for humanity and for the planet. And to the extent that control retains a role, it might best be applied to our own self-destructive impulses.

AI v climate change: The Race Against Time

In a blog post with some pertinent links, author and “Occasional CEO” Eric Schultz wonders which will come first. Will the offspring of capitalism wedded to human nature lead to the extinction of our species, largely through the agency of climate change? Or will the exponential capabilities of Artificial Intelligence – ironically itself powered by the appetites of capitalism – see the conception and implementation of solutions to the challenges of climate change before we can incinerate ourselves? Put baldly, can advances in renewable energies, CO2-absorbing moss and desalination technologies counterbalance the excesses of consumerism, the profit motive and cognitive denial? Can AI make up for the deficits in human intelligence?

It makes for an amusing read, and does no harm to its instructional impact, that this blog turns deliberately on two key target years. In 2040, according to Nick Bostrom’s famous TED Talk of last year, we will hit a tipping point in the contest between artificial and human intelligence, beyond which moment – and this is the thrust of Bostrom’s message – the capabilities of AI, as against human brainpower, vanish off over the horizon. Within a few pulses of the eternal mind, our problems are all over, including climate change and possibly even the potential dangers of rampant AI.

Critically, the second target date in the Schultz blog is 2041, the year that serves as the focal point of the speculations in The Collapse of Western Civilization, a work of “science-based fiction” examining a species slow-roasting itself into oblivion. Its authors are clearly not the optimists that Bostrom is, and that Schultz may be.

Common to blog post, book, and TED Talk is the idea of mitigation. Through complex evolutions of nuance, increment, compromise and inspiration, these dates could move forward or backward in time. All the while, the crooked timber of humanity will muddle forward in denial about its responsibility for resolving the tragedy of the commons, pathetically hoping that a smart computer will solve its problem for it.

Re-purposing old brain pathways to new ends

Against the vast sweep of evolutionary time, the mere 8,000 or so human generations through which our species has evolved will not have seen a huge change in the interplay of physics, chemistry and biology that distinguishes the functioning of the human brain. Our first old grandpa, standing on his ancient plain, will have gazed at the moon with something pretty close to the neural equipment we now possess. His first thoughts will have been for the importance of moonlight in betraying the predator or illuminating the water hole.

Later on, in appreciating the moon’s significance, his anthropomorphising imagination would divine a personality to whom appropriate propitiations might be directed to ensure a healthy crop – or so he would imagine. It would be another 7,998 generations before that same brain would figure out how to put us on the very moon that instigated all this lunar thinking in the first place.

Brain scientists at Carnegie Mellon University have moved us closer to grasping how it is that old brains can surpass old dogs in learning new tricks. Enhanced scanning technologies have enabled researchers at the university’s Scientific Imaging and Brain Research Center to determine how specific areas of the brain, once attuned to meeting the challenges of survival, can now be identified as the same areas that grasp the principles of advanced physics. It appears, for example, that the same neural systems “that process rhythmic periodicity when hearing a horse gallop also support the understanding of wave concepts” – wavelengths, sound and radio waves – in physics. It is thought that such knowledge will improve the teaching of science.

The CMU team are by no means the only scientists pursuing this line of enquiry. A selection of stories gleaned from just a few days of monitoring the brain science media reveals one study that “finds where you lost your train of thought”; another that believes it can pinpoint where personality resides in the brain (that would be the frontoparietal network); and yet another that can distinguish you by reading your brainwaves with no less than 100% accuracy. And this is all before we get into the vast phantasmagoria of scientific explorations of the human brain on magic mushrooms, ecstasy and LSD. And we will leave music for another day.

Reaction to risk: rationality or Rapture?

A thoughtful review on the website of the Institute for Ethics & Emerging Technologies (IEET) considers a recently published book by Phil Torres: “The End: What Science and Religion Tell Us About the Apocalypse”. A useful distinction is made between religious and secular eschatology. The latter considers threats to the planet and to humanity from a rational and evidence-based perspective, seeing phenomena such as nuclear war, bio-engineered pandemics, and any malign Superintelligence as things to be avoided.

Religious eschatology, on the other hand, might involve all, some or none of these risks, alongside other threats great and small. But the key lies not in the actual risks, which are seen only as means to a greater end: it is the End Times itself that is the point of “God’s coming judgement and destruction”. Evidence-based rationality features somewhat less in deliberations on this side of the nut-house wall. What is made clear is that, facing the prospect of humanity’s extinction, the cry of those anticipating their delivery into Eternal Life is “Bring it On”. These are not the people we want anywhere close to the nuclear codes, or indeed to any seats of influence.

Responsible citizens of Spaceship Earth will want to keep in touch with what Torres calls the secular eschatologists, working at places like the Centre for the Study of Existential Risk at the University of Cambridge; Nick Bostrom’s Future of Humanity Institute at the University of Oxford; the Future of Life Institute in Cambridge, Massachusetts; and the geographically decentralized Global Catastrophic Risk Institute. Their mission is to safeguard the prospect that a second, non-cataclysmic Big Bang might enable a benign Superintelligence, and not a stellar void of wasted talent, lost opportunities, and silence.