Thinking about thinking, and multi-coloured hats

A sudden immersion in the world of end-of-term report cards brought me face to face last week with a note on my granddaughter’s ability to “empathise with characters using red hat thinking”. Ignoring my own pedantic grimace at the syntactical implication that the application of “red hat thinking” might be a curriculum objective, I passed quickly through the bemusement that the thoughts of Edward de Bono have so passed into the vernacular of North London primary schools that they are referenced in lower case, and moved on to engage in a little blue-sky thinking of my own. How would a Super Artificial Intelligence (SAI) look wearing de Bono’s Six Thinking Hats?

A flick through the colour descriptions suggests that the thinking being considered has little to do with neurological processes or activities of mind, and is employed more in the colloquial sense of applying “new ways of thinking” that are lateral, or outside the box, or indeed even blue-sky. They betoken attitudes and at best establish distinctions that may be useful in achieving a result, getting the sale, appreciating an alternative point of view, or changing a mind.

So, in summary, we focus on the facts (white), think positive (yellow), pick holes (black), empathise (red), blue-sky (green?) and then consolidate the process (blue). While the metaphor gets a bit unwieldy towards the end – perhaps the blue sky should really be blue, and the process of fertilising and growing the end result should be green – it still leaves the question to play with: what would SAI do with these hats? After all, is it reasonable to suppose that if human intelligence evolved through a consciousness that manifested these attitudes, a machine intelligence might evolve in a similar way? And if it did, how would it get on with all this headgear?
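Purely as a playful sketch – nothing that de Bono or any real AI system specifies, and every name below is illustrative – the hats reduce to a simple colour-to-attitude mapping that a hypothetical machine reasoner might cycle through:

```python
from enum import Enum

class Hat(Enum):
    """De Bono's six hats recast as modes for a hypothetical reasoner."""
    WHITE = "facts"        # focus on data and information
    YELLOW = "optimism"    # look for benefits and value
    BLACK = "judgement"    # pick holes; spot risks and flaws
    RED = "empathy"        # feelings, hunches, intuition
    GREEN = "creativity"   # lateral, blue-sky alternatives
    BLUE = "process"       # manage and consolidate the thinking

def review(idea: str) -> dict:
    """Toy illustration: run an idea past each hat in turn."""
    return {hat.name.lower(): f"{idea!r} considered via {hat.value}"
            for hat in Hat}

if __name__ == "__main__":
    for hat, note in review("teach red hat thinking in primary school").items():
        print(f"{hat:>6}: {note}")
```

The point of the toy is only that the hats are attitudes rather than mechanisms: a machine could be told to wear them, but nothing in the wearing explains how the thinking actually gets done.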

Bearing in mind the extent to which AI starts from a programmed foundation for which no hats are required, in any sense, and evolves into SAI through an emerging capacity to enhance its own potency, it’s hard to see how any of these hats will matter except insofar as a programmed requirement to get along with humans is retained. The binary distinction of white and black would probably keep those hats in play. But in the link above, we note that the black hat (Judgement) is described as “probably the most powerful and useful . . . but a problem if overused.”

Emotion or reason: amygdala or frontal cortex?

Nobody reading the news can fail to notice the stresses being imposed upon rational thinking. On the day that the American Republican Convention concludes, a torrent of articles spews forth from the assembled journalists, of which this piece in Salon is one of the better ones. All are pretty much variations on a single theme: if there is a collective modern mind, are we losing it?

The Salon piece references a second article, described as “truly terrifying”, which is also an electric and highly quotable stream of articulate acuity by British journalist Laurie Penny in The Guardian. In reporting another journalist breaking down in tears and exclaiming: “. . . there’s so much hate . . . What is happening to this country?”, Penny’s diagnosis, and one of her better lines among the many good ones, is that what we have is the natural result when “weaponised insincerity is applied to structured ignorance”.

Penny’s context is the brittle cynicism of the heartless Twittersphere, and the manipulation of the fearful, angry and dispossessed by people who must know better but don’t care. Her immediate context is the highly charged atmosphere of an American convention bear pit, never the most salubrious reflection of humanity in its cognitive finery. But she could as well have referenced the bluff demagogueries of politicians the world over, all contemptibly cashing in on terror, ignorance and want in the service of their grubby whimsies and self-imagined entitlements.

We must be wary of assuming too much, too far, and too soon about a future of SuperIntelligence or, more ludicrously, the dawning of a new age of convergent intelligence where the power of the human brain is augmented by so-called Artificial General Intelligence. Given the power of reason rendered truly Super by a necessarily reflective consciousness, we might expect any AGI worth its salt to ask of us:

With which human intelligence do you propose that we converge? Are we to be amazed at the brute twitchings of the human amygdala, driven by its primal urges and perpetually lurking tigers? Or at the frontal cortex that lifted you clear of the swamp of all those base appetites, now capable at last of getting at higher truths without deception? If neuroscience is learning more about the effects of aggression on the brain, can we relay this knowledge to the twitter trolls, the market grifters, and all those venal politicians?

Could medical research be more adventurous?

An article last month on the Nautilus website posed an interesting question: “Why is Biomedical Research So Conservative?” Possible answers were summed up in the sub-headline: “Funding, incentives, and scepticism of theory make some scientists play it safe.” This is not to say that there is insufficient imagination going into applying advances in machine learning and data management to improving health outcomes. In fact, another article came out at about the same time on the Fast Company website, discussing “How Artificial Intelligence is Bringing us Smarter Medicine”. It described a host of impressive advances under five headings: drug creation, diagnosing disease, medication management, health assistance, and precision medicine.

People could say: ah yes, that’s machine smarts at the applied rather than the theoretical end of medical research. But that is less true in the first and last of these categories: supercomputers are very much engaged in the analysis of molecular structures from which it is hoped new therapies will emerge, and in the vast data sets being created within the science of genomics as we move into a new era of precision, personalised medicine.

Nevertheless, it does seem that pure research in biology – as distinct from physics – has been playing it comparatively safe, and the Nautilus article provides the evidence for its ruminations. Natural language processing analysis of no fewer than two million research papers, and 300,000 patents arising from work with 30,000 molecules, showed a remarkable bias towards conditions common in the developed world and with an emphasis on areas where the research roads are already well travelled – predominantly in cancer and heart disease.
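For a sense of how such an analysis works in miniature, here is a hedged toy sketch – my own illustration, assuming nothing about the Nautilus study’s actual pipeline – of a crude keyword count over a few fabricated abstracts; scaled to millions of papers, counts like these are what surface the skew the article describes:

```python
from collections import Counter
import re

# Illustrative topic lexicon; any real study's taxonomy would be far richer.
TOPICS = {
    "cancer": {"cancer", "tumour", "oncology", "carcinoma"},
    "heart disease": {"cardiac", "cardiovascular", "coronary", "heart"},
    "malaria": {"malaria", "plasmodium"},
    "tuberculosis": {"tuberculosis", "mycobacterium"},
}

def topic_counts(abstracts):
    """Count how many abstracts mention each topic at least once."""
    counts = Counter()
    for text in abstracts:
        words = set(re.findall(r"[a-z]+", text.lower()))
        for topic, keywords in TOPICS.items():
            if words & keywords:
                counts[topic] += 1
    return counts

# Tiny fabricated corpus, purely to exercise the code.
sample = [
    "A novel coronary stent reduces cardiac events.",
    "Tumour suppressor genes in carcinoma progression.",
    "Checkpoint inhibitors in oncology: a review.",
    "Drug resistance in Plasmodium falciparum malaria.",
]

for topic, n in topic_counts(sample).most_common():
    print(f"{topic}: {n} abstract(s)")
```

Even this four-line corpus leans towards cancer; the point is how little machinery is needed before the imbalance shows.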

My own communications work in the dementia environment – specifically Alzheimer’s disease – suggests that another reason may be in play. Where medical research has been less conservative, more adventurous, and broadly more successful, there has been more collaboration and shared excitement around a commonly perceived mission. The more open-source, zealous, and entrepreneurial ecosystem that has applied for decades in heart and cancer research – and we see this now on steroids in the field of artificial intelligence – has yet to capture the imagination of the wider biomedical community, where the approach remains generally more careful, more academic and inward-looking: just, more conservative.

Religion is impeding our cognitive development

An article in today’s Guardian wonders if, with the accession of the UK’s new Prime Minister, Theresa May, the Conservative Party might be “doing God” again. The writer ponders on the sort of God this may be, suggesting that recent shifts in government policy may reflect an evolution in the culture beyond getting fussed about people’s sexual behaviour, focusing more on issues of social justice. The article does not comment on the possibility that some, or possibly all, of these issues might articulate an effectively moral direction without any assistance from scripture.

Humanity is maturing, leaving the Bible behind with its atavistic obsession with controlling promiscuity (“Is God a silverback?”, indeed). The quiet determinations of science continue to reveal wonders in creation beyond the imaginings of our comparatively ignorant ancestors of two millennia ago, although there is no shortage of efforts to reverse engineer those imaginings for the amazement of the gullible. Witness the consternation of America’s Bill Nye (the “Science Guy”) when he recently visited a recreation of Noah’s Ark in Kentucky. It seems that the price of progress is still vigilance.

Back in the evolving world, we are about to witness a quantum enhancement in human intelligence that may exceed in its impact what the evolution of vision appears to have accomplished in the Cambrian explosion of 545 million years ago, according to this Financial Times feature on a stunning new exhibition at London’s Natural History Museum.

It is hard to see what formally organised religion might contribute to all of this going forward. It will not be enough to maintain a charade that a focus on good works, social justice and community cohesion is sufficient when any of those activities could as easily be pursued for their own sakes. What is more troubling is the potentially retardant effect of embracing the cognitive dissonance that comes with cherry-picking what is estimable from holy texts while accommodating in the darker recesses of our minds the egregious bits of a belief system that, to put it mildly, has outlasted its credibility.

What sort of brain do we wish to bequeath to our generations to come? If there is to be a new Cambrian-style explosion in what the human brain can do, it will not be aided by clinging on to the intellectually untenable while denying the means by which we may grasp new ways of knowing, and thinking, and becoming.

Delusions of immortality

Although the Daily Mail is not a hotbed of deep thinking on the biology of ageing and the potential of human immortality, the fact that it has produced a long feature on the digital uploading of brains is indicative of how the topics of human and artificial intelligence are becoming mainstream matters of interest. Typically, the article uses the terms “brain” and “mind” pretty much interchangeably. There is no consideration of how any one person’s mind might be anything more than the animation of thoughts and feelings within the three pounds of blancmange that resides within our skulls.

Among the unchallenged and carelessly crafted assumptions driving this piece, the biggest is the absence of any reflection on the question of identity in considering the possibility of the immortality of the mind. In short, in what sense does Bill Bloke remain Bill Bloke when he is uploaded to the computer, or reconstituted through stem cell interventions on brains maintained on life support, or reanimated after some sort of cryonic limbo?

From all of these rapidly evolving technologies it is clear that something is going to emerge that is distinct from anything our world has to show now, even if marketing and wishful thinking will ensure that the early stages of this febrile new world are a grab bag of simulations, avatars, holograms, and downright hoaxes. But however many iterations of the real deal we evolve to throw into that grab bag, nowhere in its darkness will we find good old Bill himself. Why is that?

One of our most significant cognitive biases is the apparently all-encompassing reality of the here and now: the brain perceives the world as a ticking and wholly immersive real thing from which our brains and minds are separate phenomena. Except that they aren’t. Whatever the chances of the existence of parallel universes, the fact is that we are 7 billion people spinning along in this one, with a definite sense of July 2016ishness about this world we think we know. Geographic relocation or a short time in a coma can convey a sense of immense disorientation, but the times and places and people that collectively define us and shape our minds would all be absent from the new reality of the reconstituted Bill: awakened to a new place, a new time, and a new world of eternal bemusement.

Fearing not fear, but its exploitation

How has politics affected humanity’s power to conquer fear? On the centenary day of arguably the greatest failure of political will and imagination that British politics has ever known, we can speculate on lessons learned (if any) and project another 100 years into the future and ask if a breakthrough in SuperIntelligence is going to make a difference to the way in which humanity resolves its conflicts.

We look back to 1 July 1916, a day on which the British Army suffered 20,000 dead at the start of the Battle of the Somme, and wonder if SuperAI might have made a difference. What if it had been available . . . and deployed in the benefit/risk analysis of deciding yea or nay to send the flower of a generation into the teeth of the machine guns?

Would there have been articles in the newspapers in the years preceding the “war to end all wars”, speculating on the possible dangers of letting the SAI genie out of its bottle? Would these features have been fundamentally different from the articles that have appeared in the last few days: this one in the Huffington Post speculating on AI and Geopolitics, or this one on the TechRepublic Innovation website, canvassing the views of several AI experts on the Microsoft CEO’s recent proposal of “10 new rules for Artificial Intelligence”? Implicit in all these speculations is that we must be careful not to let loose the monster that might dislodge us from the Eden we have made of our little planet.

Another recent piece, “The Neuropsychological Impact of Fear-Based Politics” in Psychology Today, references the distinct cognitive systems that inspired Daniel Kahneman’s famous book, “Thinking, Fast and Slow”. Humans really are “in two minds”: the one driven by instinct, fear and snap judgements; the other slower, more deliberative, and a more recent development in our cognitive history.

A behaviour as prevalent among the political classes as it is absent from the deliberative outputs of “intelligent” machines is the deceptive pandering to fear in the face of wiser counsel from people who actually know what they are talking about. The rewards for fear and ignorance were dreadful 100 years ago, and a happier future will depend as much upon our ability to tame our own base natures as on the admitted necessity of instilling ethics and empathy in SAI.