Tag Archives: Consciousness

New Year offers promise for foxes and lunatics

As 2017 gears up for its short sprint to the inauguration of America’s next president, the mature media are recoiling at the prospect of the people whom the president-elect is gathering around him to help define the tone and agenda of his presidency. Whether we look at Energy, the Environment, or Education – and that’s just the letter E – the impression is not so much that American political culture will be driven by incompetents, as that the foxes and lunatics whose career missions have been to spread mayhem in specific areas have been put in charge of the very henhouses and asylums that the rest of us have been trying to protect from their rapacities.

A common theme in the media commentary is that this amounts to a war on science. It is certainly this, but it is more: it is a war on critical thinking, on expertise and, crucially, on empathy. What may prove most corrosive is the impact upon the key quality that separates human intelligence from its emerging machine correlate. It is empathy that emerged above so many other qualities when the cognitive explosion of some 60,000 years ago set the erstwhile Monkey Mind on its journey to the new and far more expansive cultural horizons of Homo sapiens.

Thinking in another person’s headspace is the cognitive equivalent of walking in another man’s shoes. It requires a theory of mind that allows for another creature’s possession of one, and an active consciousness that an evolving relationship with that other mind will require either conquest or collaboration. Academics can argue over the tactics and indeed over the practicality of “arguing for victory” or they can, in understanding the validity of empathy, agree with philosopher Daniel Dennett as he proposes some “rules for criticising with kindness”.

Amidst all the predictions for 2017 as to how human and artificial intelligence will evolve, we may hear more over the coming months about the relationship between intelligence (of any kind) and consciousness. To what extent will a Super Artificial Intelligence require that it be conscious?

And will we ever get to a point of looking back on 2016 and saying:

Astrology? Tribalism? Religion? Capitalism? Trump? Whatever were we thinking? Perhaps in empathising with those who carried us through the infancy of our species, we will allow that, at least for some of these curiosities, it was a stage we had to go through.

Ha ha bloody ha, AI is getting into scary

A feature on Motherboard (and available from quite a few sites on this particularly frightening day in the calendar) informs readers that “MIT is teaching AI to Scare Us”. Well that’s just great. Anyone insufficiently nervous anyway about the potential perils of AI itself, or not already rendered catatonic in anxiety over the conniptions of the American election, can go onto the specially confected Nightmare Machine website and consult a timeline that advances from the Celtic stirrings of Hallowe’en two millennia ago to this very year, in which AI-boosted “hell itself breathes out contagion to the world.”

The highlight – or murky darkfest – feature of the site is the interactive gallery of faces and places, concocted and continually refined by algorithms seeking to define the essence of scary. So much of what we sense about horror is rather like our sense of what it is that makes humour funny: it is less deduced from core principles than induced from whatever succeeds in eliciting the scream of terror or laughter. It cannot be a surprise, therefore, that the Nightmare Machine mission is proving perfect for machine learning to get its artificial fangs into. Website visitors rank the images for scariness and, the theory goes, the images get scarier.
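For readers curious about the mechanics of that loop, here is a minimal sketch, assuming a simple Elo-style rating nudged by pairwise “which is scarier?” votes; the names and update scheme below are illustrative assumptions, not the MIT project’s actual pipeline.

```python
# A minimal sketch of a crowd-driven "scariness" loop: visitors vote on pairs of
# images, ratings are nudged Elo-style, and the top-rated images seed the next
# round of generation. All names here are illustrative, not the Nightmare
# Machine's real API.
from dataclasses import dataclass


@dataclass
class Image:
    name: str
    scariness: float = 1000.0  # Elo-style rating; every image starts neutral


def record_vote(scarier: "Image", tamer: "Image", k: float = 32.0) -> None:
    """Update both ratings after a single "which is scarier?" vote."""
    expected = 1.0 / (1.0 + 10 ** ((tamer.scariness - scarier.scariness) / 400.0))
    scarier.scariness += k * (1.0 - expected)
    tamer.scariness -= k * (1.0 - expected)


def next_generation(population: list["Image"], keep: int = 10) -> list["Image"]:
    """Keep the highest-rated images as seeds for the next batch of nightmares."""
    return sorted(population, key=lambda im: im.scariness, reverse=True)[:keep]
```

On a scheme like this, “the images get scarier” simply because each round of generation starts from whatever the crowd has already rated most frightening.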

Another school of thought, reflected in articles like this piece in Salon on the creepy clown phenomenon, sees the fright not so much in what others find frightening as in what serves as a projection of our own internal terrors. The clowns and gargoyles that stalk the political landscape are to a large extent projections of ourselves: projections of our own deepest fears for the more empathetic among us, or simple avenging avatars for the morally bereft or culturally dispossessed.

When AI moves beyond its current picture recognition capabilities into deeper areas of understanding our own inner fears and darkest thoughts, the ultimate fright will no longer lie in some collection of pixels. It will seep from the knowing look you get from your android doppelganger — to all intents and purposes you to the very life — as your watching friend asks, “Which you is actually you?” Your friend doesn’t know, but you know, and of course it knows . . . and it knows that you know that it knows . . .

How does consciousness evaluate itself?

If “writing about music is like dancing about architecture”, perhaps the attempt to reflect conclusively on consciousness is like the old picture of Baron Munchausen trying to pull himself out of a swamp by his own pigtail. Despite the usual carpings in the commentary whenever any serious thinking is done online (gosh, if only the author had consulted with me first . . .), an article in Aeon Magazine by cognitive robotics professor Murray Shanahan of London’s Imperial College makes some important distinctions between human consciousness and what he terms “conscious exotica”. The key question he poses is summed up in the sub-headline: “From algorithms to aliens, could humans ever understand minds that are radically unlike our own?”

It’s a great question, even without wondering how much more difficult such an understanding must be when it eludes most of us even in understanding minds very much like our own. Shanahan sets out from a working definition of intelligence as what it is that “measures an agent’s general ability to achieve goals in a wide range of environments”, from which we can infer a definition of consciousness as what it is when the measuring agent is the agent herself.
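That quoted definition closely echoes the one formalised by Legg and Hutter. As a hedged sketch (the formula itself does not appear in the Aeon article), the formal version scores an agent, or policy, π by its expected performance across all computable environments, weighted so that simpler environments count for more:

$$\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V_{\mu}^{\pi}$$

where E is the set of computable reward-bearing environments, K(μ) is the Kolmogorov complexity of environment μ, and V is the expected cumulative reward the agent π achieves in μ.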

From there, Shanahan works up a spectrum of consciousness ranging from awareness through self-awareness, to empathy for other people and on to integrated cognition, wondering along the way if the displayed symptoms of consciousness might disguise distinctions in the internal experience of consciousness between biological and non-biological beings. The jury will remain out on the latter until Super AI is upon us, but reflections on the evolution of biological consciousness prompt another thought on the process of evolution itself.

There is nothing absolute about human consciousness. We are where we are now: our ancient ancestors might have gawped uncomprehendingly at the messages in White Rabbits, Lucy in the Sky with Diamonds and the rest of them, but the doors that were opened by the 60s counterculture were less about means than about ends. Enhanced consciousness was shown to be possible if not downright mind-blowing. We in our time can only gawp in wondrous anticipation of what future consciousness may tell us about all manner of things, including even and possibly especially dances about architecture.

“Teach me half the gladness / That thy brain must know, / Such harmonious madness / From my lips would flow / The world should listen then, as I am listening now.” — Shelley’s To a Skylark

Algorithms cannot really know you if you don’t

Count the number of times you notice stories about how the various engines of AI – the algorithms, the machine learning software burrowing ever deeper into the foibles of human behaviour – are getting “to know you better than you know yourself.” What started as variations on “You liked pepperoni pizza, here’s some more” evolved quickly into “People into pizza also love our chocolate ice cream” and on to special deals for discerning consumers of jazz, discounted National Trust membership, and deals on car insurance.

Emboldened as the human animal always is by early success, it was bound to be a small leap for the algo authors to progress beyond bemusing purchasing correlations to more immodest claims. Hence the boast of the data scientist cited in the online journal Real Life, claiming that the consumer is now being captured “as a location in taste space”.
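As a rough illustration of what “a location in taste space” tends to mean in practice, here is a minimal sketch assuming the common embedding approach: users and items are represented as vectors in the same space, and recommendations are simply the items closest to the user. The item names and coordinates are invented for illustration.

```python
# A minimal sketch of "the consumer as a location in taste space": users and
# items share one vector space and recommendations are nearest neighbours by
# cosine similarity. The items and coordinates below are made up.
import numpy as np

ITEMS = {
    "pepperoni pizza": np.array([0.9, 0.1, 0.0]),
    "chocolate ice cream": np.array([0.7, 0.3, 0.1]),
    "jazz box set": np.array([0.1, 0.9, 0.2]),
    "National Trust membership": np.array([0.0, 0.4, 0.9]),
}


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


def recommend(user_location: np.ndarray, top_n: int = 2) -> list[str]:
    """Rank items by how close they sit to the user's location in taste space."""
    ranked = sorted(ITEMS, key=lambda name: cosine(user_location, ITEMS[name]), reverse=True)
    return ranked[:top_n]


# A shopper whose history leans heavily towards pizza-like purchases:
print(recommend(np.array([0.8, 0.2, 0.0])))
```

On this picture, the graphic-novel problem described below is just a single purchase dragging the user’s point to somewhere it does not belong.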

Advocates for algorithms will smile with the rest of us at the anecdote about how the writer’s single imprudent purchase of a graphic novel inspired Amazon to morph from the whimsy of a one-night stand into the sweating nightmarish stalker from hell; of course, they will claim that algorithms will only get better at inferring desires from behaviours, however seemingly complex.

The writer makes a very good case for doubting this, however, going into some detail on how the various dark promptings of his delight in Soviet cinema of the 1970s are unlikely to excite an algorithmic odyssey to the comedic treatments of pathological sadness in the original Bob Newhart Show.

And yet: the witty sadsack being as likely to emerge in Manhattan as in Moscow, it is not inconceivable that algorithms might evolve to a point of sifting through the frilly flotsams and whimsical whatevers of daily life in multiple dimensions of time and space, to home in on the essentially miserable git who is Everyman. But this is to assume some consistency of purpose to miserable gitness (indeed any manifestations of human behaviour), reckoning that there is no git so miserable that he ceases to know his own mind. And here, Aeon Magazine weighs in to good effect.

There are so many layerings of self and sub-self that implicit bias runs amok even when we run our self-perceptions through the interpretive filters of constantly morphing wishful thinking. So know yourself? It may be that only The Shadow Knows.

Not being a number does not make you a free man

Having listened last week to futurist Yuval Noah Harari talking at London’s Royal Society of Arts about his new book, Homo Deus, I am wondering how a conversation might go between Harari and The Prisoner. Next year marks 50 years since the iconic television series first broadcast what has become one of the catchcries of science fiction: “I am not a number: I am a free man!” Five decades on, the Guardian review of Harari’s book is sub-titled “How data will destroy human freedom”.

A fundamental difference between Harari’s hugely successful Sapiens and his new book is that the former involves reflections on how humanity has made it this far, whereas the new title is a speculation on the future. The former is rooted in memory; the latter involves conjectures that shift on the sands of uncertain definitions, as the above-linked Guardian review of Harari’s latest book reveals. “Now just hold on there” moments abound, as for example:

“Evolutionary science teaches us that, in one sense, we are nothing but data-processing machines: we too are algorithms. By manipulating the data we can exercise mastery over our fate.”

Without having the peculiarities of that “one sense” explained, it is hard to absorb the meaning of words like “nothing”, “manipulating” and “mastery”. Words matter, of course, and there are perils attendant upon concluding too much about human identity through the links that are implicit in lazily assumed definitions.

What happens to the god-fearing woman when she discovers there is no God? Is the workingman bereft when there is no longer any work? If people refuse to accept the imprisonment of numbers assigned to them by other people, are they thus necessarily free? How much is freedom determined not by actions, but by thoughts? And critically: if our thinking is clearer and more careful, can we be more free?

In the Q&A that concluded the RSA event, Harari missed an opportunity when he was asked about the future prospects of education. What will we teach children in the data-driven future of Super Artificial Intelligence? Interestingly, neither maths nor sciences got a mention, and it seemed we might just have to see when the future arrives. But it must be true that a far higher standard in the teaching of reasoning and critical thinking will be essential unless humanity is prepared to contemplate an eternal bedlam of making daisy chains and listening to Donovan.

Consciousness is not as hard as consensus

Any review of the current writings on consciousness turns up the idea, sooner or later, that it is no longer the hard problem that philosopher David Chalmers labelled it two decades ago. There has been a phenomenal amount of work done on it since, much of it by people who seem pretty clear, in fact, on what it means to them. What is clearly much harder is getting them to agree with one another.

One of the greater controversies of this year was occasioned by psychologist Robert Epstein, declaring in an article in Aeon Magazine entitled The Empty Brain that brains do not in fact “process” information, and that the metaphor of the brain as a computer is bunk. He starts with an interesting premise, reviewing how throughout history human intelligence has been described in terms of the prevailing belief system or technology of the day.

He begins with God’s alleged infusion of spirit into human clay and progresses through the “humours” implicit in hydraulic engineering and the mechanisms of early clocks and gears to the revolutions in chemistry and electricity that finally gave way to the computer age. And now we talk of “downloading” brains? Epstein isn’t having it.

Within days of his article’s appearance came a ferocious blowback, exemplified by the self-confessed “rage” of software developer and neurobiologist Sergio Graziosi’s response, tellingly entitled “Robert Epstein’s Empty Essay”. Graziosi argues that the earlier failures of worldly metaphors do not entail that computer metaphors are similarly wrong-headed, and he provides a detailed and comprehensive review of the very real similarities, his anger only sometimes edging through. He also provides some useful links to other responses to Epstein’s piece, for people interested in sharing his rage.

For all this, the inherent weakness in metaphor remains: the likenesses illuminate, but the dissimilarities obscure. The brain’s representations of the external world are not designed for accuracy, but are evolved for their hosts’ survival. Making this very point, in a more measured contribution to the debate, is neuroscientist Michael Graziano writing in The Atlantic. His article, “A New Theory Explains How Consciousness Evolved”, tackles the not-so-hard problem from the perspective of evolutionary biology.

He describes how his Attention Schema Theory would be evolution’s answer to the surfeit of information flowing into the brain – too much to be, he says, “processed”. Perhaps, in the context of the Epstein/Graziosi dispute, he might better have said “assimilated”. Otherwise, full marks and thanks to Graziano.

Thinking about thinking, and multi-coloured hats

A sudden immersion in the world of end-of-term report cards brought me face to face last week with a note of my grand-daughter’s ability to “empathise with characters using red hat thinking”. Ignoring my own pedantic grimace at the syntactical implication that the application of “red hat thinking” might be a curriculum objective, I passed quickly through the bemusement that the thoughts of Edward de Bono have so passed into the vernacular of North London primary schools that they are referenced in lower case, and moved on to engage in a little blue-sky thinking of my own. How would a Super Artificial Intelligence (SAI) look wearing de Bono’s Six Thinking Hats?

A flick through the colour descriptions suggests that the thinking being considered has little to do with neurological processes or activities of mind, and is employed more in the colloquial sense of applying “new ways of thinking” that are lateral, or outside the box, or indeed even blue-sky. They betoken attitudes and at best establish distinctions that may be useful in achieving a result, getting the sale, appreciating an alternative point of view, or changing a mind.

So, in summary, we focus on the facts (white), think positive (yellow), pick holes (black), empathise (red), blue-sky (green?) and then consolidate the process (blue). While the metaphor gets a bit unwieldy towards the end – perhaps the blue sky should really be blue, and the process of fertilising and growing the end result should be green – it still leaves the question to play with: what would SAI do with these hats? After all, is it reasonable to suppose that if human intelligence evolved through a consciousness that manifested these attitudes, a machine intelligence might evolve in a similar way? And if it did, how would it get on with all this headgear?

Bearing in mind the extent to which AI starts from a programmed foundation for which no hats are required, in any sense, and evolves into SAI through an emerging capacity to enhance its own potency, it’s hard to see how any of these hats will matter except insofar as a programmed requirement to get along with humans is retained. The binary distinction of white and black would probably keep those hats in play. But in the link above, we note that the black hat (Judgement) is described as “probably the most powerful and useful . . . but a problem if overused.”

Emotion or reason: amygdala or frontal cortex?

Nobody reading the news can fail to notice the stresses being imposed upon rational thinking. On the day that the American Republican Convention concludes, a torrent of articles spews forth from the assembled journalists, among which this piece in Salon is one of the better ones. All are variations pretty much on a single theme: if there is a collective modern mind, are we losing it?

The Salon piece references a second article, described as “truly terrifying”: an electric and highly quotable stream of articulate acuity by British journalist Laurie Penny in The Guardian. In reporting another journalist breaking down in tears and exclaiming “ . . . there’s so much hate . . . What is happening to this country,” Penny offers her diagnosis, and one of her better lines among the many good ones: what we have is the natural result when “weaponised insincerity is applied to structured ignorance”.

Penny’s context is the brittle cynicism of the heartless Twittersphere, and the manipulation of the fearful, angry and dispossessed by people who must know better but don’t care. Her immediate setting is the highly charged atmosphere of an American convention bear pit, never the most salubrious reflection of humanity in its cognitive finery. But she could as well have referenced the bluff demagogueries of politicians the world over, all contemptibly cashing in on terror, ignorance and want in the service of their grubby whimsies and self-imagined entitlements.

We must be wary of assuming too much, too far, and too soon about a future of Superintelligence or, more ludicrously, the dawning of a new age of convergent intelligence where the power of the human brain is augmented by so-called Artificial General Intelligence. Given the power of reason rendered truly Super by a necessarily reflective consciousness, we might expect any AGI worth its salt to ask of us:

With which human intelligence do you propose that we converge? Are we to be amazed at the brute twitchings of the human amygdala, driven by its primal urges and perpetually lurking tigers? Or is it the frontal cortex that lifted you clear of the swamp of all those base appetites, now capable at last of getting at higher truths without deception? If neuroscience is understanding more about the reflective effects of aggression on the brain, can we relay this knowledge to the Twitter trolls, the market grifters, and all those venal politicians?

Delusions of immortality

Although the Daily Mail is not a hotbed of deep thinking on the biology of ageing and the potential of human immortality, the fact that it has produced a long feature on the digital uploading of brains is indicative of how the topics of human and artificial intelligence are becoming mainstream matters of interest. Typically, the article uses the terms “brain” and “mind” pretty much interchangeably. There is no consideration of how any one person’s mind might be anything more than the animation of thoughts and feelings within the three pounds of blancmange that reside within our skulls.

Among the unchallenged and carelessly crafted assumptions driving this piece, the biggest gap is the absence of any reflection on the question of identity when considering the possibility of the immortality of the mind. In short, in what sense does Bill Bloke remain Bill Bloke when he is uploaded to the computer, or reconstituted through stem cell interventions on brains maintained on life support, or reanimated after some sort of cryonic limbo?

From all of these rapidly evolving technologies it is clear that something is going to emerge that is distinct from anything our world has to show now, even if marketing and wishful thinking will ensure that the early stages of this febrile new world are a grab bag of simulations, avatars, holograms, and downright hoaxes. But however many iterations of the real deal we evolve to throw into that grab bag, nowhere in its darkness will we find good old Bill himself. Why is that?

One of our most significant cognitive biases is the apparently all-encompassing reality of the here and now: our brains perceive the world as a ticking and wholly immersive real thing from which our brains and minds are separate phenomena. Except that they aren’t. Whatever the chances of the existence of parallel universes, the fact is that we are 7 billion people spinning along in this one, with a definite sense of July 2016ishness about this world we think we know. Geographic relocation or a short time in a coma can convey a sense of immense disorientation, but the times and places and people that collectively define us and shape our minds would all be absent from the new reality of the reconstituted Bill: awakened to a new place, a new time, and a new world of eternal bemusement.

Consciousness: scanning & thinking get us closer

There are lots of smart philosophers around and they don’t agree on a definition of consciousness. Daniel Dennett thinks it is an illusion: “our brains are actively fooling us”. It is nothing like an illusion for David Chalmers, who sees it as being so impenetrable to our understanding as to be THE Hard Problem of humanity – although not so hard as to be beyond joking about it: he has also characterised it as “that annoying time between naps”. More recently, in The New York Times, Galen Strawson was similarly light-hearted, invoking Louis Armstrong’s response to a request for a definition of jazz: “If you gotta ask, you’ll never know.” More seriously, he offers the thought that we know what it is because “the having is the knowing”. While begging the question of whether this is equally true of cats and people, the musings of the Dennetts, Chalmers and Strawsons of the world make it clear that anyone who thinks that philosophy is dead is certainly insensible, if not downright nuts.

Perhaps what makes the problem hard is the attempt to define it from within it. To borrow from Strawson, can we understand consciousness better through examining the border between having it and not having it? What is the catalyst that tips it from not being into being? Research out of Yale and the University of Copenhagen may have nudged us closer to an answer. Using PET scanners to measure the metabolising of glucose in the brains of comatose patients, researchers were able to predict with 94% accuracy which ones would recover consciousness. It appears that the “level of cerebral energy necessary for a person to return to consciousness (is) 42%.” Bemused readers of The Hitchhiker’s Guide to the Galaxy will grasp the significance of the number 42 as the “Answer to the Ultimate Question of Life, the Universe, and Everything.”
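Taken at face value, the reported finding amounts to a one-parameter classifier. Here is a minimal sketch, assuming the metabolic measure is expressed as a percentage of normal cerebral energy turnover; the 42% threshold and 94% accuracy come from the reporting above, while the patient values are invented.

```python
# A minimal sketch of the reported result as a single-threshold predictor:
# patients whose PET-derived glucose metabolism sits at or above 42% of the
# normal cerebral rate are predicted to recover consciousness. Patient data
# below are invented for illustration.
RECOVERY_THRESHOLD = 42.0  # percent of normal cerebral glucose metabolism


def predicts_recovery(metabolic_rate_pct: float) -> bool:
    """Predict recovery of consciousness from a PET-derived metabolic index."""
    return metabolic_rate_pct >= RECOVERY_THRESHOLD


patients = {"A": 55.0, "B": 38.0, "C": 43.5}
for patient, rate in patients.items():
    outlook = "likely to recover" if predicts_recovery(rate) else "unlikely to recover"
    print(f"Patient {patient}: {rate:.1f}% of normal metabolism -> {outlook}")
```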