Tag Archives: Dreams

Ha ha bloody ha, AI is getting into scary

A feature on Motherboard (and available from quite a few sites on this particularly frightening day in the calendar) informs readers that “MIT is teaching AI to Scare Us”. Well that’s just great. Anyone insufficiently nervous about the potential perils of AI itself, or not already rendered catatonic with anxiety over the conniptions of the American election, can go onto the specially confected Nightmare Machine website and consult a timeline that advances from the Celtic stirrings of Hallowe’en two millennia ago to this very year, in which AI-boosted “hell itself breathes out contagion to the world.”

The highlight – or murky darkfest – of the site is the interactive gallery of faces and places, concocted and continually refined by algorithms seeking to define the essence of scary. Much of what we sense about horror is rather like our sense of what makes humour funny: it is less deduced from first principles than induced from whatever succeeds in eliciting the scream of terror or the burst of laughter. It can be no surprise, therefore, that the Nightmare Machine mission is proving perfect for machine learning to get its artificial fangs into. Website visitors rank the images for scariness and, the theory goes, the images get scarier.
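That ranking loop is easy to picture in code. The sketch below is purely hypothetical: pairwise “which is scarier?” votes feed an Elo-style rating, and the top-rated images seed the next batch. It illustrates the general idea only, not the Nightmare Machine’s actual pipeline; the image names, hidden scores and vote counts are all invented.

```python
import random

def ask_visitor(a, b, true_scariness):
    """Stand-in for a real visitor vote: favour the image that is 'truly' scarier."""
    return a if true_scariness[a] >= true_scariness[b] else b

def update_elo(winner, loser, scores, k=32):
    """Standard Elo update: shift both ratings toward the observed outcome."""
    expected = 1 / (1 + 10 ** ((scores[loser] - scores[winner]) / 400))
    scores[winner] += k * (1 - expected)
    scores[loser] -= k * (1 - expected)

# Toy setup: ten "images" with a hidden scariness value, all starting level.
images = [f"img_{i}" for i in range(10)]
hidden = {img: random.random() for img in images}
scores = {img: 1000.0 for img in images}

for _ in range(200):                      # two hundred simulated visitor votes
    a, b = random.sample(images, 2)
    winner = ask_visitor(a, b, hidden)
    loser = b if winner == a else a
    update_elo(winner, loser, scores)

# The top-rated half would seed whatever generates the next, scarier batch.
seeds = sorted(images, key=scores.get, reverse=True)[:5]
print(seeds)
```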

Another school of thought, reflected in articles like this piece in Salon on the creepy clown phenomenon, sees the fright not so much in what others find frightening as in what serves as a projection of our own internal terrors. The clowns and gargoyles that stalk the political landscape are to a large extent projections of ourselves: of our own deepest fears for the more empathetic among us, or as simple avenging avatars for the morally bereft or culturally dispossessed.

When AI moves beyond its current picture recognition capabilities into deeper areas of understanding our own inner fears and darkest thoughts, the ultimate fright will no longer lie in some collection of pixels. It will seep from the knowing look you get from your android doppelganger — to all intents and purposes you to the very life — as your watching friend asks, “Which you is actually you?” Your friend doesn’t know, but you know, and of course it knows . . . and it knows that you know that it knows . . .

How does consciousness evaluate itself?

If “writing about music is like dancing about architecture”, perhaps the attempt to reflect conclusively on consciousness is like the old picture of Baron Munchausen trying to pull himself out of a swamp by his own pigtail. Despite the usual carpings in the commentary whenever any serious thinking is done online (gosh, if only the author had consulted with me first . . .) an article in Aeon Magazine by cognitive robotics professor Murray Shanahan of London’s Imperial College makes some important distinctions between human consciousness and what he terms “conscious exotica”. The key question he poses is summed up in the sub-headline: “From algorithms to aliens, could humans ever understand minds that are radically unlike our own?”

It’s a great question, even before we wonder how much harder such an understanding must be when most of us struggle to understand minds very much like our own. Shanahan sets out from a working definition of intelligence as that which “measures an agent’s general ability to achieve goals in a wide range of environments”, from which we can infer a definition of consciousness as what it is when the measuring agent is the agent herself.
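That wording echoes the informal definition Shane Legg and Marcus Hutter attach to their “universal intelligence” measure, and it may help to see it written out. The formula below is offered as an illustrative gloss, not something quoted in the Aeon piece: it sums an agent’s expected performance over all computable environments, weighted towards the simpler ones.

\[
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
\]

Here \(\pi\) is the agent, \(E\) the set of computable environments, \(K(\mu)\) the Kolmogorov complexity of environment \(\mu\), and \(V_{\mu}^{\pi}\) the expected reward the agent achieves there. Turning that measurement back on the measuring agent herself, as Shanahan does, is where the question of consciousness begins.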

From there, Shanahan works up a spectrum of consciousness ranging from awareness through self-awareness, to empathy for other people and on to integrated cognition, wondering along the way if the displayed symptoms of consciousness might disguise distinctions in the internal experience of consciousness between biological and non-biological beings. The jury will remain out on the latter until Super AI is upon us, but reflections on the evolution of biological consciousness prompt another thought on the process of evolution itself.

There is nothing absolute about human consciousness. We are where we are now: our ancient ancestors might have gawped uncomprehendingly at the messages in White Rabbit, Lucy in the Sky with Diamonds and the rest of them, but the doors that were opened by the 60s counterculture were less about means than about ends. Enhanced consciousness was shown to be possible if not downright mind-blowing. We in our time can only gawp in wondrous anticipation of what future consciousness may tell us about all manner of things, including even and possibly especially dances about architecture.

“Teach me half the gladness / That thy brain must know, / Such harmonious madness / From my lips would flow / The world should listen then, as I am listening now.” — Shelley’s To a Skylark

Collaboration is the new competitive advantage

For years if not decades, the buzzword in the worlds of innovation and enterprise, echoing through the lecture theatres of business schools and splattered across whiteboards in investment funds and marketing companies across the globe, has been disruption. Given the exponential growth in computing power, artificial intelligence and deep learning, it may well be that this awful word “disruption” will itself soon become a has-been. Of course disruption will still happen, but as a by-product of unprecedented advances rather than as the reverently regarded target of some small-minded zero-sum game.

Consider an article that appeared a few days ago on the Forbes website, asking rhetorically if the world-shifting changes that humanity requires, and to a large extent can reasonably expect to see, are likely to be achieved with a mind-set that is calibrated to “disrupting” existing markets. For each of the described challenges – climate change, energy storage, chronic diseases like cancer, and the vast horizons of potential in genomics, biosciences, and immunotherapies – collaboration rather than competition is emerging as the default setting for humanity’s operating system.

Mental models based upon cooperative networks will begin replacing the competitive hierarchies that only just managed to keep the wheels on the capitalist engine as it clattered at quickening speeds through our industrial age, lurching from financial crisis to military conflict and back again, enshrining inequalities and degrading Earth’s ecosystem as it went. And why will things change?

Of course it would be nice to think that our species is at last articulating a more sustaining moral sense, but it won’t be that. It will simply be that the explosion of data, the insatiable demands upon our attention, and the febrile anxieties of dealing with the bright white noise of modern life will render our individual primate brains incapable of disrupting anything to much useful effect.

The Forbes article concludes with admiration for what the digital age has been able to achieve, at least for as long as the efficacy of Moore’s Law endured. Moore’s Law is soon to be superseded, however, by the emerging powers of quantum and neuromorphic computing, with a consequent explosion of processing efficiency that will take our collective capabilities for learning and for thinking far beyond the imaginings of our ancient grassland ancestors.

Working together we will dream bigger, and disrupt far less than we create.

Consciousness is not as hard as consensus

Any review of the current writings on consciousness turns up the idea, sooner or later, that it is no longer the hard problem that philosopher David Chalmers labelled it two decades ago. There has been a phenomenal amount of work done on it since, much of it by people who seem pretty clear, in fact, on what it means to them. What is clearly much harder is getting them to agree with one another.

One of the greater controversies of this year was occasioned by psychologist Robert Epstein, who declared in an Aeon magazine article entitled “The Empty Brain” that brains do not in fact “process” information and that the metaphor of brain as computer is bunk. He starts from an interesting premise, reviewing how, throughout history, human intelligence has been described in terms of the prevailing belief system or technology of the day.

His survey runs from God’s alleged infusion of spirit into human clay, through the “humours” implicit in hydraulic engineering and the mechanisms of early clocks and gears, to the revolutions in chemistry and electricity that finally gave way to the computer age. And now we talk of “downloading” brains? Epstein isn’t having it.

Within days of his article’s appearance came a ferocious blowback, exemplified by the self-confessed “rage” of software developer and neurobiologist Sergio Graziosi’s response, tellingly entitled “Robert Epstein’s Empty Essay”. The earlier failures of worldly metaphors, Graziosi argues, do not mean that computer metaphors are similarly wrong-headed, and he provides a detailed and comprehensive review of the very real similarities, his anger only sometimes edging through. He also provides some useful links to other responses to Epstein’s piece, for people interested in sharing his rage.

For all this, the inherent weakness of metaphor remains: what is alike illuminates, but the dissimilarities obscure. The brain’s representations of the external world are not designed for accuracy; they have evolved for their hosts’ survival. Making this very point, in a more measured contribution to the debate, is neuroscientist Michael Graziano, writing in The Atlantic. His article, “A New Theory Explains How Consciousness Evolved”, tackles the not-so-hard problem from the perspective of evolutionary biology.

He describes how his Attention Schema Theory would be evolution’s answer to the surfeit of information flowing into the brain – too much to be, he says, “processed”. Perhaps, in the context of the Epstein/Graziosi dispute, he might better have said “assimilated”. Otherwise, full marks and thanks to Graziano.

AI, lucid dreams and electric bananas

As we develop our ideas about brain science and machine learning, it is easy to see how the limitations of language blur talk of human and machine intelligence into talk of one and the same thing. That they are not the same is most obvious when we talk about dreams.

An article appeared on the Slate website last summer, asking: “Do Androids Dream of Electric Bananas?” More intriguing was the strapline: “Google’s DeepDream is dazzling, druggy, and creepy. It’s also the future of AI”. What almost inspired a BAMblog at the time was a quoted comment mistakenly describing DeepDream’s creative output as “kitsch” – an aesthetic judgement – when it should more properly have been described as derivative – a functional judgement. The resulting “So what?” being left unanswered, the blog went unwritten and the idea was put to sleep.
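For readers wondering what “derivative” means mechanically: DeepDream is, at bottom, gradient ascent on the input image, nudging the pixels to amplify whatever features a pretrained network already detects in them, which is why the output can only ever elaborate on what the network was trained to see. The snippet below is a minimal sketch of that idea, not Google’s released code; the choice of network, layer index, step size and file names is arbitrary, and it assumes PyTorch and torchvision are installed.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# A pretrained convnet supplies the "dream" material; only its conv stack is needed.
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
LAYER = 20          # which layer's activations to amplify (illustrative choice)

img = T.Compose([T.Resize(224), T.ToTensor()])(Image.open("banana.jpg")).unsqueeze(0)
img.requires_grad_(True)

for _ in range(30):
    x = img
    for i, module in enumerate(model):   # forward only as far as the chosen layer
        x = module(x)
        if i == LAYER:
            break
    loss = x.norm()                      # boost the chosen layer's overall activation
    loss.backward()
    with torch.no_grad():
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)  # gradient ascent step
        img.grad.zero_()

T.ToPILImage()(img.detach().squeeze(0).clamp(0, 1)).save("dream.jpg")
```

The result is recognisably the input photograph overlaid with textures the network already knows, which is the sense in which “derivative” is the more useful description than “kitsch”.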

A small riot of stories over the past couple of weeks has revived the idea, however, while illustrating how advances in neuro-imaging are enhancing our understanding of the correlations between stimulus and response in the human brain. We learn of the semantic atlas at the University of California at Berkeley, mapping the parts of the brain that react to words and ideas, and we read about the Imperial College London study of the “brain on LSD”, with its intriguing sidebar on the enhancement of the experience when music is added to the mix.

Coming along as a companion to the Imperial story, a piece on Psychology Today reveals “How to have a mystical experience”, suggesting that the conscious release of analytic control to our emotionally-driven limbic promptings can induce a sense of consciousness without boundaries, cosmic unity, and transcendence . . . maa-an. Topping it all is no less eminent an authority than a BuzzFeed listicle with “15 lucid dreaming facts that will make you question reality”. Interesting but unlikely, that; although it serves to remind us that as creatures who can think about our thinking and be conscious that we are dreaming, we are doing things that AI cannot do. Yet.