There are lots of smart philosophers around and they don’t agree on a definition of consciousness. Daniel Dennett thinks it is an illusion: “our brains are actively fooling us”. It is nothing like an illusion for David Chalmers, who sees it as so impenetrable to our understanding as to be THE Hard Problem of humanity – although not so hard as to be beyond joking about: he has also characterised it as “that annoying time between naps”. More recently, in The New York Times, Galen Strawson was similarly light-hearted, invoking Louis Armstrong’s response to a request for a definition of jazz: “If you gotta ask, you’ll never know.” More seriously, he offers the thought that we know what it is because “the having is the knowing”. While leaving open the question of whether this is equally true of cats and people, the musings of the Dennetts, Chalmerses and Strawsons of the world make it clear that anyone who thinks that philosophy is dead is certainly insensible, if not downright nuts.
Perhaps what makes the problem hard is the attempt to define it from within it. To borrow from Strawson, can we understand consciousness better by examining the border between having it and not having it? What is the catalyst that tips it from not being into being? Research out of Yale and the University of Copenhagen may have nudged us closer to an answer. Using PET scanners to measure glucose metabolism in the brains of comatose patients, researchers were able to predict with 94% accuracy which ones would recover consciousness. It appears that the “level of cerebral energy necessary for a person to return to consciousness (is) 42%.” Amused readers of The Hitchhiker’s Guide to the Galaxy will grasp the significance of the number 42 as the “Answer to the Ultimate Question of Life, the Universe, and Everything.”
What would a politically correct artificial intelligence be like? Presumably the AI would need to be of the “Super” variety – SAI – to accommodate the myriad nuances of emotional intelligence implicit in PC behaviour. And if that behaviour had not been programmed into it, would it evolve naturally as a function of what intelligence is and does; or would it emerge out of the kind of emotional manipulation that has characterised the growth of PC among humanity? Maybe it would just back up early on and refuse to “respect” our feelings about its activities. It could reject the impertinence of a species that regards respect for feelings as critical to any definition of real intelligence, but then subverts that respect through cynical manipulations and an overarching need to be seen to be winning arguments rather than getting at the truth of something.
It is a hallmark of human foibles that we are forgiving of our multiple biases despite our general ignorance of what those biases are. Add into this mix our capacities for wishful thinking, catastrophising, and feelings-based reasoning, and we may well wonder whether we are indulging ourselves in building these foibles into our AIs to afford them equal opportunities for “emotional growth”. Perhaps, rather, we should acknowledge how these foibles limit the quest for truth and the optimal application of what we know to the challenges of life in the universe. We would then just say: nah, let’s leave out all the feely, human bits. What we want is SAI, pure and simple – Oscar Wilde notwithstanding.
In the tsunami of articles welling up on the subject of human intelligence, two that have most impressed in recent years appeared in The Atlantic: “Is Google Making Students Stupid?” (Summer 2014), and “The Coddling of the American Mind” (September 2015). The first examined our relationship with evolving technologies; the second struck at the heart of the internal relationship between our analytical and emotional selves. Both homed in on a truth that will be vital to humanity whatever AI should do, purposefully or by mindless accident. We will sell ourselves catastrophically short if, having developed an impressive toolkit for thinking critically about the world, we down those tools and risk letting the world go.
An interesting few weeks of Atheism in the News suggests that this is a good time to be taking its temperature, and seeing what the current state of things implies for the progress of human knowledge. And where are things? Three stories over the past seven days suggest that renowned atheist Christopher Hitchens may have had a change of mind on his deathbed; that comedian contrarians Bill Maher and Michael Moore are contemplating a documentary film, to be called “The Kings of Atheism”, featuring well-known comedians on a stand-up tour of the American Bible Belt; and that a $2.2 million donation will endow the USA’s first academic chair for the “study of atheism, humanism and secular ethics” at the University of Miami.
The first story is manipulative nonsense. Anyone who knew Hitchens or read him on religion will appreciate that the wit he exercised in his study of the science of belief will survive him for any realistic definition of eternity. He consolidated “what oft was thought, but ne’er so well expressed” and with such power as no tawdry revisionism can undermine. If we accept a central thesis of his thinking, that religious devotion inhibits the critical faculties of our species and puts at risk its cognitive evolution, there is a lot of work still to be done. That work cannot include indulging wishful thinking sustained by confirmation bias, or the spectacle of a man advertising himself as Hitchens’ friend making a sacrifice of that friendship on the altar of his greed.
The film idea? It must suggest some evolution of our species that within just a few centuries of heretics being flayed and burned for blasphemy, such a project might be announced on national television. But we might still think: good luck with that.
Most engaging is the question of the culture of the newly endowed chair. Will its terms of atheistic reference be reactive, obsessed with the old perception of atheists as humourless human husks with neither morals nor magic? Or will something emerge of the humanity that might have evolved if science had taken hold sooner, superseding religion with its hectoring certainty that, in Hitchens’ memorable words, “if you will abandon your critical faculties, a world of idiotic bliss can be yours”?
At the moment of “Singularity”, when the collective cognitive capability of AI matches, exceeds and then rapidly surpasses that of humanity itself, it may by then be too late to worry about what has been forecast as our surrender of the keys to the universe. Looking at what we have done as a species in seizing and exploiting control of the planet on the strength of our superior intelligence – and possibly fuelled by something of a guilty conscience – we can all too easily imagine what might occur if some other intelligent phenomenon were to emerge as not only our superior but, with an exponential capacity to expand and deepen its capabilities, able to leave us in its dust. What, we might imagine, would we do with such power?
A couple of stories popped up on the social and political commentary site Salon this week, painting a pretty stark picture of the havoc that can be played with the abuse of information and communications technology in the world as it is today, without worrying about what tomorrow’s machines might get up to. It appears that the people running today’s machines are wreaking chaos enough as it is. “Could Google results change an election?” asks one, playing on a fictional television treatment of the political manipulation of search engine results to steal an election. On the same day, another Salon piece by the excellent Patrick L Smith “pulls back the curtain on how (American) foreign policy is created – and sold to willing media dupes.”
Human psychology has long been a staple of marketing, behavioural, and political manipulation, and we are learning more and more about cognitive biases and their impact on our notions of free will and identity. Have a look at this article, and this one: their cumulative effect is that we must be wary of the idea of our brains as the rational director-generals of our waking selves. Then along comes AI, with its interest in creating algorithms that can subtly direct us in our searches for products, services, candidates and causes: we don’t have to await the Singularity before we start worrying about the downside of a hybrid human/machine intelligence.
There’s much to beguile in the story of the teenager in Canada who thinks he may have identified a lost Mayan city by correlating the locations of known ancient sites with the patterns of star constellations. A specific star for which there was no jungle marking as a correlate inspired him to approach the Canadian Space Agency for more detailed satellite images of a particularly remote part of the Mayan wilderness. And there it was: a collection of geometric structures discernible beneath the jungle canopy. Now he is keen for an archaeological expedition to test his theory that what he has spotted is a lost Mayan city, for which he has already conceived a name.
So far, so laudable, so inspiring: and so noticeable that the dozens of media reports of this story (the link above is one exception) present the details uncritically. Did the Mayans have the technology to notice in their night sky what we can see a thousand years later? Would they have thought of laying out their cities to match constellation patterns? Would the jungle topography have enabled it? Are there other explanations for the geometric patterns the boy discerned? And yet could it all still be true?
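The “other explanations” question can be probed with a little pattern-matching scepticism: given enough candidate points on the ground, almost any star pattern will find plausible-looking matches. A minimal sketch in Python – all coordinates and figures here are invented purely for illustration, not drawn from the actual Mayan data:

```python
import math
import random

def best_fit_error(stars, sites):
    """Scale the star pattern into the bounding box of the candidate sites,
    then return the mean distance from each star to its nearest site.
    Lower values mean a 'better-looking' match."""
    sx = [p[0] for p in stars]
    sy = [p[1] for p in stars]
    tx = [p[0] for p in sites]
    ty = [p[1] for p in sites]

    def rescale(v, lo, hi, new_lo, new_hi):
        return new_lo + (v - lo) / (hi - lo) * (new_hi - new_lo)

    scaled = [
        (rescale(x, min(sx), max(sx), min(tx), max(tx)),
         rescale(y, min(sy), max(sy), min(ty), max(ty)))
        for x, y in stars
    ]
    # Mean nearest-neighbour distance from scaled stars to candidate sites.
    return sum(min(math.dist(s, p) for p in sites) for s in scaled) / len(scaled)

# An invented five-star "constellation" and forty random "site" locations.
random.seed(1)
constellation = [(0.0, 0.0), (1.0, 0.2), (2.0, 0.1), (2.8, 0.9), (3.5, 1.8)]
sites = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(40)]

err = best_fit_error(constellation, sites)
# With enough candidate points scattered at random, every star tends to land
# near *something* - which is why a match alone proves very little.
```

The sketch makes the sceptic’s point: the denser the field of candidate sites, the easier it is for any constellation to appear “mapped” onto the ground, so a good fit is necessary but nowhere near sufficient evidence.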
Humanity’s “thinking” on cosmology has a history of weaving whole belief systems out of wonders and wishful thinking, so we can accept that the benefits of the scientific method come at the price of some snarky pushbacks when “real” astronomers were asked to comment on the activities of the Canadian boy wonder.
But the real thrill of the story lies not in presuming beyond simple pattern recognition into interpretations that may stretch credulity. At the same time as the Mayan story broke, thousands of youngsters were thrilling to the wonders of science at the Imperial Festival, intrigued by the disciplines and inspirations that have shaped the world of learning over the last few centuries. Any students reading of the Mayan story had a perfect learning opportunity: to set out and then test an exciting proposition, taking any wilder claims equally with the choirs of praise and the snarky commentary – all with a Big Dipper of salt. And then get back to their telescopes.
In a blog post with some pertinent links, author and “Occasional CEO” Eric Schultz wonders which will come first. Will the offspring of capitalism wedded to human nature lead to the extinction of our species, largely through the agency of climate change? Or will the exponential capabilities of Artificial Intelligence – ironically itself powered by the appetites of capitalism – see the conception and implementation of solutions to the challenges of climate change before we can incinerate ourselves? Put baldly, can advances in renewable energies, CO2-absorbing moss and desalination technologies counterbalance the excesses of consumerism, the profit motive and cognitive denial? Can AI make up for the deficits in human intelligence?
It makes for an amusing read, and it does no harm to its instructional impact that the blog turns deliberately on two key target years. In 2040, according to Nick Bostrom’s famous TED Talk of last year, we will hit a tipping point in the measurement of artificial against human intelligence, beyond which moment – and this is the thrust of Bostrom’s message – the capabilities of AI as against human brainpower vanish over the horizon. Within a few pulses of the eternal mind, our problems are all over, including climate change and possibly even the potential dangers of rampant AI.
Critically, the second target date in the Schultz blog is 2041, the year forecast as the focal point of the future speculations in The Collapse of Western Civilization, a work of “science-based fiction” examining a species slow-roasting itself into oblivion. Its authors are clearly not the optimists that Bostrom is, and that Schultz may be.
Common to blog post, book, and TED Talk is the idea of mitigation. Through complex interplays of nuance, increment, compromise and inspiration, these dates could move forward or backward in time. All the while, the crooked timber of humanity will muddle forward in denial about its responsibility for resolving the tragedy of the commons, pathetically hoping that a smart computer will solve its problem for it.
As we develop our ideas about brain science and machine learning, it is easy to see how the limitations of language blur talk of human and machine intelligence, as if they were the same thing. That they are not is most obvious when we talk about dreams.
An article appeared on the Slate website last summer, asking: “Do Androids Dream of Electric Bananas?” More intriguing was the strapline: “Google’s DeepDream is dazzling, druggy, and creepy. It’s also the future of AI”. What almost inspired a BAMblog at the time was a quoted comment mistakenly describing DeepDream’s creative output as “kitsch” – an aesthetic judgement – when it more properly should be described as derivative – a functional judgement. The resulting “So what?” being left unanswered, the blog went unwritten and the idea was put to sleep.
A small riot of stories over the past couple of weeks has revived the idea, however, while illustrating how advances in neuro-imaging are enhancing our understanding of the correlations between stimulus and response in the human brain. We learn of the semantic atlas at the University of California, Berkeley, mapping the parts of the brain that react to words and ideas, and we read about the study at Imperial College London of the “Brain on LSD”, with its intriguing sidebar on the enhancement of the experience when music is added to the mix.
Coming along as a companion to the Imperial story, a piece on Psychology Today reveals “How to have a mystical experience”, suggesting that the conscious release of analytic control to our emotionally-driven limbic promptings can induce a sense of consciousness without boundaries, cosmic unity, and transcendence . . . maa-an. Topping it all is no less eminent an authority than a BuzzFeed listicle with “15 lucid dreaming facts that will make you question reality”. Interesting but unlikely, that; although it serves to remind us that as creatures who can think about our thinking and be conscious that we are dreaming, we are doing things that AI cannot do. Yet.