Immortality and Man’s Seven Ages

When we tire of working on the basics of enhancing intelligence to practical ends, we always have little thought experiments to play with: riffs of wonderment on, say, a recipe for happiness that contains an ounce of eternal pure consciousness. An example can be found in the current issue of The Atlantic, whose title gives the game away as to its conclusion on the value of living forever. “Immortal but Damned to Hell on Earth” carries the chilling strapline: “The danger of uploading one’s consciousness to a computer without a suicide switch”.

Left unexamined is the impossibility of a “pure” consciousness, given that any claim to purity is sacrificed as soon as a consciousness actually becomes conscious of something. Not to mention that, as we discover repeatedly in literature, the vessel of that consciousness is itself constantly changing, taking on at any given moment a character as perceived by another consciousness that is conscious of it. In Shakespeare’s Seven Ages of Man, the subject of each stage is seen in a particular light by some other person: the infant by his nurse, the lover by his mistress, the soldier by his adversary, and so on.

What does this matter in the context of The Atlantic piece? In its musings on the implications for justice of uploading emergent evil entities to The Cloud, it extrapolates amusingly on humanity’s insatiable appetite for retribution. It wonders how creatively we might punish an uploaded consciousness of which multiple copies might be made, to facilitate multiple retributions. And how might this affect uploaded people who are pursued vindictively unto Doomsday, justly or not? Left unaddressed is this question: if some Bad Guy were uploaded only to be pursued eternally by his victims, in which of his Ages would his victims find him?

Clumsy metaphors obstruct clear understanding

Of all human phenomena that are taken for granted as fixed points at the street level of our understanding, it is hard to beat language. Look no further than the deeply ignorant and ahistorical view of The Bible as purportedly representing the “literal word of God” when two millennia of conflict, translations, version control issues and the cynical manipulation of human credulity have intervened between whatever happened on the road to Damascus and our world today.

In the time since St Paul, has our understanding of the universe moved on much? The urge to explain one phenomenon in the language of another encouraged people to see Newtonian physics in terms of the coolest technology of that time, comparing God to a watchmaker. From this metaphor emerged the teleological argument from design: a watch is a product of design and thus requires a designer; the universe is clearly more complicated than a watch and so requires a super-intelligent designer, a deity. Prompting the retort: why a watch and not a kangaroo? There is more of an organic link between a universe and a kangaroo than between a universe and a watch. The problem of course is that, on the Kangaroo model of creation, the universe would have emerged as the progeny of two other universes.

Our clumsy attempts to explain living tissue in the language of the coolest technologies of each succeeding era have seen us move from watches through motors and rockets to computers, and our challenge in understanding intelligence becomes over-complicated if we lean too heavily on terms like brain hardware and software, hard-wiring, and neural re-booting. There is no clarity in metaphorical blind alleys.

Forget the on/off switch and mind the pedals

Nick Bostrom’s recent TED talk posited several interesting thoughts, among which one of the most quoted has been his suggestion that, once computers get to be smarter than humans, there will be no off switch. Without getting into definitional hair-splitting as to which computers and what we mean by “smarter”, perhaps the most fundamental reply might be: has there ever been an off switch? Would the course of Newtonian physics have been different if someone had encouraged Isaac to put a lid on his musings about apples (as they would only open a can of worms)?

Even if today we should vote to flip the off switch, some decades ahead of The Singularity, would that succeed? Given humanity’s problematic record even with dimmer switches, maybe a more productive approach would lie in understanding and adapting humanity’s successes with Artificial Intelligence so far, and in distinguishing between AI failures driven by human stewardship (think Hollywood narratives, drone missiles, rogue drug formulations) and those inherent in AI itself ( . . . ?).

Building on the successes of AI, and given the other existential challenges on humanity’s table – climate change, enfeebled antibiotics, nuclear proliferation, renewable energy sources, aging populations, (phew, wait, there’s more) incoming comets, our mortality and the comparatively slow rate of cognitive evolution – can we realistically suppose that Human Intelligence on its own will survive?

There is a pretty compelling argument for ignoring the ignition switch and stepping on the accelerator, keeping an eye on the brake and reflecting on how best to use both pedals judiciously as we hit the curves in the road. While we retain control of the vehicle, we should bear in mind that humanity is not in any sense a singularity itself: the challenge will lie in keeping the boy racers away from the controls.

Managing the runway to Doomsday

A full-spectrum analysis of the science of doomsdayology might see humanity’s deliberations divide broadly into three categories. At one end, where ignorance and insanity fight it out for a place at the bottom of the table, the brimstone peddlers of Revelation and snake-oil salesmen of cults and corruption cry out, never calling the End of Days so soon that they cannot profit in some way, but never so distant as to stop infusing the credulous with the literal Fear of God.

At the other end, providing more than enough ballast for that intellectual vacuity, are the impressively serious academic institutes that will leave you marvelling at the cognitive compass of our species. These include but are not limited to the Centre for the Study of Existential Risk in Cambridge; the Future of Humanity Institute in Oxford; the geographically decentralised Global Catastrophic Risk Institute; and the Machine Intelligence Research Institute (MIRI) in California.

Midway, we have the crooked timber of humanity just goofing along, although perhaps inclining more to our first polarity when wondering if street lighting, polyphonic music or lightning rods might impede God’s will. More responsibly but without marked improvement in predicting the end of civilisation, we have the deliberators over the Industrial Revolution and the advent of the railway; the physicists who worried that nuclear explosions would ignite the atmosphere, or that the Hadron Collider would cleave the planet in two. Then there’s recombinant DNA, Y2K, genetically modified food, global warming and, bringing us up to date, killer robots. And have fun spotting the ones that really are existential threats.

Maybe the answer lies in recombinant AI&HI, on the basis that HI will only get so far with academic reflections, bereft as it is of the revelations of AI yet to come.

Bridging the gap between the trivial and the timeless

A thoroughly engaging article on the Transhumantech website poses ten questions of a sort not often seen in places where our smartest AI devices and search engines get interrogated. While AI has edged ahead of human subjects in distinguishing feline faces within a herd of cats, it still gets a free pass while we wrestle with questions like: Was the cosmos made for us, and should we take responsibility for it? Can we understand everything (or indeed anything)? And the Big Daddy question from the Department of Chickens and Eggs: does reality beget consciousness, or is it the other way around?

This cosmic soup of relativity and ambivalence succeeds where all such confections succeed: by posing more fascinating questions. If nothing can exist before being perceived by a perceiver, can intelligence itself exist? And is any existential state affected by being perceived by an intelligence that is “only” artificial? What might AI make of the question of whether the cosmos was designed to accommodate AI? And if real intelligence were simply a function of ramped-up computation, would AI’s answers to any existential questions be ten times as insightful when its computational strength had itself increased by a factor of ten?

Possession of the answers to these questions may depend upon a combination of computation and consciousness that is for now beyond either human or artificial intelligence. Humans may never have enough of the former; AI may never have the latter, might not need it, and will probably care less. Within that pulse in the eternal mind when humans can meaningfully catch AI’s attention as it rushes at becoming the universe, it may pause only to remind us of Douglas Adams’ puddle.

Plan for anything in war, get chaos

A key issue discussed at the April conference in Geneva on Lethal Autonomous Weapons Systems (LAWS – irony alert) was the definitional question of “autonomy”. Those in favour of maximum autonomy for AI-driven weapons systems are the same sort of people who have always been beguiled by the boasts of technology, talking of precision bombing and surgical strikes as if success achieved in clinical lab conditions could be replicated within the chaotic terror of a warzone.

Consider the insane absurdity of bringing the phenomena of war and Artificial Intelligence into the same sentence. After all, if we worry about the potential of robots that could kill, why would we be designing robots to do precisely that and then get all long-faced serious about whether this killing should be inhibited in the least by human oversight? It’s not even as if such oversight would guarantee much of anything, as it is humans that will have created that warzone in the first place and then put the robots into it. History offers some hope, however.

An interesting precedent in considering the benefits of a little human oversight was the moment in 1962 during the Cuban missile crisis when Vasily Arkhipov, a Soviet submariner, stood up to his colleagues and the demands of operational protocol and refused to authorise the launch of the nuclear torpedo that could well have provoked a nuclear exchange and possibly World War III. Had the Soviet B-59 submarine been operating autonomously as we all too complacently define the term now, it is easy to see how catastrophically different the intervening half century might have been.

Intelligence is more than the commoditisation of data

There’s a common presumption in much of the writing on AI that intelligence is primarily about computation and crunching data. In the words of the second link mentioned in Lively Stuff below, what “makes us smart” is the troika of activities identified as sensing, reasoning and communicating. And anything the machines cannot do is simply because computing power is insufficiently aggregated. For now.

Perhaps we debase ourselves and limit the terms of a potentially engaging debate if we default to equating intelligence with mere data and the power to crunch it. It is one thing to understand that by overlaying postcode data with a consumer’s smartphone GPS, we can discover the location of the nearest fast food outlet to relieve a craving for food. It is another thing entirely to understand the distinction between a search engine’s definition of the word “church”, for example, and the deeper, more humane and moving essence that is teased out by the poet addressing other cravings.
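By way of a minimal sketch (Python, with invented outlet names and coordinates purely for illustration), that kind of data-crunching “intelligence” amounts to little more than a nearest-neighbour lookup:

```python
import math

# A minimal sketch of the "mere data-crunching" intelligence described above:
# given a phone's GPS fix and a list of outlet coordinates, pick the nearest.
# The outlet names and coordinates below are invented for illustration.
def nearest_outlet(phone_lat: float, phone_lon: float,
                   outlets: dict[str, tuple[float, float]]) -> str:
    """Return the outlet closest to the phone by straight-line distance."""
    def distance(coords: tuple[float, float]) -> float:
        lat, lon = coords
        return math.hypot(lat - phone_lat, lon - phone_lon)
    return min(outlets, key=lambda name: distance(outlets[name]))

if __name__ == "__main__":
    outlets = {"Burger Bar": (51.507, -0.128), "Chip Shop": (51.510, -0.133)}
    print(nearest_outlet(51.509, -0.134, outlets))  # -> "Chip Shop"
```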

Google tells us that a church is, primarily, “a building used for public Christian worship”. Philip Larkin wraps one of his finest poems around a few more reflective definitions: “ . . . this cross of ground (that) held unspilt so long and equably what since is found only in separation – marriage, and birth, and death” and thus, a ground that is “proper to grow wise in, if only that so many dead lie round”.

While we can imagine machines replicating Larkin before monkeys do Shakespeare, who would bet that the intelligence needed to conceive of the poem in the first place, processing experience, emotion and imagination along with all that data, will ever be managed by AI? The wisdom of poetry does not lie in computation.

Getting inside heads is getting to be Big Business

Stories are hitting the media every day now on the subject of science monitoring brain activity in order to influence human thought and action. Two stories in particular today feature ways in which popular awareness of brain science is becoming more pervasive: a short TED Talk in Vancouver in March that demonstrated how one person’s brain can move another person’s arm, and this Reuters report on research by SharpBrains into an explosion in the number of neurotechnology patents.

From brain scans in pursuit of consumer research data, through means of alleviating depression or enhancing vision, to non-medical applications in the worlds of video gaming and home entertainment: we are getting far more interested in the relationship between what’s happening inside our heads and how that can influence, or be influenced by, what’s going on in the external world. It appears almost as if there’s a neuro-equivalent of Moore’s Law at work, except that the number of US patents for products characterised as “neurotechnological” is doubling every four years rather than every two. From about 400 patents annually a decade ago, the number had increased to 800 by 2010 and then doubled again by last year to 1,600. What drives the sense that a tsunami is developing is not just the bigger numbers but also the quality of the patents, measured by how many other patents reference them.
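As a back-of-the-envelope sketch (Python, purely for illustration), that doubling arithmetic is a simple exponential projection; the four-year doubling period and the starting figure of roughly 400 patents come from the report as summarised above, while the function itself is just one assumed way of modelling it:

```python
# A minimal sketch of the doubling arithmetic described above: assuming a
# steady four-year doubling period, project an annual patent count forward.
def projected_patents(start_count: float, years_elapsed: float,
                      doubling_period: float = 4.0) -> float:
    """Exponential projection: the count doubles once per doubling_period."""
    return start_count * 2 ** (years_elapsed / doubling_period)

if __name__ == "__main__":
    # Roughly 400 patents a year a decade ago, 800 by 2010, 1,600 last year.
    for years in (0, 4, 8):
        print(f"year +{years}: ~{projected_patents(400, years):.0f} patents")
```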

As ever in the history of human innovation, a key challenge will lie in distinguishing between patents that empower people to live better lives, as against those that essentially facilitate the control of people by other people.

Singularity by design or by default?

One of the most fascinating aspects of our journey towards The Singularity is that there is nothing singular about the journey itself. Consider this week’s small story announcing the withdrawal of Ford from developing smart technology for sensing heart difficulties in car users; it seems that wearable technologies are outpacing the diagnostic intelligence of in-car devices. The market speaks.

We may be able to conceive of a singular moment when the capacity of machine-driven intelligence exceeds the intelligence of our species. What is most obvious from our current perspective is that neither are we progressing along a single path in reaching that point, nor is there any unique focusing of will in securing it. Indeed, much of what emanates from our collective passage towards this new dawn is characterised by dissonance, contrariness, misaligned objectives, overlapping loyalties, compromises and venality, market competition, the vagaries of chance and, naturally, the usual human soup of nobility, greed, insecurity and love.

And if we think that all this can be orchestrated to some common, if not higher, purpose then clearly we have not been paying attention. Kant’s “crooked timber of humanity” will always create semblances of order out of chaos and these may, with something like the luck leavened with the occasional insight that created them, survive for 20 minutes, if not quite for a thousand years. But as machines gain potency beyond merely achieving their programmed objectives, an ironic outcome may be a diminishing impact of chaos theory on human affairs.

The question for now, but not for much longer, is how highly we value predictability if the currency of its acquisition is a loss of control.

Ethics and brain stimulation

Buried in today’s news is a story, fascinating on several levels, about a research team at the University of North Carolina School of Medicine who have provided evidence that electrical stimulation of the brain’s natural alpha-wave oscillations can affect creative thinking. Their study suggested a boost in creativity of 7.4% in healthy adults, as measured by the Torrance Test of Creative Thinking, one of a small number of industry-standard tests for creativity.

While it is the boost to creativity that attracts the headline interest, it appears that the short-term applications of this science will focus more on relieving depression and other psychological problems where a non-invasive and relatively inexpensive treatment such as alpha oscillation enhancement may prove useful. Of course, the research team leader acknowledges the interest the work will kindle among people aiming to boost their creativity, but he counsels caution. He cites longer-term safety concerns with a protocol that is still in its early stages, which seems fair enough; but he adds that he also has “ . . . strong ethical concerns about cognitive enhancement for healthy adults, just as sports fans might have concerns about athletic enhancement through the use of performance-enhancing drugs.”

This is interesting. We have always worried about potential harm in using chemical boosters of all sorts, but this has not spilled over into marking down achievements that have been wind-assisted by waftings from magic mushrooms. A de Kooning painting is what it is, as even more gloriously is Sgt Pepper. And the comparison with sport is false: unlike the zero-sum game of a contest, where one competitor’s success comes at an opponent’s expense, the boosting of creativity reflects a more nuanced balance between the creator and a wider creation.