Reaction to risk: rationality or Rapture?

A thoughtful review on the website of the Institute for Ethics & Emerging Technologies (IEET) considers Phil Torres’s recently published book, “The End: What Science and Religion Tell Us About the Apocalypse”. A useful distinction is drawn between religious and secular eschatology. The latter considers threats to the planet and to humanity from a rational, evidence-based perspective, treating phenomena such as nuclear war, bio-engineered pandemics and a malign Superintelligence as things to be avoided.

Religious eschatology, on the other hand, might involve all, some or none of these risks, alongside other threats great and small. But the key lies not in the actual risks, which are seen only as means to a greater end: the End Times itself is the point of “God’s coming judgement and destruction”. Evidence-based rationality features somewhat less in deliberations on this side of the nut-house wall. What is made clear here is that, facing the prospect of humanity’s extinction, the cry of those anticipating their delivery into Eternal Life is “Bring It On”. These are not the people we want anywhere near the nuclear codes, or indeed any seats of influence.

Responsible citizens of Spaceship Earth will want to keep in touch with what Torres calls the secular eschatologists, working at places like the Centre for the Study of Existential Risk at the University of Cambridge; Nick Bostrom’s Future of Humanity Institute at the University of Oxford; the Future of Life Institute in Cambridge, Massachusetts; and the geographically decentralised Global Catastrophic Risk Institute. Their mission is to safeguard the prospect that a second, non-cataclysmic Big Bang might deliver a benign Superintelligence rather than a stellar void of wasted talent, lost opportunities and silence.

Oy Tay, our little AIs need better parenting

Microsoft’s recent foray into that dark part of humanity’s forest where The Trolls live served up a stark lesson last week in what it means to build an AI bot with help from the community on whom the bot’s personality is based. In fact, Bot Tay has taught us lessons, plural: first, as Microsoft watched their Little Red Riding Hood bot wander into the Internet forest and emerge less than 24 hours later gibbering like some red-eyed Nazi Godzilla, the company acquired a greater sensitivity to context. And second, the rest of us have more respect now for the ethical deliberations that will be necessary in building AIs that don’t turn on their creators. Yes: inoculate AIs with controlled doses of Trolls’ Disease, but don’t let them believe that Trolls R Us.

Of the tsunami of web commentary inspired by the chastening of Microsoft, two of the best articles appeared in The Daily Beast, irreverent as ever (the poor innocent bot “was never told that just one robot cigarette can lead to robot heroin addiction, and it cost her her stupid robot life”), and in Forbes, which made the same sort of point in a somewhat more circumspect way. Street smarts are about more than bits of data, and emotional intelligence will find its machine-learning equivalent when bots can accrue impressions from their early experiences of deceit, ignorance, venality and hypocrisy, and explain what these mean in the context of a whole that is greater than the sum of its parts.

Tay was thrown to the wolves. “Her” human equivalent will have been inoculated against the terrors of the dark forest by a thousand bedtime stories finished off with a thousand hugs, and each night the light of reason left on in the hall outside. And as any of us who have participated in that ancient ritual will know, even then . . .

Hinton after AlphaGo reflects on Big Numbers

Among the various commentaries on the AlphaGo victory earlier this month, a real coup was registered by Canada’s Maclean’s Magazine in securing an interview with Geoffrey Hinton. Generally regarded as the godfather of deep learning, Hinton is especially well-placed to see the AlphaGo victory as an advance for neural networks — adaptive learning systems that mimic the working of the human brain in spotting patterns, intuiting rules, and projecting conclusions far beyond the powers of standard computer programming.

Once established, a neural network effectively practices with and “thinks” for itself. It’s rather like the baby that seems to be lying there, absently playing with its toes: it’s actually working out how language works. Once the DeepMind programmers had established AlphaGo’s grasp of the rules and strategies of the ancient board game, it could be left to work its way through the trillions of move permutations. It is said that by the time AlphaGo sat down with human champion Lee Sedol, it had played over 30 million games, mostly against itself. The Korean is estimated to have played some few thousand games in his 33 years. It is amazing he won even one game of the five.
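
For readers who want the self-play idea in concrete form, here is a minimal toy sketch in Python. To be clear, this is not DeepMind’s method (AlphaGo combined deep neural networks with Monte Carlo tree search); it is the same principle at nursery-school scale: a program given only the rules of noughts and crosses that improves a simple value table purely by playing against itself.

```python
import random

# The eight winning lines of a 3x3 board, cells indexed 0-8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return "X" or "O" if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

values = {}  # state -> estimated probability that "X" wins from that state

def value(board):
    return values.setdefault("".join(board), 0.5)  # unseen states start at 50/50

def play_one_game(epsilon=0.1):
    """One self-play game: both sides consult the same value table."""
    board, player, history = ["."] * 9, "X", []
    while True:
        moves = [i for i, cell in enumerate(board) if cell == "."]
        if not moves:
            return history, 0.5  # board full, nobody won: a draw
        if random.random() < epsilon:  # occasionally explore a random move
            move = random.choice(moves)
        else:  # otherwise exploit: X seeks high-value states, O low-value ones
            def after(m):
                board[m] = player
                v = value(board)
                board[m] = "."
                return v
            move = max(moves, key=after) if player == "X" else min(moves, key=after)
        board[move] = player
        history.append("".join(board))
        if winner(board):
            return history, 1.0 if player == "X" else 0.0
        player = "O" if player == "X" else "X"

def train(n_games=20000, lr=0.2):
    """Play n_games against itself, nudging visited states toward outcomes."""
    for _ in range(n_games):
        history, outcome = play_one_game()
        for state in history:
            v = values.setdefault(state, 0.5)
            values[state] = v + lr * (outcome - v)

train()
print(f"states evaluated: {len(values)}")
```

Swap the lookup table for deep networks, add tree search, and multiply the games by several orders of magnitude, and you have the spirit of AlphaGo’s 30 million.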

But Hinton goes deeper into the numbers and, along the way, shows how neural networks of both the human and the machine variety are wondrous phenomena in their different ways. Evolution has enabled our brains to develop a much vaster universe of connections, or synapses, within their neural networks of cells, outnumbering the machine equivalent by a factor of a million. And AlphaGo’s computations would have consumed hundreds of kilowatts of power, as against Sedol’s getting by on 30 watts, just about what it takes to power a lightbulb.
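
Those ratios are worth a back-of-the-envelope restatement. The figures below are loose, order-of-magnitude assumptions paraphrased from the interview (nobody has published AlphaGo’s exact power bill), but they make the scale of the gap plain:

```python
# Rough, order-of-magnitude restatement of Hinton's comparison.
# All figures are assumptions, not measurements.
human_synapses = 1e14                       # ~100 trillion synapses in a human brain
machine_connections = human_synapses / 1e6  # "outnumbering ... by a factor of a million"

alphago_power_w = 2e5                       # "hundreds of kilowatts": call it 200 kW
sedol_power_w = 30                          # a brain runs on roughly one lightbulb

print(f"machine connections: ~{machine_connections:.0e}")                          # ~1e+08
print(f"power ratio (AlphaGo / Sedol): ~{alphago_power_w / sedol_power_w:,.0f}x")  # ~6,667x
```

On those assumptions, Sedol was competing on something like one seven-thousandth of his opponent’s power budget, which is Hinton’s point.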

So the potential in consolidating the best of both types of neural network into one superintelligent thinking system hints at a future far beyond our current imaginings.

Less kvetching, more asking “So what?”

An article at TechRepublic, “The 7 biggest myths about artificial intelligence”, prompts several questions, from the grumpy (“How clumsy can listicles get? Whaddya mean ‘biggest’!?”) to the more compelling and profound. Among the latter, we might ask: do we spend too much energy asking whether robots are going to take our jobs; whether AI can be intelligent without possessing consciousness; and, most hubristic of all, whether AI could replace God? Meanwhile, little attention goes to asking “So what?”

Not “so what” in the sense of “mehh, boring” or “suck it up, dude, it’s inevitable”, but in the spirit of “so what can be done to benefit from the incoming tide that can’t be turned back?” Are we so confident in the perfection of human existence that there is no point in looking at how wealth is created and its benefits distributed; in asking of any outcome whether its genesis is compromised if its process was not self-aware; or in re-examining the presumption of religions to represent some spiritual endpoint?

This is about more than fatalist kvetching or post-rationalising the genies’ irrevocable escape from their bottles. Curiosity in both the sciences and the arts is often predicated on contemplating the previously unthinkable. Consciously thanking God for the fruits of our labours is not, in its parts or as a whole, an absolute or historically necessary alchemy in any species’ pursuit of existence. It’s just the way things evolved for us, and, like every other feature of our intelligence, it will continue to evolve.

Could genius be genetically engineered?

A fascinating discussion is bound to develop around this topic when your chosen panel includes psychologist Steven Pinker and two professorial colleagues, Dalton Conley (social sciences) and Stephen Hsu (theoretical physics). The resulting video, filmed last week at New York City’s 92nd Street Y, would have been more streamlined than its hour-plus running time had the moderator actually moderated rather than using the panel as a skittles run for his own theorising, but some key reflections couldn’t help bursting through.

Even the specialist panellists have been surprised at the pace of the science over the last few years, and at the resulting wider accessibility of gene-editing technology as the knowledge spreads and costs plummet. If nature has won the nature-versus-nurture debate, and if any feature of a living organism with a genetic foundation can be engineered to enhance or diminish the effect of the gene or genes involved, then the answer to the headline question would appear to be yes, even if it is not happening yet. More pressing than “could it?” are the questions that belong to ethics and values rather than to data and science: “will it?”, “by whom?”, and “to what purpose?”

How we manipulate both nature and nurture to enhance intelligence is familiar to anyone who has chosen a smarter mate, a better school or a healthier lifestyle. In weighing genetic engineering as just another such choice, we will apply the same filters: benefit against risk, fairness, accessibility, and the opportunity cost to our species of declining a significant opportunity if and when it presents itself. Would we hobble ourselves by declining, if the price were our own extinction?

Trump begs a focus on our applications of intelligence

Let’s move for a moment beyond the evolving fascinations of the worlds of brain science and artificial intelligence, the talk of transhumanism, neural implants, machine learning, the future of robots and so on. Let’s rise above the quotidian world of Harold Macmillan’s events and John Lennon’s life and ask a fundamental question: “What are the implications for the IQ of our species if Donald Trump were to be elected President of the United States?”