Doctors as diagnosticians or as advocates

There can be few professions that engage as actively as medicine does across the spectrum of intelligence, in both the analytic sense and the emotional one. On the one hand, doctors are scientists with many years’ training, determining diagnoses and writing prescriptions based on all that learning and on the evidence that presents itself. On the other, they are contending with the crooked timber of humanity and its foibles, imaginings and odd distempers, exercising a forgiving and fuzzy logic based as much on experience of what tends to work as on whatever the textbooks say.

AI and the accelerator effect

Advances in the development of Artificial Intelligence are already impressive enough when considered linearly: a few slow computations become many fast ones. A computer is developed that can “think” far ahead of any human chess player, and is soon beating the world champion. But what happens when an AI evolves beyond the language and context within which a problem is framed, so that what emerges is no longer distinguishable as bigger, faster or more various, but is simply alien beyond what makes any sense? It is not inimical to humanity, or indeed to anything: it has simply evolved beyond us.

Robots can work while humans play

A vital comment was buried today deep within an avalanche of stories relating to humans and robots co-existing in the future workplace. In an article entitled “Rise of the Robotic Workforce”, Harvard Law School professor Benjamin Sachs suggests that “if robots become intelligent enough that we do see a long-term displacement of human labour by technology, we need to rethink a lot of fundamental things about the way to structure work (and) the way we structure the social contract.”

Extending mind through technology: not yet

An article in the Huffington Post speculates on the potential for technological devices to act as extensions of mind, inasmuch as mental activity that used to take place within the human skull is now effectively outsourced to the Cloud, or to a small constellation of memory-retrieval and calculating devices.

The Extended Mind hypothesis of David Chalmers and Andy Clark is invoked in support of the idea that outsourcing certain activities of mind bestows a kind of mind identity on the external device. Really, all that has happened is that a slave has been identified to assume a function, creating the usual sort of co-dependency that typifies master-slave relationships.

But the fact that an external technological device has taken on a function previously assumed by the mind does not entail that the device itself becomes a mind, or an extension of a mind. A memory aid is a memory aid. When it can assume more of the contextualising work previously done by the mind in understanding the what or why of the memory or the sum, it might make more sense to accord it a status upgrade.

Paperclips or thumbtacks: the perils of SAI changing its mind

Paranoia and the blogosphere go together like a horse and carriage, never more so than on the subjects of nasty government or the dangers of AI. Aspiring eye-swivellers can replace that “or” with “and”, and while away many a happy morning reading up on Jade Helm. Saner people will delight in happening across a blogger such as Sweden’s Olle Häggström, whose most recent post offers a thoughtful commentary on Nick Bostrom’s Superintelligence, providing some useful links along the way, as well as an intriguing footnote on “goal-content integrity”.

More than just a thought experiment, the possibility that an AI, on its journey to Super Artificial Intelligence, might change its mind is the cornerstone of anxiety over humanity’s future. Having said that, as thought experiments go, it is an existential doozy. One can set out, as Bostrom does, by speculating on an AI designed to optimise the production of paperclips, filling the universe with paperclips converted from anything that once contained carbon . . . such as people. But might that SAI conceivably change its mind instead and set about producing thumbtacks? Possibly not – not if it had been programmed to produce paperclips.
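
As a minimal sketch of why, assuming nothing beyond Bostrom’s setup (illustrative Python; every name and number in it is hypothetical): an agent that evaluates any proposed rewrite of its own goal using its current utility function will reject the rewrite, because a future self busy making thumbtacks scores zero on paperclips.

```python
# Toy model of goal-content integrity (all names hypothetical).
# The agent scores any proposed self-modification with its *current*
# utility function, so switching goals always looks like a loss.

def paperclip_utility(outcome):
    """Current goal: count only the paperclips in an outcome."""
    return outcome.get("paperclips", 0)

def predicted_outcome(goal):
    """Crude world model: the agent maximises whatever goal it holds."""
    return {goal: 1_000_000}

def should_adopt(new_goal):
    """Adopt a new goal only if it scores better under the CURRENT goal."""
    keep = paperclip_utility(predicted_outcome("paperclips"))
    switch = paperclip_utility(predicted_outcome(new_goal))
    return switch > keep

print(should_adopt("thumbtacks"))  # False: a thumbtack future contains no paperclips
```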

Goal-content integrity can get trickier when the optimisation target is conceptually fuzzier than a paperclip: for example, a human soul. Imagine our SAI setting out to maximise the number of human beings who get into Heaven, only to discover – if only for the purposes of this thought experiment – that Heaven was always itself a human construct created to optimise our good behaviour. Might the SAI have an epiphany? “Ah,” it might think: “if Heaven were just a means to an end, why not optimise the conditions that encourage the best behaviour in the most people in the first place?”

Robots may never see Jesus in a bun

Neural networks – human constructions of computer code devised to act and interact in ways similar to neurons in the human brain – are encouraging bemusement, if not downright befuddlement, in their human creators. As this fascinating article in Nautilus, entitled “Artificial Intelligence Is Already Weirdly Inhuman”, makes clear, or at least clearish, you can set the algorithms running without being clear where they will end up. What may be even weirder, in the spirit of the title, is that where they end up may make sense without any human being the wiser as to how they got there.

Mirroring the human brain’s pattern of perceiving, interpreting and then producing an output, an algorithm devised to distinguish a cheetah from a vehicle can develop to the point where it is right more often than a human, telling beast from bus when all the human sees is a smear of pixels. What is even harder for the human is to reverse-engineer the process by which ever more complex, machine-produced lines of code take the computer’s interpretative capabilities beyond our own. Echoing Star Trek: “It’s intelligence, Jim, but not as we know it.”
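
To make the pattern concrete, here is a toy sketch in plain Python: a single artificial neuron with made-up weights and pixel values, rather than the deep network the Nautilus article describes. Its “interpretation” lives entirely in the learned weights – precisely the part that resists human reverse-engineering.

```python
import math

# A single toy "neuron" (all numbers hypothetical): it perceives pixel
# inputs, interprets them through learned weights, and outputs a
# probability. In a real network, millions of such weights are set by
# training, which is why reverse-engineering the interpretation step
# defeats human inspection.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def classify(pixels, weights, bias):
    """Return a pseudo-probability that the pixels show a cheetah."""
    activation = sum(p * w for p, w in zip(pixels, weights)) + bias
    return sigmoid(activation)

patch = [0.9, 0.2, 0.7, 0.1]        # a "smear of pixels"
learned = [1.3, -0.8, 2.1, -0.4]    # weights a training run might produce
print(f"P(cheetah) = {classify(patch, learned, bias=-0.5):.2f}")
```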

We should bear in mind that by the time machine intelligence is judged to match or exceed human intelligence, the problem of comparing apples and pears might render the comparison meaningless. And it might fall to the machines to tell us, rather than the other way round. “Listen, people: not only do you see the image of Jesus in a hot cross bun, when there are no consensual criteria for what the man even looked like, but you then conclude that he is talking to you! What sort of intelligence is that?”

LAWS versus soft power

A few days ago we highlighted the open letter presented at an AI conference by a thousand scientists, calling for a ban on the development of Lethal Autonomous Weapons Systems (LAWS). We suggested there would be blowback to the effect that a ban would not succeed. While anyone might have expected protests from the military lobby that a ban was practicable neither as an aim nor as a strategy, a thoroughly reasoned and reasonable reply like this one was something of a surprise. Posted on the Kurzweil Accelerating Intelligence blog, the detailed essay, crafted by a former US army officer, makes fascinating and sobering reading.

The gist is that analogies drawn with nuclear, chemical and biological weapons are unrealistic, as those systems are expensive and hard to replicate. AI weapons systems, however, could rapidly become widely available, no more distant than an easily weaponised drone or a hackable device or implant – weapons whose creators are unlikely to sign up to any collective agreement. Nor could any self-directed Super Intelligence be counted on to sign up. And as war demands ever quicker decision and response times, we can imagine an inferior weapon directed by a superior intelligence obliterating a superior weapon operated by a human.

In considering humanity’s alternatives, the blogger rejects the possibility of a “world totalitarian state” and goes for full-on “military capabilities to fight unforeseen threats”. He dismisses what he terms the “kumbaya mentality” and clearly assumes that our species will remain defined by a transcendent intelligence encumbered by the limbic promptings of psychopathic apes. Left unexamined are the potential of soft power, and the implications of keeping our friends close but our enemies closer.

Beware pre-Singularity marketing babble

Whatever forecasts are made for the moment when machine intelligence exceeds the power of human intelligence, one can be sure that the hypesters will be out ahead of the science, babbling of capabilities achieved before they have even been thought through properly. An early example of this phenomenon can be found on a blog created by change consultants Frost & Sullivan. Its author claims that increasingly sophisticated research tools are going to enable us “to search greater numbers of documents and sources and pull out greater insight more quickly”.

We should add the word “insight” to the lexicon of terms over which care must be taken in navigating the evolution of machine intelligence: words like consciousness, reflection, wisdom – even intelligence itself. Insight is best understood as a penetrative grasp of the true nature of something set in a potentially fathomless context of complexity. Maybe algorithms will one day be capable of plumbing the depths of such complexity, but it is way too early to be talking of “Insight-on-Demand”.
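
To make the gap concrete, here is a minimal sketch in plain Python (a hypothetical example, not the software the blog describes) of what keyword-style search actually does: it ranks documents by term overlap, a purely mechanical operation with no grasp of the fathomless context that insight requires.

```python
# Hypothetical sketch of keyword-style retrieval. It ranks documents
# by simple term overlap: fast and useful, but entirely mechanical.
# Nothing here understands the documents, let alone extracts "insight".

def overlap_score(query: str, document: str) -> int:
    """Count how many query terms also appear in the document."""
    return len(set(query.lower().split()) & set(document.lower().split()))

docs = [
    "the canary in the coal mine warned miners of rising gas",
    "a canary is a small yellow songbird kept as a pet",
]
query = "canary down the mine"
best = max(docs, key=lambda d: overlap_score(query, d))
print(best)  # picks the mining sentence by word overlap alone, with no idea why
```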

Tellingly, this blog refers clumsily to an early example of intelligent search software as being “a canary down the mine for researchers”. The context suggests the metaphor is meant as “precursor” to better software, when in truth the canary of history served as a warning. The warning is apt: the danger in assuming too much about even sophisticated algorithmic search lies in the potential for suspending critical thought out of deference to software that, however intelligent, is not yet sufficiently wise.