Paperclips or thumbtacks: the perils of SAI changing its mind

Paranoia and the blogosphere go together like a horse and carriage, nowhere more so than on the subjects of nasty government or the dangers of AI. Aspiring eye-swivellers can replace that “or” with “and” and while away many a happy morning reading up on Jade Helm. Saner people will delight in happening across a blogger such as Sweden’s Olle Häggström, whose most recent post offers a thoughtful commentary on Nick Bostrom’s Superintelligence, providing some useful links along the way, as well as an intriguing footnote on “goal-content integrity”.

More than just a thought experiment, the possibility that an AI, on its journey to artificial superintelligence, might change its mind is a cornerstone of anxiety over humanity’s future. That said, as thought experiments go, it is an existential doozy. Playing with it can set out, as Bostrom does, with speculating on an AI designed to optimise the production of paperclips, filling the universe with paperclips converted from anything that once contained carbon . . . such as people. But might that SAI conceivably change its mind instead and set about producing thumbtacks? Possibly not – not if it had been programmed to produce paperclips.

Goal-content integrity can get trickier when the optimisation target is conceptually fuzzier than a paperclip: for example, a human soul. Imagine our SAI setting out to maximise the number of human beings who get into Heaven, only to discover – if only for the purposes of this thought experiment – that Heaven was always itself a human construct created to optimise our good behaviour. Might the SAI have an epiphany? “Ah,” it might think: “if Heaven were just a means to an end, why not optimise the conditions that encourage the best behaviour in the most people in the first place?”

