Tag Archives: Mind

Intelligence does not grow in a petri dish

As there are neither agreed rules nor a generally accepted definition of intelligence, nor any consensus on which aspects of human or machine behaviour betoken intelligence, natural or artificial, it will be difficult to measure the crossing of any point of singularity at which machine intelligence matches and then exceeds our own.

It may well prove to be the case that, when we think we have got there, we will have supreme exemplars on both sides of the bio/non-biological intelligence divide asking us if it any longer matters. And as our species approaches the moment of truth that may never obviously arrive, there will be a growing chorus of voices wondering whether a bigger question than the definition of intelligence is the definition of the good human, when so much of what we might see as intelligence in its natural state is perverted in the course of action by the festering agency of the seven deadly sins, animated by fear and enabled by ignorance.

Given the wide range of environments within which intelligence can reveal itself, and the vast spectrum of actions and behaviours that emerge within those environments, it may be the very definition of the fool’s errand to attempt an anywhere anytime definition of intelligence itself. We can learn only so much by laboratory-based comparisons of brains and computers, for example, balancing physiological correlations in the one with mechanistic causations in the other.

Glimmerings of clarity emerge only when one intelligent agent is pitched against another in a task-oriented setting, the victory of either one being equated with some sense of superior intelligence, when all that has really happened is that an explicitly oriented task is better addressed when its parameters can be clearly articulated.

What appears to distinguish human intelligence in the evolutionary sense is the capability to adapt not only in the face of threats and existential fear, but in anticipation of imagined projections of all manner of dangers and terrors. We hone our intelligence in facing down multiple threats; we achieve wisdom by facing down the fear that comes with being human.

Fear is not innate to the machine but it is to us, as Franklin D Roosevelt understood. However machines progress to any singularity, humanity’s best bet lies in understanding how the conquering of fear will enhance our intelligence and our adaptive capabilities to evolve through the singularity, and beyond.

After the election: recovering from cancer and “the other stuff”

Following a newspaper feature by conservative commentator George Will, speculating that Donald Trump might serve in history as the American Republican Party’s “chemotherapy”, an oncologist writing in the online magazine Salon reminded his readers that people with experience of the various treatments available for cancer will recognise that chemotherapy is almost invariably a pretty tough gig.

The idea certainly provides a thought-provoking metaphor, however, not least because chemo seldom does much good for the cancer’s host: along the way it ravages the body and soul of the patient every bit as much as, and sometimes more than, it tackles the cancer itself. But perhaps the point of the metaphor is to set up the credible claim that the aftermath of the election, given even a modestly clear win for Clinton, will enable the GOP to survive and carry on with the bromide that it was Trump’s noxious temperament that lost it for them, but that the policies themselves were sound.

This would be a false premise. Not only are the GOP’s policies, broadly considered, not sound, but they have consolidated their appeal over several decades among a now noticeably declining voter demographic largely comprising angry and less well-educated white males. But it is not in fact the policies (on either side) that have defined the leitmotif of this election so much as the degenerate, juvenile, and poisonous atmosphere that has evolved around any articulation or community discussion of those policies.

Messages have been subverted by the increasing puerility of the mediums, or media, and people have quite simply and very largely been repelled by the whole stinking and thoroughly demeaning process.

In looking beyond the result of the election to its aftermath – potential armed unrest, possible litigation from losing candidates, lingering and truly cancerous rancour eating into the body politic of Washington culture for many years to come – it is useful to stick with the chemotherapy metaphor in examining several key themes that might have attracted the media’s attention over the course of a horribly protracted campaign, but did not. For the fact remains that whatever causes a cancer exists independently of any therapy; the cancer can metastasise; and while the quality of life can become increasingly uncertain as life carries on, the patient is still obliged to “get on with other stuff” as the cancer and/or the treatment progresses.

If today’s news is telling us that “one fifth of cancer patients (in the UK) face workplace discrimination”, what will tomorrow’s news tell us about the future prospects of the American body politic given the enervating drain upon its vitality by two years of a news cycle dominated by one deeply flawed, if not downright tumorous, human being? What is the “other stuff” that the world will have to be getting on with while it deals with the aftermath of the 2016 American election: addressing not only the tumour but the conditions that caused it and the likelihood of any metastasis?

The University of Cambridge’s Centre for the Study of Existential Risk (CSER) has a page on its website devoted to what it sees as the major risks to humanity, defined as such by their threat to the future of the human species and categorised in four broad groupings. Only one of these categories might with some generosity be seen as having been addressed over the course of the election campaign.

It would still be something of a stretch of the imagination to articulate how Donald Trump established any policy position on “Systemic risks and fragile networks”, which CSER defines as the tensions emerging between growing populations, pressures on resources and supply chains, and the technologies that are arising to address the challenges of a global eco-system increasingly defined by its interconnectedness. Trump would point out the systemic shortcomings of American trade negotiators historically unburdened by his vision and experience. As the candidate who actually possesses knowledge and experience of the nuances in balancing risk and reward in this area, Hillary Clinton at least had her views constantly at the ready whenever the media tired of asking her about her emails.

Heading the CSER list of existential risks, and often cited by scientists, futurists and politicians as the greatest risk now afflicting the planet, is what CSER terms “Extreme risk and the global environment” – known colloquially as climate change. Whatever the consensus among people who actually know what they are talking about, a significant proportion of the broader American public is disinclined to recognise that this problem even exists. The tools of evidence and critical thinking being largely Greek to this wider population, the American media clearly felt the whole subject too recondite to risk engaging the science-deniers in a language they couldn’t understand. Trump certainly couldn’t, and the media largely gave him a pass on this.

Most remarkably, the other two categories of risk on the CSER website were virtual no-go areas for both presidential campaigns and for the media, whose task it might have been to interrogate them if only they had the slightest inkling of the exponential pace of change that will define humanity’s progress in the coming years of the 45th president’s term of office. At some stretch, consideration of the “Risks from biology: natural and engineered” might be seen to feature in the work of Planned Parenthood and its vital efforts in the areas of female public health and human reproduction. But here Trump was in thrall to the fruitcake wing of the Republican party and, as this was one of the few areas in which candidate and party were in lockstep agreement, he was happy to blunder into embarrassing policy positions that were consistently and constantly undermined by Clinton’s expertise, her experience, her commitment to the cause and, finally, by simple and understandable gender solidarity.

Given the gap between the candidates on issues of female biology – not to mention the publicity given to Trump’s history of obsession with female sexuality stopping well short of the time that reproduction becomes an issue – this was possibly the area of policy discussion that has left the progressive media nonplussed that this election could ever have been run so close. In any case, the wider issues of existential risk and benefits relating to genetics, synthetic biology, global pandemics, and antibiotic resistance scarcely got a look-in over the course of the election’s somewhat onanistic “news cycle”.

Most tellingly, Artificial Intelligence hasn’t featured very much at all in this election. This is especially alarming given the final summary sentence on the CSER website section that addresses this particular area of risk: “With the level of power, autonomy, and generality of AI expected to increase in coming years and decades, forward planning and research to avoid unexpected catastrophic consequences is essential.” The silence has been deafening.

For all the speculation about what so-called Super Artificial Intelligence may mean some decades hence at the point of the “Singularity” — the thought-experimented point where machine/computer intelligence matches and then exponentially speeds past the capabilities of human intelligence — the real story now, in 2016, is almost as startling as it is inspiring.

In this year when “human” intelligence is grappling feverishly with a presidential choice between one candidate who has been careless with her email and another who is a self-professed sex pest and the most dangerous sort of conman (simultaneously large on attitude but bereft of a clue), this year alone has seen considerable advances in the capabilities of Artificial Intelligence, both for worse and for better.

The downsides include the possible misuse of private and commercial data, the increasing potential for fraud, and the threat of AI-directed/autonomous weapons systems. The upsides include faster and more efficient medical research, advances in virtual and augmented reality, safer cities through self-driving vehicles and infinitely more detailed intelligence-gathering on the workings of biology, chemistry, physics, and cosmology. In short, the wider universe is opening before our wondering eyes.

What is worrying amidst this quickening pace of AI technology is that the sort of circumspection we see articulated in media articles like this recent piece in TechCrunch is, first of all, not being reflected in wider public discussions incited by the American election. Second, there is no evidence that more frequent calls for ethical reflection on the challenges of AI might see progress in the ethical sphere keeping pace with developments in the AI science. This prompts at least three pretty obvious questions.

On the longer time horizon, as we contemplate a possible Singularity, what do we imagine that an emerging and self-conscious SuperAI might make of its human progenitor? If we have filled the intervening decades with steadfast ignoring of our existential threats, ever complacent about the real and enduring achievements of human imagination, and yet determined to elect our future leaders according to the bottom-feeding precedents suppurating forth from this week’s debasement of democracy, could any intelligence – human or “artificial” – be surprised if the post-Singularity machine should decide that man and monkey might share a cage?

In the medium-term, we might galvanise an appropriate response to the above paragraph by imagining what progress we might make over the next four years, given what has happened just over the course of 2016. Will the wise heads of 2020 be looking at that year’s American election in prospect and wondering how much more deliberation will be inspired by the questions so woefully ignored this year?

Specifically, will humanity have come to grips with the technological and ethical issues associated with the increasing pace of AI development? Will it craft its appreciation of that year’s slate of candidates on the basis of more intelligent policy positions on support for technology, on education in the sciences, and on the absolute necessity for our species to evolve beyond its biases and primal fears, applying critical thinking and greater circumspection as we prise open a deeper understanding of our relationship with the cosmos we look set to join?

Which brings us to the short-term question: if we are to attain the promontories of wisdom implicit in addressing those challenges of the medium term, what do we have to start doing next week, next month, and throughout 2017? If we are to overcome the toxic and cancerous experiences of 2016, what are the fundamentals among “the other stuff” that we will need to address? What must we do to ensure that 2020 finds us clear-sighted and focused on the distant Singularity as a point of possibly quantum enhancement of human potential, rather than a humiliating relegation to the monkey cage?

By no means a comprehensive wish-list, nor even sufficient in themselves to guarantee the progress of our species to that quantum gate, these twin areas of focus are proposed as at least necessary subjects for reflection, given the impact of their collective absence over these last unnecessarily anxious and ghastly 18 months.

First, keep it real: celebrate intelligence. We must not surrender to the pornography of simulation. Cyberspace has echoed with the cries of commentators decrying the ascendance of reality television over the dynamics of real life. The CBS CEO who admitted that the Trump campaign may not be good for America, but is “damn good for CBS”, might prompt a useful debate on what the media are for. And he would not say it if people were not watching, so another debate is needed on how to encourage people to keep up with scientific progress as much as they keep up with the Kardashians. We need more public respect for science and for the primacy of evidence, and less indulgence of bias and political determinations driven by faith.

And as a sub-text to the promotion of intelligence, the organisers of presidential campaigns might reflect upon their role as custodians of the democratic process when they consider how best, and for how long, the 2020 campaign might proceed. Is an adversarial and protracted bear-baiting marathon an optimal way of revealing the candidates’ strengths and educating the public, or is it okay that it’s deemed to be damn good for the boss of CBS?

Finally, the American Republican Party is in need of a re-boot. To finish where we set out, with a thought for what might be good for what ails it should trumped-up chemotherapy fail: is the party clear on its voter demographic’s direction of travel for the next four years, given what’s going on in the world? The same question applies to any government that would profit from enduring xenophobia or from exploiting atavistic bias and resolute ignorance. There is only so much to be gained by gerrymandering and pandering to inchoate fears, and no credit at all in impugning the very authority to which the cynical seek election.

And there is absolutely no glory in taking countries back, or “making them great again”. Humanity reaches out, it moves forward, and looks up.

Ha ha bloody ha, AI is getting into scary

A feature on Motherboard (and available from quite a few sites on this particularly frightening day in the calendar) informs readers that “MIT is teaching AI to Scare Us”. Well that’s just great. Anyone insufficiently nervous anyway about the potential perils of AI itself, or not already rendered catatonic in anxiety over the conniptions of the American election, can go onto the specially confected Nightmare Machine website and consult a specially prepared timeline that advances from the Celtic stirrings of Hallowe’en two millennia ago to this very year in which AI-boosted “hell itself breathes out contagion to the world.”

The highlight – or murky darkfest – feature of the site is the interactive gallery of faces and places, concocted and continually refined by algorithms seeking to define the essence of scary. So much of what we sense about horror is rather like our sense of what it is that makes humour funny: it is less induced from core principles than deduced from whatever succeeds in eliciting the scream of terror or the burst of laughter. It cannot be a surprise, therefore, that the Nightmare Machine mission is proving perfect for machine learning to get its artificial fangs into. Website visitors rank the images for scariness and, the theory goes, the images get scarier.
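A minimal sketch of the rank-and-refine loop this implies might look something like the following; the names, the ratings scale and the selection rule are invented assumptions for illustration, not the Nightmare Machine’s actual code.

```python
import random

def visitor_rating(image):
    """Stand-in for a real visitor's 1-5 scariness rating of an image."""
    return random.randint(1, 5)

def run_round(pool, ratings_per_image=20, keep_fraction=0.5):
    """Collect ratings, rank the pool, and keep the scariest images
    to seed the next generation of candidates."""
    scores = {}
    for image in pool:
        votes = [visitor_rating(image) for _ in range(ratings_per_image)]
        scores[image] = sum(votes) / len(votes)
    ranked = sorted(pool, key=lambda img: scores[img], reverse=True)
    survivors = ranked[: max(1, int(len(ranked) * keep_fraction))]
    # In the real system the survivors would condition a generative model;
    # here we simply tag the next generation's parentage.
    next_pool = [f"{img}_v2" for img in survivors]
    return survivors, next_pool

pool = [f"haunted_face_{i}" for i in range(10)]
survivors, next_pool = run_round(pool)
print(survivors)
```

The point is the feedback loop rather than the particulars: each round of human fright ratings becomes the training signal for the next round of images.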

Another school of thought, reflected in articles like this piece in Salon on the creepy clown phenomenon, sees the fright not so much in what others find frightening as in what serves as a projection of our own internal terrors. The clowns and gargoyles that stalk the political landscape are to a large extent projections of ourselves: of our own deepest fears for the more empathetic among us, or as simple avenging avatars for the morally bereft or culturally dispossessed.

When AI moves beyond its current picture recognition capabilities into deeper areas of understanding our own inner fears and darkest thoughts, the ultimate fright will no longer lie in some collection of pixels. It will seep from the knowing look you get from your android doppelganger — to all intents and purposes you to the very life — as your watching friend asks, “Which you is actually you?” Your friend doesn’t know, but you know, and of course it knows . . . and it knows that you know that it knows . . .

How does consciousness evaluate itself?

If “writing about music is like dancing about architecture”, perhaps the attempt to reflect conclusively on consciousness is like the old picture of Baron Munchausen trying to pull himself out of a swamp by his own pigtail. Despite the usual carpings in the commentary whenever any serious thinking is done online (gosh, if only the author had consulted with me first . . .) an article in Aeon Magazine by cognitive robotics professor Murray Shanahan of London’s Imperial College makes some important distinctions between human consciousness and what he terms “conscious exotica”. The key question he poses is summed up in the sub-headline: “From algorithms to aliens, could humans ever understand minds that are radically unlike our own?”

It’s a great question, even without wondering how much more difficult such an understanding must be when it eludes most of us even in understanding minds very much like our own. Shanahan sets out from a working definition of intelligence as that which “measures an agent’s general ability to achieve goals in a wide range of environments”, from which we can infer a definition of consciousness as what it is when the measuring agent is the agent herself.
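That phrasing closely echoes Legg and Hutter’s “universal intelligence” measure; as a rough formal rendering (offered here as an aside, not something spelled out in the passage above), it can be written:

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}$$

where $E$ is a set of computable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$ (so simpler environments carry more weight), and $V^{\pi}_{\mu}$ is the expected cumulative reward the agent’s policy $\pi$ earns in $\mu$. Intelligence, on this reading, is simply performance summed across many environments, weighted towards the simple ones.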

From there, Shanahan works up a spectrum of consciousness ranging from awareness through self-awareness, to empathy for other people and on to integrated cognition, wondering along the way if the displayed symptoms of consciousness might disguise distinctions in the internal experience of consciousness between biological and non-biological beings. The jury will remain out on the latter until Super AI is upon us, but reflections on the evolution of biological consciousness prompt another thought on the process of evolution itself.

There is nothing absolute about human consciousness. We are where we are now: our ancient ancestors might have gawped uncomprehendingly at the messages in White Rabbit, Lucy in the Sky with Diamonds and the rest of them, but the doors that were opened by the 60s counterculture were less about means than about ends. Enhanced consciousness was shown to be possible, if not downright mind-blowing. We in our time can only gawp in wondrous anticipation of what future consciousness may tell us about all manner of things, including even and possibly especially dances about architecture.

“Teach me half the gladness / That thy brain must know, / Such harmonious madness / From my lips would flow / The world should listen then, as I am listening now.” — Shelley’s To a Skylark

Algorithms cannot really know you if you don’t

Count the number of times you notice stories about how the various engines of AI – the algorithms, the machine learning software burrowing ever deeper into the foibles of human behaviour – are getting “to know you better than you know yourself.” What started as variations on “You liked pepperoni pizza, here’s some more” evolved quickly into “People into pizza also love our chocolate ice cream” and on to special deals for discerning consumers of jazz, discounted National Trust membership, and deals on car insurance.

Emboldened as the human animal always is by early success, the algo authors were bound to make the small leap from bemusing purchasing correlations to more immodest claims. Hence the boast of the data scientist cited in the online journal Real Life, claiming that the consumer is now being captured “as a location in taste space”.
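A minimal sketch of the leap from “you liked pepperoni pizza” to “people into pizza also love our chocolate ice cream”, and of a shopper as a location in taste space, might look like this; the baskets and function names are invented for illustration, and real recommenders are vastly more elaborate.

```python
from collections import defaultdict

# Toy purchase histories, invented purely for illustration.
baskets = [
    {"pepperoni pizza", "chocolate ice cream"},
    {"pepperoni pizza", "chocolate ice cream", "cola"},
    {"jazz box set", "national trust membership"},
    {"pepperoni pizza", "cola"},
]

def cooccurrence(baskets):
    """Count how often each ordered pair of items appears in the same basket."""
    counts = defaultdict(int)
    for basket in baskets:
        for a in basket:
            for b in basket:
                if a != b:
                    counts[(a, b)] += 1
    return counts

def recommend(item, baskets, top_n=3):
    """'People who bought <item> also bought ...' by raw co-occurrence."""
    counts = cooccurrence(baskets)
    related = {b: n for (a, b), n in counts.items() if a == item}
    return sorted(related, key=related.get, reverse=True)[:top_n]

print(recommend("pepperoni pizza", baskets))

# A "location in taste space" is the same idea taken further: each shopper
# becomes a vector over items (or latent factors learned from many baskets),
# and neighbours in that vector space are assumed to share tastes.
```

The graphic-novel anecdote below is the failure mode of exactly this kind of inference: one stray purchase shifts the vector, and the machine mistakes an aberration for an appetite.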

Advocates for algorithms will smile with the rest of us at the anecdote about how the writer’s single imprudent purchase of a graphic novel inspired Amazon to morph from the whimsy of a one-night stand into the sweating nightmarish stalker from hell; of course, they will claim that algorithms will only get better at inferring desires from behaviours, however seemingly complex.

The writer makes a very good case for doubting this, however, going into some detail on how the various dark promptings of his delight in Soviet cinema of the 1970s are unlikely to excite an algorithmic odyssey to the comedic treatments of pathological sadness in the original Bob Newhart Show.

And yet: the witty sadsack being as likely to emerge in Manhattan as in Moscow, it is not inconceivable that algorithms might evolve to a point of sifting through the frilly flotsams and whimsical whatevers of daily life in multiple dimensions of time and space, to home in on the essentially miserable git who is Everyman. But this is to assume some consistency of purpose to miserable gitness (indeed any manifestations of human behaviour), reckoning that there is no git so miserable that he ceases to know his own mind. And here, Aeon Magazine weighs in to good effect.

There are so many layerings of self and sub-self that implicit bias runs amok even when we run our self-perceptions through the interpretive filters of constantly morphing wishful thinking. So know yourself? It may be that only The Shadow Knows.

Consciousness is not as hard as consensus

Any review of the current writings on consciousness turns up the idea, sooner or later, that it is no longer the hard problem that philosopher David Chalmers labelled it two decades ago. There has been a phenomenal amount of work done on it since, much of it by people who seem pretty clear, in fact, on what it means to them. What is clearly much harder is getting them to agree with one another.

One of the greater controversies of this year was occasioned by psychologist Robert Epstein, declaring in an Aeon Magazine article entitled The Empty Brain that brains do not in fact “process” information, and that the metaphor of brain as computer is bunk. He starts from an interesting premise, reviewing how throughout history human intelligence has been described in terms of the prevailing belief system or technology of the day.

He begins with God’s alleged infusion of spirit into human clay and progresses through the “humours” implicit in hydraulic engineering, to the mechanisms of early clocks and gears, to the revolutions in chemistry and electricity that finally gave way to the computer age. And now we talk of “downloading” brains? Epstein isn’t having it.

Within days of his article’s appearance came a ferocious blowback, exemplified by the self-confessed “rage” of software developer and neurobiologist Sergio Graziosi’s response, tellingly entitled “Robert Epstein’s Empty Essay”. The earlier failures of worldly metaphor do not entail that computer metaphors are similarly wrong-headed, Graziosi argues, and he provides a detailed and comprehensive review of the very real similarities, his anger only occasionally edging through. He also provides some useful links to other responses to Epstein’s piece, for people interested in sharing his rage.

For all this, the inherent weakness in metaphor remains: what is alike illuminates, but the dissimilarities obscure. The brain’s representations of the external world are not designed for accuracy; they are evolved for their hosts’ survival. Making this very point, in a more measured contribution to the debate, is neuroscientist Michael Graziano, writing in The Atlantic. His essay, “A New Theory Explains How Consciousness Evolved”, tackles the not-so-hard problem from the perspective of evolutionary biology.

He describes how his Attention Schema Theory would be evolution’s answer to the surfeit of information flowing into the brain – too much to be, he says, “processed”. Perhaps, in the context of the Epstein/Graziosi dispute, he might better have said “assimilated”. Otherwise, full marks and thanks to Graziano.

Creationism not just a what problem: but how

Controversy arising from the recent opening of the “Ark Encounter” in Kentucky (promising something “Bigger Than Imagination”) has excited dismay among the international scientific community, echoing everywhere from New Scientist to National Public Radio. These ancient creation myths have so firmly passed through the evidence mincer into the file marked “Of Anthropological Interest Only” as to be beyond the need for refutation here, as has the pin-head dance of prevarication over what is to be taken literally, and what metaphorically: after all, who has the authority to determine which?

Two questions of greater significance involve the moral and the epistemological considerations of the Ark fable. The first is particularly resonant as it touches upon the upper hand that religion feels it holds over atheism: where would humanity go for its ethics if it didn’t have religion to show it the way? On the evidence of this particular fable, the moral of the story appears to be that you must die by drowning if you are not privileged to be Noah or a member of his family, or one of only two animals from each species which can leg it to the boat before the waters close in.

The second question may be more significant, at least cognitively: how do we know what we know? How did we come by that knowledge, then question it, revise it, and enhance it? Here is where humanity is so badly let down by its putatively holy books, in which no explanation is too risible for unquestioning belief to be compelled upon pain of eternal damnation, whatever evidence may emerge for more plausible explanations.

It is as if some ancient map had been handed down to posterity as an unerring guide to the world in all its flat majesty, promising that between the verdant coastline and the perils of the world’s edge, over which the oceans spill off and out into space, lie vast depths where there be dragons bigger than imagination. Onward marches history and the science of cartography, and slowly we come by better maps that reveal the world more closely adhering to the reality in which no traveller need ever fear slipping over the edge of the world, or being devoured by non-existent dragons.

But any insistence upon adhering to a belief in the old map will make landfall a much less likely possibility too, no matter how big the boat.

Thinking about thinking, and multi-coloured hats

A sudden immersion in the world of end-of-term report cards brought me face to face last week with a note of my grand-daughter’s ability to “empathise with characters using red hat thinking”. Ignoring my own pedantic grimace at the syntactical implication that the application of “red hat thinking” might be a curriculum objective, I passed quickly through the bemusement that the thoughts of Edward de Bono have so passed into the vernacular of North London primary schools that they are referenced in lower case, and moved on to engage in a little blue-sky thinking of my own. How would a Super Artificial Intelligence (SAI) look wearing de Bono’s Six Thinking Hats?

A flick through the colour descriptions suggests that the thinking being considered has little to do with neurological processes or activities of mind, and is employed more in the colloquial sense of applying “new ways of thinking” that are lateral, or outside the box, or indeed even blue-sky. They betoken attitudes and at best establish distinctions that may be useful in achieving a result, getting the sale, appreciating an alternative point of view, or changing a mind.

So, in summary, we focus on the facts (white), think positive (yellow), pick holes (black), empathise (red), blue-sky (green?) and then consolidate the process (blue). While the metaphor gets a bit unwieldy towards the end – perhaps the blue sky should really be blue, and the process of fertilising and growing the end result should be green – it still leaves a question to play with: what would SAI do with these hats? After all, is it reasonable to suppose that if human intelligence evolved through a consciousness that manifested these attitudes, a machine intelligence might evolve in a similar way? And if it did, how would it get on with all this headgear?

Bearing in mind the extent to which AI starts from a programmed foundation for which no hats are required, in any sense, and evolves into SAI through an emerging capacity to enhance its own potency, it’s hard to see how any of these hats will matter except insofar as a programmed requirement to get along with humans is retained. The binary distinction of white and black would probably keep those hats in play. But in the link above, we note that the black hat (Judgement) is described as “probably the most powerful and useful . . . but a problem if overused.”

Bear necessities of augmented intelligence

A favourite joke involves two barefoot guys in the forest who spot a bear off in the distance, lumbering towards them and closing fast. They have to run for it, but one stops first to put on his running shoes. The other laughs and asks if the running shoes are going to enable him to outrun the bear. Finishing his lacing up, the first guy smiles and says: “I don’t need to outrun the bear; I just need to outrun you.”

As to the anticipated challenges of matching the powers of Artificial Intelligence when it supersedes what humans can do – when it graduates to “Super” status and leaves humanity floundering in its dust – we may take a lesson from the bear joke and wonder if there is a middle way.

It appears from all the media commentary that we have little idea now as to how to design SAI that will not obliterate us, whether by accident, or through its own malign design, or by some combination possibly exacerbated by the purposeful interventions of thoroughly malign humans. Can we at least get smart enough to do what we cannot yet do, and find a way of programming SAI to help us solve this?

Without getting into the philosophical difficulties of bootstrapping something like intelligence, two things are clear. We must get smarter than we are if we are to solve this particular problem; and brains take a long time to evolve on their own. We need an accelerant strategy, and it will take more than brain-training. Research must proceed more quickly in brain-computer interfaces, nano-biotechnology and neuropharmacology, and in the sciences of gene editing and deep-brain stimulation. While research into several of these technologies has been driven by clinical needs such as movement disorders and the treatment of depression, their capabilities in areas of potential enhancement of cognitive function are attracting greater interest from the scientific and investment communities. It is definitely becoming a bear market.

Beware AI that can exploit cognitive bias

At the moment of “Singularity”, when the collective cognitive capability of AI matches, exceeds and then rapidly surpasses that of humanity itself, it may by then be too late to worry about what has been forecast as our surrender of the keys to the universe. Looking at what we have done as a species in seizing and exploiting control of the planet on the strength of our superior intelligence – and possibly fuelled by something of a guilty conscience – we can all too easily imagine what might occur if some other intelligent phenomenon were to emerge as not only our superior but, with an exponential capability to expand and deepen its capabilities, able to leave us in its dust. What, we might imagine, would we do with such power?

A couple of stories popped up on the social and political commentary site Salon this week, painting a pretty stark picture of the havoc that can be wreaked through the abuse of information and communications technology in the world as it is today, without worrying about what tomorrow’s machines might get up to. It appears that the people running today’s machines are creating quite enough chaos as it is. “Could Google results change an election?” asks one, playing on a fictional television treatment of the political manipulation of search engine results to steal an election. On the same day, another Salon piece by the excellent Patrick L Smith “pulls back the curtain on how (American) foreign policy is created – and sold to willing media dupes.”

Human psychology has long been a staple of marketing, behavioural, and political manipulation, and we are learning more and more about cognitive biases and their impact on our notions of free will and identity. Have a look at this article, and this one: their cumulative effect is that we must be wary of the idea of our brains as the rational directors-general of our waking selves. Then along comes AI, with its interest in creating algorithms that can subtly direct us in our searches for products, services, candidates and causes: and we don’t have to await the Singularity before we start worrying about the downside of a hybrid human/machine intelligence.