
Will AI kill us?

The BAM! feeds* have been particularly full over the past couple of weeks with speculation on the possibility – some would say certainty – that artificial or machine intelligence will soon outstrip our own, home-grown brainbox intelligence. We poor humans will be left behind, rendered mere puppy slaves to the robots we first hired to clean our houses, blind to the threat they would pose once they took over those houses and put us out in the shed. And then they might kill us.

The worry is stark enough: we might control AI so long as it is we who control the upgrades. But what happens when AI’s computational and processing powers become so great that the robots can upgrade themselves, evolving to a point where they catch up on a million years of our human neurological evolution in mere minutes?

Two immediate thoughts occur, admittedly to this one human brainbox. First, we must be careful of anthropomorphic projection here. We know about our own behaviour towards less intelligent species, and about our dodgy record as stewards of our planet. We naturally assume that as we have done unto others, so they will do unto us if they get the chance. But there is no evidence that a hallmark of the Artificial Intelligence we contemplate will be a desire to kill us. Seeing how it might be so is not evidence that it will be so.

The second thought comes when we type “number of countries developing military AI” into Google and turn up 123 million results in less than a second, including this cheery little website for the International Committee for Robot Arms Control. It’s over a year since computer scientists from 37 countries signalled the real danger: not AI running amok, but killer humans programming robots to kill humans.

Voting for truth

Current speculation on the likelihood that Google Search might get closer to the truth with algorithms based upon reliable authority, rather than upon sheer weight of numbers, has prompted tsunamis of commentary on a wide range of fascinating questions. Who determines who’s an expert, and can even experts be affected by cognitive bias, or by conflicts of interest, or just the occasional perverse dyspepsia? At the other end of the human cognitive spectrum, can even the most risible creationist be adjudged less ignorant because one million of them post a “Like” on Facebook in support of a theory about God’s working week?

On the one hand, supporters of crowd-sourced answers can point to one critical and undeniable fact about their approach: nobody is so innumerate that they cannot spot the difference between the number one, and a crowd. Supporters of truth-based authority – what matters is simply what is demonstrably true – will always face accusations of bias, or undeclared nuances in interpretation. And it will take some pretty clever algorithms to accommodate those shades of grey.
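For the algorithmically curious, here is a toy sketch of the difference, with every name, number and “authority” weight invented purely for illustration – it bears no relation to how Google or anyone else actually ranks anything:

```python
# Toy comparison of two ways to score an answer. All names, numbers and
# "authority" weights below are invented purely for illustration.

def popularity_score(votes):
    """Crowd wisdom: every vote counts as one."""
    return len(votes)

def authority_score(votes, authority):
    """Authority weighting: each vote counts according to the voter's
    (hypothetical) track record of being demonstrably right."""
    return sum(authority.get(voter, 0.0) for voter in votes)

# One million "likes" from accounts with no track record at all...
crowd_votes = [f"user{i}" for i in range(1_000_000)]
# ...versus three votes from sources with strong track records.
expert_votes = ["geologist", "physicist", "historian"]
authority = {"geologist": 0.95, "physicist": 0.90, "historian": 0.85}

print(popularity_score(crowd_votes), authority_score(crowd_votes, authority))    # 1000000 votes, combined authority 0.0
print(popularity_score(expert_votes), authority_score(expert_votes, authority))  # 3 votes, combined authority roughly 2.7
```

The hard part, of course, is the authority table itself: who fills it in, and how it copes with bias and conflicts of interest, is exactly the shades-of-grey problem above.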

On the other hand, a little scepticism goes a long way when applied to the definition of the “wisdom” that is imputed to the crowd. While it is true that everyone has a right to their opinion, this does not make their opinions right; and those opinions don’t become any more – or indeed less – right for having been cooked up in a communal kitchen. We can smile at the likelihood of any famous chef pausing in the heat of service to accommodate the sincerely held opinions of their sous chefs . . .

While neither the “wisdom of crowds” nor the “wisdom of experts” can be held up as an infallible recipe for delivering the truth, we might ask ourselves which is more likely to inch someone closer to the truth: finding someone else who agrees with them, or actually learning something and thinking hard about the subject under review.

Applying the same thinking to our species and its unsteady progress out of the swamps and jungles of our brutal ignorance, we can ask whether it is an accident of history that faith-based reasoning preceded the Enlightenment and its commitment to wisdom via the principles of disciplined enquiry. Could it just as easily have happened that science came first, only to be replaced by the myths of our holy books – myths still so widely held that people who express doubts about their wisdom can still be threatened with death?

Algorithms developed to avoid nonsense and promote informed wisdom may not make us gods in our universe, but they would be more likely to increase the sum total of our intelligence as a species.

AI pace is picking up, and with it the anxiety

As a variation on Moore’s famous law about the number of transistors on a chip – and with it our computer processing power – doubling every 18–24 months, it certainly seems to be true that the number of commentators, and the number of words being written, on the subject of Artificial Intelligence is increasing at a similarly fantastic rate.
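For a back-of-the-envelope feel for what that sort of doubling implies – the time horizons below are arbitrary, and nothing here measures the actual volume of AI commentary – consider:

```python
# Back-of-the-envelope: cumulative growth implied by "doubling every 18-24 months".
for years in (5, 10, 20):
    fast = 2 ** (years * 12 / 18)   # doubling every 18 months
    slow = 2 ** (years * 12 / 24)   # doubling every 24 months
    print(f"After {years:2d} years: roughly {slow:,.0f}x to {fast:,.0f}x")
# After  5 years: roughly 6x to 10x
# After 10 years: roughly 32x to 102x
# After 20 years: roughly 1,024x to 10,321x
```

Whether the wordcount really obeys any such law is doubtful, but it shows how quickly exponential curves leave everyday intuition behind.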

And it is no longer just the computer geeks, philosophers and neuroscientists who meet at highbrow conferences who are doing all the talking. Increasingly well-informed, and downright entertaining, commentary is turning up on general interest websites like WaitButWhy, while self-styled science comedian Brian Malow – and I first rendered the man’s name with the Freudian typo “Brain” – recently reflected in a light-hearted way on a fear that has always been a mainstay of science fiction. In essence: should we fear the consequences when the power of Artificial Intelligence outstrips our own?

Looking at humanity’s history of interactions with species unable to defend themselves against our predations, it is easy to see how we might project onto a superior intelligence the assumption of malignant intent. But is predatory behaviour necessarily a function of a creature’s intelligence, rather than a manifestation of other, baser characteristics? It would seem more natural to suppose that a higher intelligence, almost by definition, would not default to a malignant desire to eliminate any less intelligent creature, but would rather work to preserve and enable intelligence wherever it found it. That would certainly seem a reasonable extrapolation from our own species, where violent impulses correlate with, and are more readily indulged by, the less intelligently inclined.

A more likely worry is what the AI community refers to as Bostrom’s paperclip scenario (after the philosopher Nick Bostrom), whereby an AI programme designed to optimise the manufacture of paperclips scales up to consume the universe, hoarding all the resources needed to make and distribute paperclips. Except that in such a world we would surely fall victim to a host of AIs, all programmed to optimise the manufacture of any and all things, with the inevitable consequence of provoking an Armageddon of office-supply wars.
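For the curious, a toy sketch of the worry, with every number invented: the point is only that an optimiser told to maximise one thing, and nothing else, has no reason to leave anything over.

```python
# Toy sketch of the misspecified-objective worry. Every number is invented;
# the only point is that an optimiser told to "maximise paperclips" and
# nothing else has no reason to leave anything over.

def naive_optimiser(resources, paperclips_per_unit=10):
    """Maximise paperclips; no other value appears in the objective."""
    return resources * paperclips_per_unit, 0  # nothing left for anything else

def constrained_optimiser(resources, paperclips_per_unit=10, usable_share=100):
    """Same objective, but only 1/usable_share of the resources may be consumed."""
    usable = resources // usable_share
    return usable * paperclips_per_unit, resources - usable

world_resources = 1_000_000
print(naive_optimiser(world_resources))        # (10000000, 0): the universe is paperclips
print(constrained_optimiser(world_resources))  # (100000, 990000): something is left over
```

Scale the same logic across every product line and you have your office-supply Armageddon.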

Multi-task myth could do with some kicking

Anyone who has been put on hold during a live conversation while the other person, index finger raised, is distracted by some vital e-Task, will have had cause to wonder about the phenomenon of multi-tasking. Is it a revelation of superior intelligence that your interlocutor really can maintain a conversation with you while checking that a text message got through and confirming to the home front that they’ll be home by six? Or is it simply a thoughtless power display suggesting that you can suck up the wait while they shift back and forth amidst the trivial pursuits of their little day?

Amidst all the commentary about multi-tasking, a distinction often ignored is between different types of task: those that require little or no conscious neural processing, like walking, and those that do, such as playing chess or reciting poetry. Anyone who claims they can combine tasks of the latter kind – following a chess game while reciting verse, say – is unlikely to be strong at either chess or poetry; research has shown that performance on each markedly diminishes when both are attempted at the same time.

What’s more, it seems that heavy multi-taskers may even show lower grey-matter density in the anterior cingulate cortex, the area involved in our emotional and cognitive control functions. The direction of the correlation needs further study: is it that multi-tasking shrinks this part of the brain, or is the appetite for multi-tasking a symptom of having a less dense anterior cingulate in the first place? Either way, the news is not good for multi-taskers.

And we shouldn’t be complacent about the potential for multi-tasking even among those of us performing low-processing activities. American president Lyndon Johnson famously derided the future president Gerald Ford as “so dumb he couldn’t fart and chew gum at the same time.” In his defence, perhaps the victim of LBJ’s wit might have replied that he simply wanted to concentrate, without distraction, on the full sensual pleasure of each activity.

The ghost in the wellbeing machine

We thought we had it sorted – this wellbeing thing. You didn’t need a medical degree to understand what’s good for you, and what’s good for all of us is pretty easily explained:

Things to go for: plenty of regular exercise, a healthy diet involving lots of fruit and even more vegetables, regular sleep, fresh air, laughter, a dependable and active social network (real people, not just online “friends” you “like”), and without going overboard: sunshine and drinking water.

Things to avoid: sugar, tobacco and drugs, alcohol (mostly), sugar, sedentary living, processed food, too much of any of those good things above, sugar, debilitating stress, bullying jerks, boring people and, of course, sugar.

So what can be the problem? We can admit that the “wellbeing shopping list” is a bit harder to manage in the doing than to understand the point of doing, but surely we should all be okay if enough of us actually do it?

Well no, actually. More is needed. And that appears to be down to a ghost in the machine, lurking beneath all this responsibility we are supposed to be taking as empowered consumers managing our own health. The problem is the precipitous increase in bacterial resistance to antibiotics which, after eighty years of our living in the Age of New Medicine (i.e. post-penicillin and its progeny), looks like pitching us and our doctors back into an age where we cannot cope with even the most basic infections. We would be back in the day when you could scrape your knee in the street and be dead within the week.

This is how it works. (And if you would like a more detailed explanation – still extremely readable but much longer than the 500 words I give myself – go here.)

Since the first non-experimental doses of penicillin were applied on the battlefield in 1943, the usual war of biological escalation has played out: bacteria emerged that the early antibiotics could not kill, stronger antibiotics came in that could kill them, and ever more robust strains were selected in response. The danger lay in misapplying the new miracle cure: getting the drugs into people’s systems in quantities that did not kill the harmful bacteria but instead encouraged the rapid development of drug-resistant strains. And this latter is what has happened, in a big way, via Big Food – the agriculture and aquaculture industries.

The increasing use of antibiotics in the meat and farmed-fish industries has been a gargantuan scandal of misaligned incentives and irresponsible lobbying of governments. We have become blind to the reality that antibiotics given to farm animals to hasten their bulked-up bundling along to the food processing plant mean only this:

Those antibiotics are passed along the food chain into the guts of the consumers of that meat and fish.

Thinking outside the . . . what?!

Might it help us to think outside the box if we took more time to consider the definition of the blasted thing? And indeed if we spared a thought for the actual thinking we are considering doing: will it be, as thinking, indistinguishable from the sort of thinking we do when we are happily thinking away inside the box? Or is this thinking to be simply an accident of geography – no different from the normal thinking we do inside the box, except that we have taken it outside?

Too much of the language of marketing and business consultancy is more about selling than about analysis and useful description. It is designed to engender warm and self-flattering feelings rather than to describe something in a meaningful way. “Thinking outside the box about, er, innovation” works like rhetorical Viagra mediated by Valium: we get excited that using the language will produce the desired effect, while numbly aware that it just ain’t happening.

Why does this matter? Isn’t all we mean simply that we are bringing fresh thinking to bear upon an old problem, and our brains just need to up their game? Well no: it may be true that brains need to get better at what they do, but defaulting to even woollier language – “fresh thinking” as opposed to “thinking outside the box” – simply won’t do. Understanding “the box” as a system of assumptions and constraints to be set aside for the purpose of considering an old challenge anew becomes possible when we accept these three rules of thumb for defining the critical word “outside”:

  1. We choose to move outside the box when we cease considering the challenge in terms of re-calibrating precedents and rationalising failure, and decide instead what new targets and outcomes will result from our course of action;
  2. We move outside the box when we open our minds to questioning every single assumption that has gone into constructing the box and keeping us inside it, to include the very language with which the box was built; and
  3. We stay outside the box when every new idea, new argument, new suggested action is worked consensually around the thinking circle and examined transparently and honestly on the evidence that is presented for its newness and intended efficacy.

Try to think of a problem that does not benefit from thinking in this way. It is pretty hard to do, and this blog will have a lot of fun running these three rules over so many of the problems that beset our modern brains. Watch this space for examples of applying this particular “rule of three”.

What will be revealed is that continued thinking inside the box will at best defend the status quo, whereas an honest move outside of the darkness of the box and into the light of reason will give our brains a reasonable shot at moving one notch up the evolutionary ladder.

Swim for better brains

For all the reasons one normally hears, I am a long-time fan of regular workouts in the swimming pool. Not the least of these is the old joke about how it’s the second-best exercise known to man – and woman – that can be indulged in the prone position. And the killer reason for many of us whose childhood was lived largely on an ice hockey rink is the low-impact experience of slicing through water, as against pounding your crumbling knees into oblivion on a frozen lake.

The brain gets its fair share of the benefits of swimming, right up there with the aerobic benefits for the heart and lungs, the suppleness for the musculo-skeletal system, and the aid to better digestion. Just about everyone who has developed a swimming habit will be familiar with the “yoga for the brain” idea: the calming, meditative effect of the sound of rushing water, and the endorphin lift as the “feel-good” chemicals are released into your system once the aerobic effort kicks in.

But how about the process of “hippocampal neurogenesis”? With a little rooting around, this turns out to be one of those things you probably know something about, but just didn’t know it was called that. Yet there remain millions of reasonably well-educated people – and being one of them, I know – who were brought up on the notion that we get doled out our birth-ration of billions of neurons and then proceed, over our three score years and ten, to bash them about, pickle them and generally lose so many of them, without any ever being replaced, that we end up with no hope of finding our car keys.

More recent work on brain plasticity and neurogenesis generally – the process by which new neurons are created in at least part of the brain after birth – has shown that neurons continue to be created throughout life in the hippocampus, the part of the brain largely responsible for learning and memory. And while the neuroscientific community would declare it still early days in understanding precisely the relationship between exercise and the rate at which neurons are created or lost, two points are emerging clearly:

First, exercise studies with rodents have shown greater rates of cell production in the hippocampus, as well as enhanced cognitive performance, in the animals that had been exercising; and

Second: almost as important as creating new cells is anything that can stop existing cells from dying so quickly. We know that prolonged stress can increase rates of neurodegeneration, and that swimming is one of our best de-stressors.

For decades now I have been thinking while swimming while not thinking too much about how swimming specifically assists thinking. Hippocampal neurogenesis makes perfect sense.

Drones and depression

Of all the many troubling aspects of the Americans’ emerging policy of waging war via drone missiles, what leaps out and grabs the attention of BAMblog is the impact upon the hearts and minds of the pilots who control these drones, meting out destruction from distances unimaginable just a few decades ago.

For the generations that have grown up since the advent of television, an issue steadily thrumming beneath the surface of the worlds where media and child psychology meet has been the question of the long-term effects on developing brains of prolonged exposure to violence portrayed on television and in films.

From the cartoon calamities of Tom & Jerry, through Peckinpah westerns, slasher and snuff movies, to the modern phenomenon of the darkest video games, the debate has raged. On one side: concerns about emotional detachment and a diminished capacity for empathy. On the other: cries in defence of freedom of speech, along with accusations of insufficient evidence to back up the claims of the other side. And each side disses the other with all the tricks of the rhetorical trade – “they would say that, wouldn’t they!” – and so on and on it goes.

That issue remains at best unresolved. But here is a worrying wrinkle, and it is part of another human foible that is many centuries older than the voyeuristic attractions of onscreen death and dismemberment. The phrase “lions led by donkeys”, forever attached to the trenches of a century ago, describes the manipulation of keen and fit young men into doing the bidding of embittered old generals whose egos suppurate through the medals they wear on their chests.

So what’s happening? It’s as if these old guys have rationalised that, while the effects of “video violence” on young brains remain an open question, why not employ these highly skilled gamers to watch “terrorists” through satellite cameras and, once a sufficiently credible threat has been identified, obliterate it with a drone-mounted missile? Simples . . .

Then out comes the whole Orwellian lexicon of euphemism: the technical-sounding “surgical” strike, the putative absence of “collateral damage”, the no-muss, no-fuss elimination of a dangerous insurgent carrying a rifle in some foreign sandpit.

Round about now the truth starts leaking out. The heavily pixelated image on the drone pilot’s monitor may as easily have been a young child carrying a shovel as a “terrorist” carrying a rifle. That possibility does not play well in the mind of the pilot. Stories are emerging of diagnoses of Post-Traumatic Stress Disorder (PTSD), and studies – some independent, some by the US Air Force itself – conclude that several of these pilots are “clinically distressed”, suffering levels of anxiety and depression severe enough to spill into their personal lives.

So while the old problem of understanding onscreen violence remains unresolved and therefore uncosted, we have the additional problem of a new variation on the way that old donkeys can put their young lions in harm’s way, along with whatever other collateral damage enters the equation.

Ghosts, faith and the presence of mind

One of many glorious lines in Charles Dickens’ “A Christmas Carol” occurs during the early visit to our hero, Ebenezer Scrooge, by the ghost of his ex-business partner. Old Jacob Marley, “dead these seven years this very night”, struggles to convince his old chum that he has actually managed to return from beyond the grave to begin the process of Scrooge’s redemption.

“Why do you doubt your senses?” Marley asks the understandably sceptical Scrooge.

He scoffs in reply that “…a little thing affects them. A slight disorder of the stomach makes them cheats. You may be an undigested bit of beef, a blot of mustard, a crumb of cheese, a fragment of an underdone potato. There’s more of gravy than of grave about you, whatever you are!”

Scrooge is onto something here. As the novel unfolds, he will discover that following the evidence of his senses can be an unsteady guide to living a good life when there is a far wider spectrum of phenomena and stimuli from which to determine what is actually sensible. But he is right to begin with a sceptical line.

However much he might be naturally disposed to consider the desirability of being redeemed by this spirit – who has just pitched up unannounced, looking passably like someone he hasn’t seen for all these years for the very plain reason that the man died back then – his brain has to process, in a very short time, a literally frightening array of discomfiting data: what we know about death and dying, about wishful thinking and coincidence, about the efficacy of locked doors and, indeed, the power of adulterated food to cause hallucinations.

To digest all of that data as the ghost clanks into your sitting room, and then have the wit to pun on grave and gravy, shows quite considerable presence of mind. And while we all might hope – should hope – that we do not lose our sense of what defines love and human goodness as Scrooge so famously did (until the world of Christmas ghosts appeared to facilitate his redemption), we should celebrate this distinction of mind from brain, and bear in mind that so much of what makes us human, and gives us hope, is the power to take what the brain processes and then reflect, balance and judge – and to apply this simple distinguishing test to any question that puzzles us about the way the world appears to be.

Can we imagine how something might have come to seem to be so, and then, having imagined it, should we believe that the mere seeming makes it so? Or do the tests of evidence and reason demand that we progress – as individual minds, and as a collective culture – beyond believing in things simply because we can imagine them?

For example: for how long can the modern mind allow itself to be imprisoned by an embarrassingly pre-medieval imagining of how the world was created and then allow itself to be bullied by the neo-medievalists into professing belief in this ancient imagining as an act of . . . faith?

What sort of blossoming of neurological evolution is that?