Category Archives: BAM Blog

Thursday’s Guest Blog: John Bailey on Books


The speed of developments in neuroscience might suggest that anything on the subject published more than a couple of years ago is bound to be behind the curve. Patently, though, this is not the case for The Brain Supremacy. Author Kathleen Taylor, who is affiliated with the Department of Physiology, Anatomy and Genetics at Oxford University, has built into her book both the fundamentals of neuroscience and its more far-reaching implications, giving readers the grounding needed to understand this emerging science.

This makes The Brain Supremacy, published by OUP in 2012, both a reference source of basic knowledge on the subject and an informed guide to future innovation. Taylor tackles head-on the problems as well as the advantages that developments in neuroscience will bring to people’s thinking, feelings and even their existence. Not unexpectedly, morality is integral to her thinking as she assesses technologies that could change our view of the world around us, and could even alter our perceptions of good and evil. These and other difficult topics, including telepathy and epigenetics in brain development, are covered expertly but in a highly readable form.

Alongside all the provisos about the future of neuroscience, we are reminded that what we have learned about the brain and its workings is far outweighed by what we still do not know. That includes just how closely our brains are bound to our immune systems, hormones and other bodily functions.

Such mysteries are guaranteed to keep us closely involved, at least until we reach the section on future developments. Coming after a thorough exploration of how the brain has reached its position of supremacy, this section is the most intriguing because it is a guide to how we might come to terms with a new understanding of ourselves. The warning is there, however: infinite care is needed when it comes to developing technologies that can bypass our skulls and directly manipulate our brains.

It is unlikely, Kathleen Taylor writes, that such technologies will be morally neutral, and consequently she does not shy away from detailing the likely costs that must be paid for brain supremacy to retain its exalted position.

Read more here about The Brain Supremacy and other titles by this acclaimed author including Brainwashing: The Science of Thought Control and Cruelty. For information about Dr Taylor’s Fellowship at the Institute for Food, Brain and Behaviour, click here.

Rosetta, Peres and Trump: a study in contrasts

On the day that the Rosetta mission reached a deliberate and lonely climax on a distant comet, and Shimon Peres was buried in Israel, we saw the peaking of two great narrative arcs that define so much of the glory of what it means to be human. The first represents another great triumph of science: a research journey further into space than our species had ever ventured while observing in such detail as it flew. The legacy is a mountain of data for scientists to assimilate for decades to come, following the last pulse of intelligence from the expired spacecraft itself.

The second is up there with the Mandela story: Shimon Peres, international statesman and Israeli icon, a man of peace who could bring the planet’s greatest and best to attend his funeral. But like Mandela before him, Peres shines especially as a man whose odyssey took him through violence to an understanding that there is more security and happiness in peace than there is in war. Tough getting there, tough staying there, but worth the effort – and inspiring to everyone who believes that as monkeys became human, so humans may one day become something better yet.

Rosetta and Peres, science and statesmanship, collaborate on this day to remind humanity of the benefits of evolutionary progress.

Agnotologist Donald Trump stands apart from both. He too has become an icon: not of progress and hope but of the wages of ignorance, the triumphs of fear and bias, the submission of means to ends and the subversion of truth to the primacy of the pre-ordained outcome. While he himself represents no triumph of evolution, he at least is prompting reflections on how the human mind works (or doesn’t), particularly in its possible impact on other minds.

Another Donald once bemused the world with his musings on “known knowns” – the things we know that we know. He distinguished them from things we know we don’t know, and the unknown things that remain unknown to us. In ignoring the fourth permutation – the unknown knowns – The Donald that was Rumsfeld ignored the very patron saint of ignorance.

So many things were known to Shimon Peres, and are known to contemporary science, that will forever be unknown to Donald Trump. His universe of ignorance remains as bleak and alien and dead as the distant comet with which humanity has at least established a first connection.

Thursday’s Guest Blog: John Bailey on Books


From two great, still-integral Australian minds comes Intelligence Unbound: The Future of Uploaded and Machine Minds, bound neatly into 300 pages of 21 enlightening essays, two introductions and an afterword. The main topics explored are AI, mind uploading and whole-brain emulation, so be prepared for some philosophically discursive views. But what makes this volume so rewarding is its breadth of coverage. It is indispensable to anyone searching for clues on how things might turn out as AI gathers momentum and mind uploading becomes inevitable.

From How Conscience Apps and Caring Computers will Illuminate and Strengthen Human Morality, to Against Immortality: Why Death is Better than the Alternative, there is much in Intelligence Unbound to incite controversy about mind uploading and its consequences.

Apart from being a treasure trove for sci-fi writers looking for new storylines, Intelligence Unbound is eminently readable, enjoyable and expert in its reasoning. It is also much more than the sum of its parts: editors Russell Blackford (Philosopher and Conjoint Lecturer at the University of Newcastle, NSW) and Damien Broderick (PhD in the Literary Theory of the Sciences and Arts from Deakin University) have set out in their comprehensive introductions a clear indication of the enticing chapters to follow.

In bringing together such eminent practitioners as James Hughes, Executive Director of the Institute for Ethics and Emerging Technologies and a bioethicist and sociologist at Trinity College, Hartford, and Michael Anissimov, former manager of the Singularity Summit and Media Director for the Machine Intelligence Research Institute, the editors looked for and got the widest remit on AI, mind uploading and whole-brain emulation.

Relevant experts cover in some detail the ethical and philosophical dimensions of mind uploading, as well as its prudential irrationality. Their contributions reveal much that is still to be considered about the desirability of a future populated by sub-human replicas. On the other hand, as the editors state, it is easy to become trapped by old preconceptions, a trap they have successfully avoided by giving their contributors the widest of remits.

The result is that this collection of philosophers, theorists, futurists, AI researchers, and science-fiction writers offers readers the pros and cons of a variety of intriguing possibilities. Or at least sufficient options to get one’s own mind working – perhaps before it’s too late!

Go to publishers Wiley Blackwell for a description of the publication and its contents, and notes on its editors and contributors. For a further review, click here.


— Guest blogger John Bailey was for many years one of London’s best known journalists and spent most of his career in what was Fleet Street. He is an avid bibliophile and record collector, champion advocate for press freedom, and a student of history whose guided tours of London are known to and fondly recalled by exhausted walkers on all five continents.

Algorithms cannot really know you if you don’t

Count the number of times you notice stories about how the various engines of AI – the algorithms, the machine learning software burrowing ever deeper into the foibles of human behaviour – are getting “to know you better than you know yourself.” What started as variations on “You liked pepperoni pizza, here’s some more” evolved quickly into “People into pizza also love our chocolate ice cream” and on to special deals for discerning consumers of jazz, discounted National Trust membership, and deals on car insurance.

Emboldened as the human animal always is by early success, it was bound to be a small leap for the algo authors to progress beyond bemusing purchasing correlations to more immodest claims. Hence the boast of the data scientist cited in the online journal Real Life, claiming that the consumer is now being captured “as a location in taste space”.
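For readers wondering what being captured “as a location in taste space” actually amounts to, the machinery behind such boasts is usually mundane: score each customer against a handful of product categories, find the nearest neighbours in that space of scores, and recommend whatever the neighbours liked. The sketch below is purely illustrative – invented shoppers, invented tastes, invented category names – and is not drawn from the Real Life piece or from any retailer’s actual system.

```python
from math import sqrt

# Toy "taste space": each shopper is a vector of affinity scores over a few
# product categories. All names and numbers here are invented for illustration.
CATEGORIES = ["pizza", "ice_cream", "jazz", "graphic_novels", "car_insurance"]

shoppers = {
    "alice":    [0.9, 0.7, 0.1, 0.0, 0.2],
    "bob":      [0.8, 0.9, 0.0, 0.1, 0.3],
    "carol":    [0.1, 0.0, 0.9, 0.8, 0.1],
    "newcomer": [0.85, 0.0, 0.05, 0.0, 0.25],  # has really only bought pizza
}

def cosine(u, v):
    """Cosine similarity: how closely two taste vectors point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    norms = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norms if norms else 0.0

def recommend(target, top_n=2):
    """Find the target's nearest neighbour in taste space and suggest the
    categories where that neighbour's scores most exceed the target's."""
    others = {name: vec for name, vec in shoppers.items() if name != target}
    neighbour = max(others, key=lambda name: cosine(shoppers[target], others[name]))
    gaps = sorted(
        range(len(CATEGORIES)),
        key=lambda i: shoppers[neighbour][i] - shoppers[target][i],
        reverse=True,
    )
    return neighbour, [CATEGORIES[i] for i in gaps[:top_n]]

neighbour, picks = recommend("newcomer")
print(f"Nearest neighbour in taste space: {neighbour}; suggested: {picks}")
# Prints: Nearest neighbour in taste space: alice; suggested: ['ice_cream', 'pizza']
# i.e. "people into pizza also love our chocolate ice cream"
```

The toy makes the limits obvious: the inference is purely correlational, and the “location in taste space” knows nothing about why the newcomer bought what they bought – which is precisely the gap the Real Life writer goes on to exploit.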

Advocates for algorithms will smile with the rest of us at the anecdote about how the writer’s single imprudent purchase of a graphic novel inspired Amazon to morph from the whimsy of a one-night stand into the sweating nightmarish stalker from hell; of course, they will claim that algorithms will only get better at inferring desires from behaviours, however seemingly complex.

The writer makes a very good case for doubting this, however, going into some detail on how the various dark promptings of his delight in Soviet cinema of the 1970s are unlikely to excite an algorithmic odyssey to the comedic treatments of pathological sadness in the original Bob Newhart Show.

And yet: the witty sadsack being as likely to emerge in Manhattan as in Moscow, it is not inconceivable that algorithms might evolve to a point of sifting through the frilly flotsams and whimsical whatevers of daily life in multiple dimensions of time and space, to home in on the essentially miserable git who is Everyman. But this is to assume some consistency of purpose to miserable gitness (indeed to any manifestation of human behaviour), reckoning that there is no git so miserable that he ceases to know his own mind. And here, Aeon Magazine weighs in to good effect.

There are so many layerings of self and sub-self that implicit bias runs amok even when we run our self-perceptions through the interpretive filters of constantly morphing wishful thinking. So know yourself? It may be that only The Shadow Knows.

Thursday’s Guest Blog: John Bailey on Books

Homo Deus or Homo Data: our future awaits

Dr Yuval Noah Harari of recent Sapiens fame, who has a PhD in History from Oxford and lectures in history at the Hebrew University of Jerusalem, can now add the term futurist to his credentials. And regarding Homo Deus, his latest publishing endeavour, it is perhaps pertinent to know that in 2012 he was awarded the Polonsky Prize for Creativity and Originality in the Humanistic Disciplines. Why?

Because in Homo Deus Dr Harari invites readers to journey into a future in which most of Homo sapiens will not play even a minor part. His projections include AI running the show, or most of it, without us, and developments happening faster and more fundamentally than ever before in history. And this is not science fiction. It is a brilliantly argued view of the future that simply extends farther down the road we are already travelling.

The 20th century, he declares, will count for very little, although ironically Dr Harari has to use a medium more than five hundred years old to promote his latest thoughts in print. However the message in this book is delivered, though, it cannot, and must not, be ignored. And don’t be misled by its subtitle: it is far more than a Brief History of Tomorrow. Its 400 pages carry a thoughtful and extremely relevant message about how the world, and those who inherit it, will have to reconcile themselves to likely developments.

Dr Harari established his writing credentials, and much else about thinking anew, in Sapiens, his much-acclaimed previous title. This new collection of insights burnishes his reputation further, even though his projections range from provocative to downright alarming. For example, he asks us to imagine that war is obsolete and death overcome, and that humanity’s next challenges are likely to be immortality and the powers of creation – in effect, for Homo sapiens finally to become Homo Deus. Perhaps more tellingly, he suggests data as the new religion in a world dominated by algorithms and the superior, more efficient intelligence of AI – all without consciousness, of course!

You have been warned; ignore this book at your peril! Go straight to publishers Harvill Secker’s website to hear Dr Harari’s illuminating podcast on Homo Deus and, for a further view and more insights, try this excellent exposition, where almost the full story is presented. As a final treat, read Dr Harari here, in conversation about his vision of our future.


— Guest blogger John Bailey was for many years one of London’s best known journalists and spent most of his career in what was Fleet Street. He is an avid bibliophile and record collector, champion advocate for press freedom, and a student of history whose guided tours of London are known to and fondly recalled by exhausted walkers on all five continents.

Not being a number does not make you a free man

Having listened last week to futurist Yuval Noah Harari talking at London’s Royal Society of Arts about his new book, Homo Deus, I am wondering how a conversation might go between Harari and The Prisoner. Next year marks the 50th anniversary of the iconic television series that first broadcast what has become one of the catchcries of science fiction: “I am not a number: I am a free man!” Five decades on, the Guardian review of Harari’s book is sub-titled “How data will destroy human freedom”.

A fundamental difference between Harari’s hugely successful Sapiens and his new book is that the former involves reflections on how humanity has made it this far, whereas the new title is a speculation on the future. The former is rooted in memory; the latter involves conjectures that shift on the sands of uncertain definitions, as the above-linked Guardian review of Harari’s latest book reveals. “Now just hold on there” moments abound, as for example:

“Evolutionary science teaches us that, in one sense, we are nothing but data-processing machines: we too are algorithms. By manipulating the data we can exercise mastery over our fate.”

Without having the peculiarities of that “one sense” explained, it is hard to absorb the meaning of words like “nothing”, “manipulating” and “mastery”. Words matter, of course, and there are perils attendant upon concluding too much about human identity through the links that are implicit in lazily assumed definitions.

What happens to the god-fearing woman when she discovers there is no God? Is the workingman bereft when there is no longer any work? If people refuse to accept the imprisonment of numbers assigned to them by other people, are they thus necessarily free? How much is freedom determined not by actions, but by thoughts? And critically: if our thinking is clearer and more careful, can we be more free?

In the Q&A that concluded the RSA event, Harari missed an opportunity when he was asked about the future prospects of education. What will we teach children in the data-driven future of super artificial intelligence? Interestingly, neither maths nor sciences got a mention, and it seemed we might just have to wait and see when the future arrives. But surely a far higher standard of teaching in reasoning and critical-thinking skills will be essential, unless humanity is prepared to contemplate an eternal bedlam of making daisy chains and listening to Donovan.

Thursday’s Guest Blog: Bailey on Books

Just when you thought your mind was properly organized, neuroscientist Daniel Levitin pops up with other ideas. In his new book A Field Guide to Lies: Critical Thinking in the Information Age, he extends his theory about thinking straight in an age of information overload.

Publishers Penguin Random House identify Levitin’s new work as “a primer to the critical thinking that is more necessary now than ever as we are bombarded with more information each day than our brains can process”.

Levitin shows how to recognize misleading statistics, graphs and written reports, confirming science as the bedrock of critical thinking. Bias, he writes, distorts our information feeds via every media channel, including social media. The antidote is to check plausibility and reasoning, rather than passively accepting information, repeating it and making decisions based upon it.
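By way of illustration of the kind of plausibility check the book advocates – the claim and the numbers below are invented for this blog, not taken from Levitin – a couple of lines of back-of-the-envelope arithmetic are often enough to expose a statistic that cannot possibly be right.

```python
# An invented claim, for illustration only: "Our city of 2 million people
# suffers 5 million pickpocketing incidents every year."
population = 2_000_000
claimed_incidents_per_year = 5_000_000

# Step 1: reduce the headline number to a human scale.
per_resident_per_year = claimed_incidents_per_year / population
per_day = claimed_incidents_per_year / 365

# Step 2: compare against something we can bound. Even a notoriously
# crime-ridden city would struggle to pick every resident's pocket
# two and a half times a year.
print(f"Implied rate: {per_resident_per_year:.1f} incidents per resident per year")
print(f"Implied volume: {per_day:,.0f} incidents every single day")

# Step 3: make the verdict explicit instead of merely feeling doubtful.
GENEROUS_UPPER_BOUND = 0.5  # an assumed ceiling, not an official figure
verdict = "possibly" if per_resident_per_year < GENEROUS_UPPER_BOUND else "almost certainly not"
print(f"Plausible? {verdict}")
```

Nothing in the arithmetic requires a computer, of course; the point is simply to do the check before repeating the number.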

Levitin’s accessible writing style, engaging personality and entertaining delivery give lay readers and researchers alike easy access to new and challenging ideas.


Daniel J. Levitin, one of the world’s most accessible neuroscientists, is Professor of Psychology and Neuroscience at McGill University. His publications include The World in Six Songs, This Is Your Brain on Music and his New York Times bestseller The Organized Mind.

Levitin’s latest 14-stop US tour, which coincides with his book launch on September 6, began in New York. Full details of his tour and his titles can be found here.

Check out reviews of The Organized Mind, The World in Six Songs, and This Is Your Brain on Music.


— Guest blogger John Bailey was for many years one of London’s best known journalists and spent most of his career in what was Fleet Street. He is an avid bibliophile and record collector, champion advocate for press freedom, and a student of history whose guided tours of London are known to and fondly recalled by exhausted walkers on all five continents.

Choose: your country back or the future now

It has been a summer of unworthy frenzies, with the forces of conservatism and pessimism crying to have their country back or made great again. On the other side, characterised by the throwbacks as the real champions of “Project Fear”, were those who deny that mankind is on a doomed course. More positively, more thoughtfully, they remain adherents of a belief in the powers of education, clear thinking and focused choices. Of moving forward, and not back to our future.

One of the more frequently referenced books in recent weeks has been Progress: Ten Reasons to Look Forward to the Future, by Cato Institute senior fellow Johan Norberg. Favourably cited by Simon Jenkins in The Guardian, and by an editorial in The Economist, Norberg’s book sets out the case for how much, and how rapidly, the world is improving – at least from the perspective of its human masters; how much the case for atavistic pessimism is fed by ignorance and greed (much of it encouraged by a complicit media); and most inspiringly, how the brightness of our future is defined by the potential in our accumulating intelligence. The Economist piece concludes:

“This book is a blast of good sense. The main reason why things tend to get better is that knowledge is cumulative and easily shared. As Mr Norberg puts it, ‘The most important resource is the human brain…which is pleasantly reproducible.’”

By timely coincidence, intelligence both human and artificial has weighed in over the past week with positive expectations for our future. An article in MIT Technology Review is entitled “AI Wants to Be Your Bro, Not Your Foe” – possibly unsettling for those who might see either alternative as equally unsavoury, but its heart is in the right place. It reports on a Stanford University study of the social and economic implications of artificial intelligence, and on the newly launching Center for Human-Compatible AI at UC Berkeley. Both initiatives are cognisant of the dangers of enhanced intelligence, but inspired by the vast potential in applying it properly.

For a shot of pure-grade optimism to finish, five inspiring applications of exponential technologies that lit up the recently concluded Singularity University’s Global Summit included one called “udexter”. Artificial intelligence is being deployed to address the challenges of unemployment arising from . . . the advances of artificial intelligence. It promises to counterbalance the decline in “meaningless jobs” by unleashing the world’s creativity.

Collaboration is the new competitive advantage

For years if not decades, the buzzword in the worlds of innovation and enterprise, echoing through the lecture theatres of business schools and splattered across whiteboards in investment funds and marketing companies across the globe, has been disruption. Given the evolving world of exponential growth in computing power, artificial intelligence and deep learning, it may well be that this awful word “disruption” will soon occupy the status of has-been. Of course it will still exist, but as a by-product of unprecedented advances rather than the reverently regarded target of some small-minded zero-sum game.

Consider an article that appeared a few days ago on the Forbes website, asking rhetorically if the world-shifting changes that humanity requires, and to a large extent can reasonably expect to see, are likely to be achieved with a mind-set that is calibrated to “disrupting” existing markets. For each of the described challenges – climate change, energy storage, chronic diseases like cancer, and the vast horizons of potential in genomics, biosciences, and immunotherapies – collaboration rather than competition is emerging as the default setting for humanity’s operating system.

Mental models based upon cooperative networks will begin replacing the competitive hierarchies that only just managed to keep the wheels on the capitalist engine as it clattered at quickening speeds through our Industrial age, lurching from financial crisis to military conflict and back again, enshrining inequalities as it went and promoting degradations of Earth’s eco-system along the way. And why will things change?

Of course it would be nice to think that our species is at last articulating a more sustaining moral sense, but it won’t be that. It will simply be that the explosion of data, insatiable demands upon our attention, and the febrile anxieties of dealing with the bright white noise of modern life will render our individual primate brains incapable of disrupting anything remarkable to anything like useful effect.

The Forbes article concludes with admiration for what the digital age has been able to achieve, at least for as long as the efficacy of Moore’s Law endured. Moore’s Law is soon to be superseded, however, by the emerging powers of quantum and neuromorphic computing, with a consequent explosion of processing efficiency that will take our collective capabilities for learning and thinking far beyond the imaginings of our ancient grassland ancestors.

Working together we will dream bigger, and disrupt far less than we create.

Consciousness is not as hard as consensus

Any review of the current writings on consciousness turns up the idea, sooner or later, that it is no longer the hard problem that philosopher David Chalmers labelled it two decades ago. There has been a phenomenal amount of work done on it since, much of it by people who seem pretty clear, in fact, on what it means to them. What is clearly much harder is getting them to agree with one another.

One of the greater controversies of this year was occasioned by psychologist Robert Epstein, declaring in an article in Aeon magazine entitled The Empty Brain that brains do not in fact “process” information, and that the metaphor of the brain as a computer is bunk. He starts with an interesting premise, reviewing how, throughout history, human intelligence has been described in terms of the prevailing belief system or technology of the day.

He begins with God’s alleged infusion of spirit into human clay and progresses through the “humours” implicit in hydraulic engineering, to the mechanisms of early clocks and gears, to the revolutions in chemistry and electricity that finally gave way to the computer age. And now we talk of “downloading” brains? Epstein isn’t having it.

Within days of his article’s appearance came a ferocious blowback, exemplified by the self-confessed “rage” of software developer and neurobiologist Sergio Graziosi’s response, tellingly entitled “Robert Epstein’s Empty Essay”. The earlier failures of worldly metaphors, Graziosi argues, do not entail that computer metaphors are similarly wrong-headed, and he provides a detailed and comprehensive review of the very real similarities, his anger only sometimes edging through. He also provides some useful links to other responses to Epstein’s piece, for people interested in sharing his rage.

For all this, the inherent weakness in metaphor remains: what is alike illuminates, but the dissimilarities obscure. The brain’s representations of the external world are not designed for accuracy; they are evolved for their hosts’ survival. Making this very point, in a more measured contribution to the debate, is neuroscientist Michael Graziano, writing in The Atlantic. His article, “A New Theory Explains How Consciousness Evolved”, tackles the not-so-hard problem from the perspective of evolutionary biology.

He describes how his Attention Schema Theory would be evolution’s answer to the surfeit of information flowing into the brain – too much to be, he says, “processed”. Perhaps, in the context of the Epstein/Graziosi dispute, he might better have said “assimilated”. Otherwise, full marks and thanks to Graziano.