
Intelligence does not grow in a petri dish

As there are neither agreed rules nor a generally accepted definition of intelligence, and no consensus on which consequences of human or machine behaviour betoken intelligence, natural or artificial, it will be difficult to measure the passing of any point of singularity at which machine intelligence matches and then exceeds our own.

It may well prove that, when we think we have got there, we will have supreme exemplars on both sides of the bio/non-biological intelligence divide asking us whether it any longer matters. And as our species approaches a moment of truth that may never obviously arrive, there will be a growing chorus of voices worrying that a bigger question than the definition of intelligence is the definition of the good human, when so much of what we might see as intelligence in its natural state is perverted in action by the festering agency of the seven deadly sins, animated by fear and enabled by ignorance.

Given the wide range of environments within which intelligence can reveal itself, and the vast spectrum of actions and behaviours that emerge within those environments, it may be the very definition of the fool’s errand to attempt an anywhere, anytime definition of intelligence itself. We can learn only so much from laboratory-based comparisons of brains and computers, for example, balancing physiological correlations in the one against mechanistic causations in the other.

Glimmerings of clarity emerge only when one intelligent agent is pitted against another in a task-oriented setting, and the victory of either is equated with some superiority of intelligence, when all that has really happened is that an explicitly defined task has been better addressed because its parameters could be articulated.

What appears to distinguish human intelligence in the evolutionary sense is the capability to adapt not only in the face of threats and existential fear, but in anticipation of imagined projections of all manner of dangers and terrors. We hone our intelligence in facing down multiple threats; we achieve wisdom by facing down the fear that comes with being human.

Fear is not innate to the machine but it is to us, as Franklin D. Roosevelt understood. However machines progress towards any singularity, humanity’s best bet lies in understanding how the conquering of fear will enhance our intelligence and our adaptive capability to evolve through the singularity, and beyond.

Collaboration is the new competitive advantage

For years if not decades, the buzzword in the worlds of innovation and enterprise, echoing through the lecture theatres of business schools and splattered across whiteboards in investment funds and marketing companies across the globe, has been disruption. Given the exponential growth in computing power, artificial intelligence and deep learning, it may well be that this awful word “disruption” will soon occupy the status of has-been. Of course it will still exist, but as a by-product of unprecedented advances rather than the reverently regarded target of some small-minded zero-sum game.

Consider an article that appeared a few days ago on the Forbes website, asking rhetorically whether the world-shifting changes that humanity requires, and to a large extent can reasonably expect to see, are likely to be achieved with a mindset calibrated to “disrupting” existing markets. For each of the described challenges – climate change, energy storage, chronic diseases like cancer, and the vast horizons of potential in genomics, biosciences, and immunotherapies – collaboration rather than competition is emerging as the default setting for humanity’s operating system.

Mental models based upon cooperative networks will begin replacing the competitive hierarchies that only just managed to keep the wheels on the capitalist engine as it clattered at quickening speed through our industrial age, lurching from financial crisis to military conflict and back again, enshrining inequalities and promoting the degradation of Earth’s ecosystem along the way. And why will things change?

Of course it would be nice to think that our species is at last articulating a more sustaining moral sense, but it won’t be that. It will simply be that the explosion of data, the insatiable demands upon our attention, and the febrile anxiety of dealing with the bright white noise of modern life will render our individual primate brains incapable of disrupting anything to any remarkable or useful effect.

The Forbes article concludes with admiration for what the digital age has been able to achieve, at least for as long as the efficacy of Moore’s Law endured. That law, however, is soon to be superseded by the emerging powers of quantum and neuromorphic computing, with a consequent explosion of processing efficiency that will take our collective capabilities for learning and thinking far beyond the imaginings of our ancient grassland ancestors.

Working together we will dream bigger, and disrupt far less than we create.

Consciousness is not as hard as consensus

Any review of the current writings on consciousness turns up the idea, sooner or later, that it is no longer the hard problem that philosopher David Chalmers labelled it two decades ago. There has been a phenomenal amount of work done on it since, much of it by people who seem pretty clear, in fact, on what it means to them. What is clearly much harder is getting them to agree with one another.

One of the greater controversies of this year was occasioned by psychologist Robert Epstein, who declared in an article in Aeon magazine entitled The Empty Brain that brains do not in fact “process” information, and that the metaphor of the brain as a computer is bunk. He starts with an interesting premise, reviewing how, throughout history, human intelligence has been described in terms of the prevailing belief system or technology of the day.

He begins with God’s alleged infusion of spirit into human clay, progresses through the “humours” implicit in hydraulic engineering and the mechanisms of early clocks and gears, and arrives at the revolutions in chemistry and electricity that finally gave way to the computer age. And now we talk of “downloading” brains? Epstein isn’t having it.

Within days of the article’s appearance came a ferocious blowback, exemplified by the self-confessed “rage” of software developer and neurobiologist Sergio Graziosi’s response, tellingly entitled “Robert Epstein’s Empty Essay”. The earlier failures of worldly metaphors do not entail that computer metaphors are similarly wrong-headed, and Graziosi provides a detailed and comprehensive review of the very real similarities, his anger only occasionally edging through. He also provides some useful links to other responses to Epstein’s piece, for readers interested in sharing his rage.

For all this, the inherent weakness of metaphor remains: what is alike illuminates, but the dissimilarities obscure. The brain’s representations of the external world are not designed for accuracy; they are evolved for their hosts’ survival. Making this very point, in a more measured contribution to the debate, is neuroscientist Michael Graziano, writing in The Atlantic. His article, “A New Theory Explains How Consciousness Evolved”, tackles the not-so-hard problem from the perspective of evolutionary biology.

He describes how his Attention Schema Theory would be evolution’s answer to the surfeit of information flowing into the brain – too much to be, he says, “processed”. Perhaps, in the context of the Epstein/Graziosi dispute, he might better have said “assimilated”. Otherwise, full marks and thanks to Graziano.

Humanity on the cusp of enhancement revolution

An article from the Pew Research Center takes a long look at the subject of human enhancement, reviewing the “scientific and ethical dimensions of striving for perfection.” The theme of transhumanism is getting a lot of media attention these days, and it was no surprise when Nautilus weighed in, capturing the ambivalence over aging in a single issue three months ago: one article explained “why aging isn’t inevitable”, while another in the same issue argued that it is physics, not biology, that makes aging inevitable, because “nanoscale thermal physics guarantees our decline, no matter how many diseases we cure.” Hmmm . . .

Taking another perspective, a third Nautilus feature speculated on the old “forget adding years to life, focus on adding life to years” chestnut, asking if the concept of “morbidity compression” might mean that 90 will “become the new 60.”

On the day that this year’s Olympics kick off in Brazil, we can conclude our round-up of key articles with a fascinating contribution to the enhancement debate by Peter Diamandis of SingularityHUB, speculating on what Olympian competition might be like in the future “when enhancement is the norm.” And it is this last headline link that brings into sharp focus the major point on which most media commentaries on enhancement agree: the key word is “norm”.

Enhancement is in the natural order of things and never really presents itself as a choice so long as it remains evolutionary: that is, moving so slowly that nobody much notices it. When change explodes with such momentum that nobody can fail to notice it, it begins to settle into being a new normal. And as Diamandis wraps up his extended thought experiment with a quick spin through synthetic biology, genomics, robotics, prosthetics, brain-computer interfaces, augmented reality and artificial intelligence, he concludes almost plaintively:

“We’ve (kind of) already started . . .”

As indeed we have. In today’s Olympian moment we can note that, whether or not human enhancement was part of “God’s plan” (as per the weakest section of the Pew article), the idea of Faster, Higher, Stronger certainly figured in the plans of Baron de Coubertin. Now, can this also mean Smarter? Left hanging in the otherwise excellent Pew piece is the question of whether a “better brain” might enable a better mind or, at least, a higher capacity for clearer and more creative thinking. Can we move our thinking up a gear?

Could medical research be more adventurous?

An article last month on the Nautilus website posed an interesting question: “Why is Biomedical Research So Conservative?” Possible answers were summed up in the sub-headline: “Funding, incentives, and scepticism of theory make some scientists play it safe.” This is not to say that there is insufficient imagination going into applying advances in machine learning and data management to improve health outcomes. In fact, another article came out at about the same time on the Fast Company website, discussing “How Artificial Intelligence is Bringing us Smarter Medicine”. It described a host of impressive advances under five headings: drug creation, diagnosing disease, medication management, health assistance, and precision medicine.

One could say: ah yes, but that is machine smarts at the applied rather than the theoretical end of medical research. That is less true, though, of the first and last of these categories: supercomputers are very much engaged in the analysis of molecular structures from which it is hoped new therapies will emerge, and in the vast data sets being created within genomics as we move into a new era of precision, personalised medicine.

Nevertheless, it does seem that pure research in biology – as distinct from physics – has been playing it comparatively safe, and the Nautilus article provides evidence for its ruminations. A natural-language-processing analysis of no fewer than two million research papers, and of 300,000 patents arising from work with 30,000 molecules, showed a remarkable bias towards conditions common in the developed world, with an emphasis on areas where the research roads are already well travelled – predominantly cancer and heart disease.
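As a rough illustration of the kind of analysis involved – a minimal sketch, not the Nautilus study’s actual method, data or vocabulary – here is how one might estimate topical bias by counting how many abstracts in a (hypothetical) corpus mention each condition:

```python
# Minimal sketch (not the study's pipeline): estimate topical bias by counting
# how many abstracts mention each condition at least once. The condition terms
# and sample abstracts below are hypothetical placeholders.
from collections import Counter
import re

conditions = {
    "cancer": ["cancer", "tumour", "oncology"],
    "heart disease": ["cardiac", "cardiovascular", "heart disease"],
    "malaria": ["malaria", "plasmodium"],
    "tuberculosis": ["tuberculosis", "tb"],
}

def condition_counts(abstracts):
    """Count how many abstracts mention each condition at least once."""
    counts = Counter()
    for text in abstracts:
        lowered = text.lower()
        for condition, terms in conditions.items():
            if any(re.search(r"\b" + re.escape(t) + r"\b", lowered) for t in terms):
                counts[condition] += 1
    return counts

if __name__ == "__main__":
    sample = [
        "A novel cardiac biomarker for cardiovascular risk stratification.",
        "Tumour microenvironment signalling in breast cancer.",
        "Checkpoint inhibition in oncology: a review.",
        "Artemisinin resistance in Plasmodium falciparum malaria.",
    ]
    counts = condition_counts(sample)
    total = len(sample)
    for condition, n in counts.most_common():
        print(f"{condition}: {n}/{total} abstracts ({100 * n / total:.0f}%)")
```

Run across millions of real abstracts rather than four toy ones, a tally like this is the crudest possible version of the skew the study describes; the point is only to show the shape of the measurement.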

My own communications work in the dementia environment – specifically Alzheimer’s disease – suggests that another reason may be in play. Where medical research has been less conservative, more adventurous, and broadly more successful, there has been more collaboration and shared excitement around a commonly perceived mission. The more open-source, zealous, and entrepreneurial ecosystem that has applied for decades in heart and cancer research – and that we now see on steroids in the field of artificial intelligence – has yet to capture the imagination of the wider biomedical community, where the approach remains generally more careful, more academic and inward-looking: just, more conservative.

Delusions of immortality

Although the Daily Mail is not a hotbed of deep thinking on the biology of aging and the potential of human immortality, the fact that it has produced a long feature on the digital uploading of brains is indicative of how the topics of human and artificial intelligence are becoming mainstream matters of interest. Typically, the article uses the terms “brain” and “mind” pretty much interchangeably. There is no consideration of how any one person’s mind might be anything more than the animation of thoughts and feelings within the three pounds of blancmange that resides within our skulls.

Among the unchallenged and carelessly crafted assumptions driving this piece, the biggest is the absence of any reflection on the question of identity in considering the possibility of the immortality of the mind. In short, in what sense does Bill Bloke remain Bill Bloke when he is uploaded to the computer, or reconstituted through stem cell interventions on brains maintained on life support, or reanimated after some sort of cryonic limbo?

From all of these rapidly evolving technologies it is clear that something is going to emerge that is distinct from anything our world has to show now, even if marketing and wishful thinking will ensure that the early stages of this febrile new world are a grab bag of simulations, avatars, holograms, and downright hoaxes. But however many iterations of the real deal we evolve to throw into that grab bag, nowhere in its darkness will we find good old Bill himself. Why is that?

One of our most significant cognitive biases is the apparently all-encompassing reality of the here and now: our brains perceive the world as a ticking and wholly immersive real thing from which brain and mind are separate phenomena. Except that they aren’t. Whatever the chances of the existence of parallel universes, the fact is that we are seven billion people spinning along in this one, with a definite sense of July 2016ishness about the world we think we know. Geographic relocation or a short time in a coma can convey a sense of immense disorientation, but the times and places and people that collectively define us and shape our minds would all be absent from the new reality of the reconstituted Bill: awakened to a new place, a new time, and a new world of eternal bemusement.

Bear necessities of augmented intelligence

A favourite joke involves two barefoot guys in the forest who spot a bear off in the distance, lumbering towards them and closing fast. They have to run for it, but one stops first to put on his running shoes. The other laughs and asks if the running shoes are going to enable him to outrun the bear. Finishing his lacing, the first guy smiles and says: “I don’t need to outrun the bear; I just need to outrun you.”

As to the anticipated challenge of matching the powers of artificial intelligence when it supersedes human capacities – when it graduates to “super” status and leaves humanity floundering in its dust – we may take a lesson from the bear joke and wonder if there is a middle way.

It appears from all the media commentary that we have little idea as yet how to design superintelligent AI (SAI) that will not obliterate us, whether by accident, through its own malign design, or by some combination of the two, possibly exacerbated by the purposeful interventions of thoroughly malign humans. Can we at least get smart enough to do what we cannot yet do, and find a way of programming SAI to help us solve this?

Without getting into the philosophical difficulties of bootstrapping something like intelligence, two things are clear: we must get smarter than we are if we are to solve this particular problem, and brains take a long time to evolve on their own. We need an accelerant strategy, and it will take more than brain-training. Research must proceed more quickly in brain-computer interfaces, nano-biotechnology and neuropharmacology, and in the sciences of gene editing and deep-brain stimulation. While research into several of these technologies has been driven by the treatment of impairments such as movement disorders and depression, their potential for enhancing cognitive function is attracting greater interest from the scientific and investment communities. It is definitely becoming a bear market.

Time has ticked a heaven round the stars

In the green fuse of the young Dylan Thomas’ imagination we find the perfect description of how humanity’s knowledge of the universe has proceeded from mute awe to a better but still imperfectly informed wonder. All it took was time – and science – and our notion of heaven was transformed from the clumsy metaphor of celestial theme park to something far richer, more vast and various, and beautiful beyond comprehension. And most wondrous of all, we humans are not only actively immersed within this heaven, albeit on the nanoscale, but we are conscious of being so, and of being so in the here and now.

It is easy for us to see today how religions ignite. While all other species seem happy to proceed from meal to meal without any need for meaning along the way, Homo sapiens has sought explanations, patterns, and a sense of its place above and beyond the brutish rants and ruttings of daily life. Given what we knew about what we flattered ourselves to suppose was the universe two thousand years ago, it is not surprising that the revelations and rules comprising the Pentateuch, Bible and Koran emerged as the defining Operating Manual for Life on Earth. And knowing how much more we understand now about what we didn’t know then, and given our inbred venalities and credulity, it is even less surprising that these religions caught on.

With what has happened over the last two millennia – and in science and technology, what has transpired particularly over the last two centuries – it would be stranger if anyone were now to propose one of the “great faiths” as a credible belief system for today’s world. (Although, as a reminder of the limpet-like tenacity of human credulity, it is less than two centuries since the appearance of the “Book” of Mormon.) But on balance, our cognitive horizons continue to expand and, with them, our aspirations for new frontiers of intelligence and wonder in an enlarging universe. We may not wonder at the answers conceived by religion in the infancy of our species but, in our progress beyond I Corinthians 13:11 to the irony of John 8:32, we open up new vistas of potential in the flowering of human intelligence. The truth can indeed set us free, although perhaps not in the manner that the Jesus of scripture intended.

Consciousness: scanning & thinking get us closer

There are lots of smart philosophers around, and they don’t agree on a definition of consciousness. Daniel Dennett thinks it is an illusion: “our brains are actively fooling us”. It is nothing like an illusion for David Chalmers, who sees it as being so impenetrable to our understanding as to be THE Hard Problem of humanity – although not so hard as to be beyond joking about: he has also characterised it as “that annoying time between naps”. More recently, in The New York Times, Galen Strawson was similarly light-hearted, invoking Louis Armstrong’s response to a request for a definition of jazz: “If you gotta ask, you’ll never know.” More seriously, he offers the thought that we know what it is because “the having is the knowing”. While this leaves open the question of whether it is equally true of cats and people, the musings of the Dennetts, Chalmerses and Strawsons of the world make it clear that anyone who thinks philosophy is dead is certainly insensible, if not downright nuts.

Perhaps what makes the problem hard is the attempt to define it from within it. To borrow from Strawson, can we understand consciousness better by examining the border between having it and not having it? What is the catalyst that tips it from not being into being? Research out of Yale and the University of Copenhagen may have nudged us closer to an answer. Using PET scanners to measure the metabolism of glucose in the brains of comatose patients, researchers were able to predict with 94% accuracy which ones would recover consciousness. It appears that the “level of cerebral energy necessary for a person to return to consciousness (is) 42%.” Bemused readers of The Hitchhiker’s Guide to the Galaxy will grasp the significance of the number 42 as the “Answer to the Ultimate Question of Life, the Universe, and Everything.”
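Stripped of clinical detail, the reported finding amounts to a simple threshold rule: a patient whose cerebral glucose metabolism exceeds roughly 42% of the normal level is predicted to regain consciousness. Here is a minimal sketch of that rule, using hypothetical patient values rather than the study’s data:

```python
# Minimal sketch of the reported decision rule: a patient is predicted to
# regain consciousness if cerebral glucose metabolism exceeds ~42% of the
# normal level. The patient values below are hypothetical, not study data.
RECOVERY_THRESHOLD = 0.42  # fraction of normal cerebral glucose metabolism

def predicts_recovery(metabolic_fraction: float) -> bool:
    """Apply the 42% threshold to a single patient's PET-derived measure."""
    return metabolic_fraction > RECOVERY_THRESHOLD

patients = {"A": 0.55, "B": 0.38, "C": 0.43}  # hypothetical fractions of normal
for name, fraction in patients.items():
    verdict = "likely to recover" if predicts_recovery(fraction) else "unlikely to recover"
    print(f"Patient {name}: {fraction:.0%} of normal metabolism -> {verdict}")
```

The 94% accuracy, of course, is what the researchers reported for their own patients; the sketch only shows how such a cut-off would be applied, not how well it performs.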

AI, lucid dreams and electric bananas

As we develop our ideas about brain science and machine learning, it is easy to sense how the limitations of language lead us to talk of human and machine intelligence as if they were the same thing. That they are not is most obvious when we talk about dreams.

An article appeared on the Slate website last summer, asking: “Do Androids Dream of Electric Bananas?” More intriguing was the strapline: “Google’s DeepDream is dazzling, druggy, and creepy. It’s also the future of AI”. What almost inspired a BAMblog at the time was a quoted comment mistakenly describing DeepDream’s creative output as “kitsch” – an aesthetic judgement – when it should more properly be described as derivative – a functional judgement. The resulting “So what?” being left unanswered, the blog went unwritten and the idea was put to sleep.

A small riot of stories over the past couple of weeks has revived the idea, however, while illustrating how advances in neuro-imaging are enhancing our understanding of the correlations between stimulus and response in the human brain. We learn of the semantic atlas at the University of California, Berkeley, mapping the parts of the brain that react to words and ideas, and we read about the Imperial College London study of the “Brain on LSD”, with its intriguing sidebar on the enhancement of the experience when music is added to the mix.
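For readers wondering what a “semantic atlas” involves mechanically, the core idea can be caricatured as an encoding model: represent each word as a vector of features and fit a regression that predicts each brain region’s response from those features. The toy sketch below uses simulated data and a plain ridge regression; it is an assumption-laden illustration of the general technique, not the Berkeley team’s actual pipeline or dataset.

```python
# Toy encoding-model sketch of the "semantic atlas" idea: fit a linear map
# from word features to simulated voxel responses, then ask which voxel a
# new word drives most strongly. All data here are simulated; this is not
# the Berkeley study's method or data.
import numpy as np

rng = np.random.default_rng(0)
n_words, n_features, n_voxels = 200, 10, 5

word_features = rng.normal(size=(n_words, n_features))   # e.g. word-embedding dimensions
true_weights = rng.normal(size=(n_features, n_voxels))   # hidden feature-to-voxel map
voxel_responses = word_features @ true_weights + 0.1 * rng.normal(size=(n_words, n_voxels))

# Ridge regression (closed form) from word features to voxel responses.
lam = 1.0
W = np.linalg.solve(
    word_features.T @ word_features + lam * np.eye(n_features),
    word_features.T @ voxel_responses,
)

new_word = rng.normal(size=(1, n_features))               # features of an unseen word
predicted = new_word @ W
print("Predicted voxel responses:", np.round(predicted, 2))
print("Most responsive voxel:", int(np.argmax(predicted)))
```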

Coming along as a companion to the Imperial story, a piece on Psychology Today reveals “How to have a mystical experience”, suggesting that the conscious release of analytic control to our emotionally-driven limbic promptings can induce a sense of consciousness without boundaries, cosmic unity, and transcendence . . . maa-an. Topping it all is no less eminent an authority than a BuzzFeed listicle with “15 lucid dreaming facts that will make you question reality”. Interesting but unlikely, that; although it serves to remind us that as creatures who can think about our thinking and be conscious that we are dreaming, we are doing things that AI cannot do. Yet.