Tag Archives: Intelligence

Intelligence does not grow in a petri dish

Since there are neither agreed rules nor a generally accepted definition of intelligence, nor any consensus on which consequences of human or machine behaviour betoken intelligence, natural or artificial, it will be difficult to know when any point of singularity has been passed and machine intelligence has matched and exceeded our own.

It may well prove that, when we think we have got there, supreme exemplars on both sides of the bio/non-biological divide will be asking us whether it any longer matters. And as our species approaches a moment of truth that may never obviously arrive, a growing chorus of voices will worry that a bigger question than the definition of intelligence is the definition of the good human, when so much of what we might see as intelligence in its natural state is perverted in action by the festering agency of the seven deadly sins, animated by fear and enabled by ignorance.

Given the wide range of environments within which intelligence can reveal itself, and the vast spectrum of actions and behaviours that emerge within those environments, it may be the very definition of a fool’s errand to attempt an anywhere, anytime definition of intelligence itself. We can learn only so much from laboratory-based comparisons of brains and computers, for example, balancing physiological correlations in the one with mechanistic causations in the other.

Glimmerings of clarity emerge only when one intelligent agent is pitted against another in a task-oriented setting, with victory for either being equated with superior intelligence, when all that has really been shown is that an explicitly defined task is better addressed by the agent best suited to its stated parameters.

What appears to distinguish human intelligence in the evolutionary sense is the capability to adapt not only in the face of threats and existential fear, but in anticipation of imagined projections of all manner of dangers and terrors. We hone our intelligence in facing down multiple threats; we achieve wisdom by facing down the fear that comes with being human.

Fear is not innate to the machine but it is to us, as Franklin D Roosevelt understood. However machines progress towards any singularity, humanity’s best bet lies in understanding how the conquering of fear will enhance our intelligence and our adaptive capacity to evolve through the singularity, and beyond.

Rosetta, Peres and Trump: a study in contrasts

On the day that the Rosetta mission reached a deliberate and lonely climax on a distant comet, and Shimon Peres was buried in Israel, we saw the peaking of two great narrative arcs that define so much of the glory of what it means to be human. The first represents another great triumph of science: a research journey further into space than our species had ever ventured while observing in such detail along the way. The legacy is a mountain of data for scientists to assimilate for decades to come, following the last pulse of intelligence from the expired spacecraft itself.

The second is up there with the Mandela story: Shimon Peres, international statesman and Israeli icon, a man of peace who could bring the planet’s greatest and best to attend his funeral. But like Mandela before him, Peres shines especially as a man whose odyssey took him through violence to an understanding that there is more security and happiness in peace than there is in war. Tough getting there, tough staying there, but worth the effort – and inspiring to everyone who believes that as monkeys became human, so humans may one day become something better yet.

Rosetta and Peres, science and statesmanship, collaborate on this day to remind humanity of the benefits of evolutionary progress.

Agnotologist Donald Trump stands apart from both. He too has become an icon: not of progress and hope but of the wages of ignorance, the triumphs of fear and bias, the subordination of means to ends and the subversion of truth to the primacy of the pre-ordained outcome. While he himself represents no triumph of evolution, he is at least prompting reflections on how the human mind works (or doesn’t), particularly in its possible impact on other minds.

Another Donald once bemused the world with his musings on “known knowns” – the things we know that we know. He distinguished them from things we know we don’t know, and the unknown things that remain unknown to us. In ignoring the fourth permutation – the unknown knowns – the Donald that was Rumsfeld overlooked the very patron saint of ignorance.

So many things were known to Shimon Peres, and are known to contemporary science, that will forever be unknown to Donald Trump. His universe of ignorance remains as bleak and alien and dead as the distant comet with which humanity has at least established a first connection.

Consciousness is not as hard as consensus

Any review of the current writings on consciousness turns up the idea, sooner or later, that it is no longer the hard problem that philosopher David Chalmers labelled it two decades ago. There has been a phenomenal amount of work done on it since, much of it by people who seem pretty clear, in fact, on what it means to them. What is clearly much harder is getting them to agree with one another.

One of the greater controversies of this year was occasioned by psychologist Robert Epstein, who declared in an Aeon magazine article entitled The Empty Brain that brains do not in fact “process” information, and that the metaphor of brain as computer is bunk. He starts from an interesting premise, reviewing how, throughout history, human intelligence has been described in terms of the prevailing belief system or technology of the day.

He begins with God’s alleged infusion of spirit into human clay and progresses through the “humours” implicit in hydraulic engineering, to the mechanisms of early clocks and gears, to the revolutions in chemistry and electricity that finally gave way to the computer age. And now we talk of “downloading” brains? Epstein isn’t having it.

Within days of his article’s appearance came a ferocious blowback, exemplified by the self-confessed “rage” of software developer and neurobiologist Sergio Graziosi’s response, tellingly entitled “Robert Epstein’s Empty Essay”. The earlier failures of worldly metaphor, Graziosi argues, do not entail that computer metaphors are similarly wrong-headed, and he provides a detailed and comprehensive review of the very real similarities, his anger only sometimes edging through. He also provides some useful links to other responses to Epstein’s piece, for readers interested in sharing his rage.

For all this, the inherent weakness of metaphor remains: what is alike illuminates, but the dissimilarities obscure. The brain’s representations of the external world are not designed for accuracy; they evolved for their hosts’ survival. Making this very point, in a more measured contribution to the debate, is neuroscientist Michael Graziano, writing in The Atlantic. His article, “A New Theory Explains How Consciousness Evolved”, tackles the not-so-hard problem from the perspective of evolutionary biology.

He describes how his Attention Schema Theory would be evolution’s answer to the surfeit of information flowing into the brain – too much to be, he says, “processed”. Perhaps, in the context of the Epstein/Graziosi dispute, he might better have said “assimilated”. Otherwise, full marks and thanks to Graziano.

Bear necessities of augmented intelligence

A favourite joke involves two barefoot guys in the forest who spot a bear off in the distance, lumbering towards them and closing fast. They have to run for it, but one stops first to put on his running shoes. The other laughs and asks if the running shoes are going to enable him to outrun the bear. Finishing his lacing up, the first guy smiles and says: “I don’t need to outrun the bear; I just need to outrun you.”

As for the anticipated challenge of matching the powers of Artificial Intelligence once it supersedes what humans can do – once it graduates to “Super” status and leaves humanity floundering in its dust – we might take a lesson from the bear joke and wonder whether there is a middle way.

It appears from all the media commentary that we have little idea of how to design SAI that will not obliterate us, whether by accident, through its own malign design, or by some combination possibly exacerbated by the purposeful interventions of thoroughly malign humans. Can we at least get smart enough to do what we cannot yet do, and find a way of programming SAI to help us solve this?

Without getting into the philosophical difficulties of bootstrapping something like intelligence, two things are clear. We must get smarter than we are if we are to solve this particular problem; and brains take a long time to evolve on their own. We need an accelerant strategy, and it will take more than brain-training. Research must proceed more quickly in brain-computer interfaces, nano-biotechnology and neuropharmacology, and in the sciences of gene editing and deep-brain stimulation. While research into several of these technologies has been driven by clinical conditions such as movement disorders and depression, their potential to enhance cognitive function is attracting greater interest from the scientific and investment communities. It is definitely becoming a bear market.

Re-purposing old brain pathways to new ends

Against the vast sweep of evolutionary time, the mere c. 8,000 human generations through which our species has evolved will not have seen a huge change in the interplay of physics, chemistry and biology that distinguishes the functioning of the human brain. Our first old grandpa, standing on his ancient plain, will have gazed at the moon with something pretty close to the neural equipment we now possess. His first thoughts will have been for the importance of moonlight in betraying the predator or illuminating the water hole.
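That round figure is, of course, only a back-of-the-envelope estimate; assuming, say, roughly 200,000 years of anatomically modern humans and an average generation of about 25 years (both assumptions rather than established figures), the arithmetic runs:

\[
\frac{200{,}000 \text{ years}}{25 \text{ years per generation}} = 8{,}000 \text{ generations}
\]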

Later on, in appreciating the moon’s significance, his anthropomorphising imagination would divine a personality to whom appropriate propitiations might be directed to ensure a healthy crop – or so he would imagine. It would be another 7,998 generations before that same brain would figure out how to put us on the moon that instigated all this lunar thinking in the first place.

Brain scientists at Carnegie Mellon University have moved us closer to grasping how it is that old brains can surpass old dogs in learning new tricks. Enhanced scanning technologies have enabled researchers at the university’s Scientific Imaging and Brain Research Center to show that specific areas of the brain, once attuned to meeting the challenges of survival, are the same areas that now grasp the principles of advanced physics. It appears, for example, that the same neural systems “that process rhythmic periodicity when hearing a horse gallop also support the understanding of wave concepts” – wavelengths, sound and radio waves – in physics. It is thought that such knowledge will improve the teaching of science.

The CMU team are by no means the only scientists pursuing this line of enquiry. A selection of stories gleaned from just a few days of monitoring the brain science media reveals one study that “finds where you lost your train of thought”; another that believes it can pinpoint where personality resides in the brain (that would be the frontoparietal network); and yet another that can distinguish you by reading your brainwaves with no less than 100% accuracy. And this is all before we get into the vast phantasmagoria of scientific explorations of the human brain on magic mushrooms, ecstasy and LSD. And we will leave music for another day.

What if women designed more AI?

Much of the commentary on the overlap between robots and women is depressingly reflective of male needs and sensitivities, and of what all of this might mean for men. It can seem endless, from all the chatter about Ex Machina and its ilk, through tabloid wails about “bonkbots” that turn men into misogynist monsters, and on to the frenzy that provoked Microsoft’s recent TayBot debacle and inspired one of our more recent blogs here on BAM!

One of the more thoughtful articles on this topic is this review on Quartz. Against the unsettling statistic that women earn only around 20% of American undergraduate degrees in computer science, the article contains a series of reflective comments from a few of the women who have made it. Where more humanistic objectives are undervalued in favour of technocratic, adversarial ones (“My robot can whup your robot . . .”), it is not surprising that the public perception grows that the future of AI is not about service benefits but about the risks of robots run amok. In the face of these more apocalyptic scenarios, the Quartz review offers a heartening summary of projects being directed by women working with kinder, gentler robots.

Last week saw the publication of a sobering article in The Guardian, assuring us that “the tech industry wants to use women’s voices . . .” (i.e. for the Siris and Cortanas of the world) “. . . but they just won’t listen to them” (i.e. in working out how Siri et al will respond to real-life queries about, say, sexual assault). And so it will continue for as long as “she” is expected to respond to questions that “she” does not understand. “Her” outputs clearly need more female inputs.

Could genius be genetically engineered?

A fascinating discussion is bound to develop around this topic when your chosen panel includes psychologist Steven Pinker and two professorial colleagues, Dalton Conley (social sciences) and Stephen Hsu (theoretical physics). The resulting video, filmed last week at New York City’s 92nd Street Y, would have come in at rather less than its hour-plus running time if the moderator had actually moderated rather than using the panel as a skittles run for his own theorising, but some key reflections couldn’t help bursting through.

Even the specialist panellists have been surprised at the pace of the science over the last few years, and at the resulting wider accessibility of gene editing technology as the knowledge spreads and costs plummet. On the basis that nature has won any nature v nurture debate, and that any feature of a living organism with a genetic foundation can be genetically engineered to enhance or diminish the effect of the gene or genes involved, it would appear that the answer to the headline question will be yes, even if it is not happening yet. More pressing than the “could it?” question will be those questions related more to ethics and values than to data and science. These include “will it?”, “by whom?”, and “to what purpose?”

How we manipulate both nature and nurture in enhancing intelligence is familiar to anyone who chooses a smarter mate, a better school or a healthier lifestyle. In considering the viability of genetic engineering as just another choice to be made, we will be weighing the same filters of benefit over risk, fairness, accessibility, and the opportunity cost to our species of not taking a significant opportunity if and when it presents itself. Would we hobble ourselves if the price of doing so were our own extinction?

Extending mind through technology: not yet

An article in the Huffington Post speculates on the potential for technological devices to act as extensions of mind, inasmuch as mental activity that used to take place within the human skull is now effectively outsourced to The Cloud or to a small constellation of memory retrieval and calculating devices.

The Extended Mind hypothesis of David Chalmers and Andy Clark is invoked in support of the idea that the outsourcing of certain activities of mind bestows a kind of mind identity on the external device, when really all that has happened is that a slave has been identified to assume a function, creating the usual sort of co-dependency that typifies master-slave relationships.

But the fact that an external technological device has taken on a function previously assumed by the mind does not entail that the device itself becomes a mind, or an extension of a mind. A memory aid is a memory aid. When it can assume more of the contextualising work previously done by the mind in understanding the what or why of the memory or the sum, it might make more sense to accord it a status upgrade.

Beware pre-Singularity marketing babble

Whatever forecasts are made for the moment when machine intelligence exceeds the power of human intelligence, one can be sure that the hypesters will be out ahead of the science, babbling of capabilities achieved before they have even been thought through properly. An early example of this phenomenon can be found on a blog created by change consultants Frost & Sullivan. Its author claims that increasingly sophisticated research tools are going to enable us “to search greater numbers of documents and sources and pull out greater insight more quickly”.

We should add the word “insight” to the lexicon of terms over which care must be taken in navigating the evolution of machine intelligence: words like consciousness, reflection, wisdom – even intelligence itself. Insight is wisely seen as a penetrative understanding of the true nature of something, set in a potentially fathomless context of complexity. Maybe algorithms will one day be capable of plumbing the depths of such complexity, but it’s way too early now to be talking of “Insight-on-Demand”.

Tellingly, this blog refers clumsily to an early example of intelligent search software as “a canary down the mine for researchers”. The context suggests that the metaphor is meant as a “precursor” to better software, when in truth the canary served historically as a warning. The danger in assuming too much about algorithmic search, however sophisticated, lies in the potential for suspending critical thought out of deference to software that, however intelligent, is not yet sufficiently wise.

Rising IQs and belief in God

Evidence accumulates to the effect that, notwithstanding junk food, reality TV and celebrity culture, humanity is actually getting smarter. The Flynn Effect to which this article refers is the phenomenon named for New Zealand-based philosopher James Flynn, who has built a thoughtful and engaging career on his early observation that IQs have been rising worldwide over the course of the last century.

Flynn’s work distinguishes crystallised intelligence – what we learn over time, and how we apply that learning – from fluid intelligence, which involves abstract thinking and reasoning. Flynn identifies the latter kind of intelligence as the engine room of humankind’s increasing cognitive strength.

Ironic smiles will come to those who recall the famous Atlantic article, “Is Google Making Us Stupid?”, still resonant seven years after it appeared. If our “fluid IQs” are indeed rising, it may be that we are learning to play to our strengths. The irony would be that this is happening just as Narrow AI turns us into stupider versions of machines whose data manipulation and information retrieval skills so far exceed the crystallised cognitive capabilities of our own attention deficit-disordered brains.

Given the inverse correlation Flynn observes between rising IQs and belief in God, the question to ask is this: if AI evolves from what is programmed into Narrow AI towards a Super AI that thinks for itself, will a defining characteristic of mature AI be its atheism?