As our intelligence develops, so formal religion recedes

An article in The Wall Street Journal last weekend gave organised religion a rather gentle ride, suggesting that while it is declining it could make a comeback if times get tough again. Philosopher Daniel Dennett, a long-time critic of religion but here in his Sunday best, suggested that huge increases in self-described “Unaffiliateds” could be ascribed to greater prosperity and comparative security for the majority of the global population. He speculated that the trend might in the same vein be reversed if mankind were beset with a rash of natural calamities, or war.

It must be true that religion is a natural refuge in the face of adversity, and that good times will make people feel more secure, if not complacent. We certainly will be hearing all about it if the day comes when people are being mass-murdered by their iPhones, and God’s variously chosen peoples are reminding us of the wages of tinkering with His Will.

Yet for all that we can agree with Dennett, a more powerful motivator than fear in the retreat from formal religion is the rising standard of education around the world. Not only do we know more, but we are wiser in how we use this learning. It is remarkable that where one type of explanation is superseded by another, it is always the theistic that gives way to the scientific – not the other way around. Through our understanding of history, anthropology and neurology, we are coming to grips with how our thoughts and emotions became entangled in humankind’s early and comparatively ignorant collisions with nature.

AI? Brain intelligence? What else comes to hand?

What if survival depends upon more than intelligence as manifested in what neuroscience identifies as the human brain? Or to turn it around somewhat, to what extent may intelligence alone be insufficient to guarantee survival? In our continuing deliberations on the evolving balance between human and Artificial Intelligence, it might be helpful to ask whether the parameters of the former “artificially” narrow the terms of reference by equating our intelligence with what goes on within, and only within, the human skull.

We will leave for another time deliberations on extended cognition, considerations of the brain-in-the-world, and the impact on our thinking of the working of our gut.

It is the role of the hand in the evolution of human intelligence that might possibly inspire greater deliberation than it does, and not just because the gap between human and chimpanzee hand functioning is greater than that between human and chimpanzee brains. (For the doubters, and especially those who fancy the curious notion that a monkey, given enough time on a typewriter, might tap out Hamlet: consider how long we will wait for it to play us some Rachmaninov.)

Going back at least as far as Samuel Taylor Coleridge and coming up to today and the work of such philosophers as Colin McGinn and Raymond Tallis, there has been a great deal – though arguably not enough – of fascinating speculation on hands and the role they have played in shaping us as thinking moral agents.


Human Brain Project getting a makeover: will it be enough?

Last month’s news about the European-led Human Brain Project (HBP), summarised in Scientific American with the title “Human Brain Project Gets a Rethink”, concluded years of growing disquiet among the European neuroscience community about the aims of the project (too narrow, not enough biology) and its governance (too autocratic). In what seems as much a makeover as a rethink, an independent committee identified the correctives that may help. Whatever our species may or may not know about brain science, we can get our heads around good governance.

The problem remains that we are much less steady with the brain science. It is often pointed out that the moon programme of half a century ago required a massive focusing of will and resources but, comparatively speaking, not a lot of new science. The complexities of rocket science have always been joked about as inaccessible to the common man, but a lot of uncommon men and women knew what they needed to know, as Neil Armstrong was pleased to discover.

The biological bridge between the physiology and workings of the brain, and the psychology of behavioural outputs, however, remains shrouded in mist. There is no informing theory of the transition between electrical stimulations within the blancmange between our ears and our self-aware musings on chickens and eggs. We know that somewhere along that pathway is the sparking of consciousness, but there remains no agreement among our top minds as to what that is, or indeed whether the problem is actually a “hard” one at all. If the HBP has sorted out the autocracy AND restored cognitive and systems biology to the programme curriculum, that has to be for the better.

The Turing test tests what?

Commentary on the Cumberbatch biopic of computer pioneer Alan Turing, The Imitation Game, is dominated by almost total admiration for its fairly conventional narrative arc: a genius triumphs over adversity and saves the world at considerable cost to himself. Less explicit in much of that commentary is the role of identity in questions of ascribing human qualities, including intelligence. Left unasked is the question about the people doing the ascribing: how intelligent are they?

The Turing test, inspired by the man himself, is focused on identity, a quality too often ignored in the journey of intelligence from apprehending through computing to reflecting and beyond: into consciousness, then self-consciousness, and on to the establishment of identity. When a computer has negotiated that journey so successfully as to seem, to a human judge, indistinguishable from a human, it is said to have passed the Turing test. (More than most people will ever want to know about this test can be found here.)

But to what extent is this test more about marketing than about intelligence? What is the Turing test status of a computer that can fool a room full of the ignorant and credulous, but not a room full of poets or computer boffins? Can a less able machine succeed where a smarter machine would not, simply because it was programmed with the specific goal of fooling people into thinking it was human?

Surely if the purpose of the test is to adjudicate on ascribed intelligence, wouldn’t a more appropriate challenge be to pit a computer against a human and tax another computer with spotting the difference?
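
To make the thought experiment concrete, here is a minimal sketch of that machine-judged variant in Python. Everything in it – the respondents, the judge, the questions – is a hypothetical stand-in invented for illustration, not a real system or benchmark; the coin-flipping judge is there precisely to show where the hard part lies.

```python
import random

def human_respondent(question: str) -> str:
    # Stand-in for a person typing an answer.
    return "I suppose it depends on what you mean by " + question.split()[0].lower()

def candidate_machine(question: str) -> str:
    # Stand-in for the machine under test.
    return "That is an interesting question about " + question.rstrip("?").lower()

def machine_judge(answer_a: str, answer_b: str) -> str:
    # Stand-in verdict: a real machine judge would need its own model of
    # "humanness"; this one just guesses, which is exactly the open problem.
    return random.choice(["A", "B"])

for q in ["Why do you like Rachmaninov?", "What does irony feel like?"]:
    a, b = candidate_machine(q), human_respondent(q)
    print(f"Q: {q}\n  A: {a}\n  B: {b}\n  Judge picks the human as: {machine_judge(a, b)}\n")
```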

The Bible is an exercise in guessing for utility

Today’s lead item among the Lively Stuff (below) is another example of humanity’s attempt to post-rationalise holy scripture out of an earlier and more credulous age. In the case of the story from the South China Morning Post, a biologist re-interprets “Let there be light” in the context of the evolution of light-sensitive body parts. The result is a claim that when the Bible is taken figuratively rather than literally, “. . . it not only keeps pace with the hottest science, it precedes or heralds it.”

This raises more questions than it answers: who does the figuring, within what cultural context, and in order to underpin what belief? Nor does it prevent the majority of the world’s believers – Judeo-Christians with their Bible, Muslims with their Koran – from carrying on contentedly in a wholly literal apprehension of their holy text. Even so, there exist quasi-scientific apologists for holy miracles such as the Great Flood and the parting of the seas, as if combinations of comet, tides and wind could provide rational explanations (without explaining the coincidence of these phenomena at precisely the hour of the Israelites’ need).

We don’t need such imagination when combining the teachings of neurology and history to understand the context in which scripture was born. The key is that how we knew what was known mattered far less when we didn’t know much. Far more important was how we used that knowledge, and how we behaved. Believe as you like, but mess with The Man and you burn eternally. That got people’s attention.

It is interesting to witness how the inexorable stock-piling of information throughout history has increased our store of knowledge, with a commensurate decline in belief systems built on miracles, even those that might now be dressed in scientific garb.

Distraction as an intelligence aid, er . . . hold on . . .

If we type “multi-tasking boosts intelligence” into Google today, we secure 606,000 results in 0.41 seconds. Before looking at the quality of those results, we are easily distracted by speculation as to the role distraction may have played in delaying their arrival by as much as 0.21 seconds. Could the task have been done in half the time if the search engine had not paused to read an email, status-check its Facebook page, respond to an SMS query on the supply of toilet paper and then browse through a list of ten things it didn’t know about Brad Pitt?

When we take a moment to consider the return on investing time in thinking about the implications of thinking about this, we have another thing to be humble about. Google – and by extension any machine learning process on the nursery slopes of Mount AI – will not be distracted in the pursuit of its objectives. It can focus.

There is yet more worry for those exercising what yesterday’s BAMblog referred to as meat-based intelligence. It was summed up nicely by Nicholas Carr in an article in Wired Magazine all of five years ago, but much referenced since. In considering how “The web shatters focus, rewires brains” he commented that: “ . . . our online habits continue to reverberate in the workings of our brain cells even when we’re not at a computer. We’re exercising the neural circuits devoted to skimming and multitasking while ignoring those used for reading and thinking deeply.”

So back to the Google search. Page one turns up an engaging testimonial to what multi-tasking can do for adaptive intelligence. With practice, we can become better multi-taskers. But the consensus appears to be that we give something up as champions of reflective intelligence if we sacrifice thinking for skimming.

So where is everybody: here, there, or everywhere?

News out of Penn State suggests that the search for alien intelligence is not going well, with the result that our species remains as mystified as the physicist Enrico Fermi was more than half a century ago when he posed his famous paradox: if the vasty depths of the universe are so unfathomably large, why should there not be multiple alien intelligences out there, including some so advanced that we would have heard from them by now?

Some of the more rudimentary explanations to have emerged in the years since seem sensible enough. The aliens are too far away, or just too intelligent to wish to have anything to do with what science fiction author Terry Bisson famously called “meat-based intelligence”. And then there is the furthest-outlier argument, which holds that in any probability distribution there must be a least likely among the still viable alternatives, and maybe we are it . . . and we are alone. That particular singularity represents a terrifyingly small number, but it is a bigger number than Penn State got to in counting up known alien intelligences.
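
To give that outlier a rough shape, here is a back-of-the-envelope, Drake-style calculation. Every parameter below is an illustrative guess, not data from the Penn State survey or anywhere else; the point is only that multiplying a handful of small fractions gets you to “maybe we are it” very quickly.

```python
# A Drake-style estimate of communicating civilisations in the galaxy.
# Every factor is an illustrative assumption, not a measured value.
star_formation_rate = 1.5            # new stars per year in the Milky Way
fraction_with_planets = 0.9          # fraction of stars with planetary systems
habitable_planets_per_system = 0.4
fraction_developing_life = 0.1
fraction_developing_intelligence = 0.01
fraction_that_communicate = 0.1
signalling_lifetime_years = 10_000

civilisations = (star_formation_rate
                 * fraction_with_planets
                 * habitable_planets_per_system
                 * fraction_developing_life
                 * fraction_developing_intelligence
                 * fraction_that_communicate
                 * signalling_lifetime_years)

print(f"Estimated communicating civilisations: {civilisations:.2f}")
# With these guesses the answer is about 0.5 -- squeeze the later fractions
# towards zero and "we are alone" stops looking so improbable.
```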

Maybe we are still being battered by Maslow’s hammer. Just as we thought of God as a watchmaker when timekeeping was the ascendant technology, so now we talk of the mind as software running on the hardware of the brain. Our paradigm of the here and now in the search for alien intelligence suggests that from here, there must be something “out there”. And maybe all the while the intelligence is indeed there, but it is here too, and also everywhere. Intelligence may be more of a quantum suffusion, and less of a singularity. On which McCartneyesque theme, let’s listen to Paul “squirting air through his meat”, as Bisson would describe it, and then ask whether meat-based intelligence isn’t actually pretty cool.

Intelligence: shifting up through the gears

Against the argument that we use only 10-20% of our brains, it might be helpful to extend the technology metaphor of the various software programmes that run on the brain – computation, consciousness, reflection, identity, imagination – by seeing their operation in terms of gears. Any manifestation of intellect stuck in first gear will only go so fast, however many revolutions per second it might generate. More efficiency and greater speed come from gearing up, and it is hard to imagine an engine that is screaming at the limits of a low gear going on to win a big race that must be run in top gear.
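
For readers who want the gearing arithmetic spelled out literally, a toy calculation (all the figures are invented for the metaphor, not taken from any real gearbox) shows why an engine at the redline in a low gear is still slow:

```python
# Road speed scales with engine revs divided by the overall gear reduction,
# so a low gear caps top speed no matter how hard the engine screams.
ENGINE_REDLINE_RPM = 7000
WHEEL_CIRCUMFERENCE_M = 2.0
FINAL_DRIVE = 3.5
GEAR_RATIOS = {"1st": 3.6, "2nd": 2.1, "3rd": 1.4, "4th": 1.0, "5th": 0.8}

for gear, ratio in GEAR_RATIOS.items():
    wheel_rpm = ENGINE_REDLINE_RPM / (ratio * FINAL_DRIVE)   # wheel revolutions per minute
    speed_kmh = wheel_rpm * WHEEL_CIRCUMFERENCE_M * 60 / 1000
    print(f"{gear}: about {speed_kmh:.0f} km/h at the redline")
# First gear tops out around 67 km/h; fifth reaches about 300 km/h
# from exactly the same revolutions per minute.
```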

Complicating matters more than a little is that these manifestations of intellect do not all gear up in the same way, at the same rate, simultaneously. Within just a few generations of conceiving that a computer might exhibit such brute computational power that it would surpass a human chess champion, Deep Blue triumphed. But while Garry Kasparov, the defeated champion, noted elements of creativity in the computer’s play, at no point then or since has it been noted that the computer displayed consciousness, or any sense of humour, irony, absurdity or identity.

An intriguing sidebar to the Deep Blue story occurred in Game One of the series when the computer made a move that was later determined by its creators to have been the result of a software bug. Imputing to the computer a degree of creative genius beyond his comprehension, Kasparov pocketed the win but was so unnerved that he failed to win another game. Perhaps one lesson from this anecdote is that human foibles can allow the operation of intellect to gear down as well as up.

Kurzweil as sire to his father

An extended interview with futurist Ray Kurzweil, published today as part of the Financial Times’ “Lunch with the FT” feature series, develops an intriguing riff on his well-known dedication to vanquishing disease and facilitating a future in which people can live, if not forever, then longer than they do now. It appears that a very personal prototype for his deliberations in this area is his own father, who died of heart disease almost half a century ago when Kurzweil was still a young man. He speaks of his ambition to create an avatar of his father built out of all the memories, information and artefacts he retains all these decades later, rendering a Kurzweil Senior “more like my father was than he would have been, had he lived.”

There should be at least two distinct and serious pauses for reflection here. The first is the titanesque reputation of a thinker who has done so much that he is by general reckoning the world’s “father of AI”. Ray Kurzweil talks persuasively of the coming revolutions in biotechnology and molecular nanotechnology that will facilitate the reprogramming and rebuilding of the entire human animal.

And the second is the possibly fathomless depths of the power of filial devotion.

One of the less reflected upon areas of cognition is the extent to which what we know, and how we know it, is impossible to qualify. To the extent that anyone who has been substantively lost to the world has become increasingly disengaged from all that made him who he was, what world would need recreating to restore him?

As the machines get smarter, might we get stupider?

As we contemplate the approach speed of The Singularity, would a comparator see the human intelligence factor as a constant against which the machine intelligence draws near with inexorable and increasing rapidity, passing finally in a cloud of dust with cartoon sound effects, vanishing like the Roadrunner into the cosmic distance? Or might the human component itself speed up or slow down: in the former instance through finding a way to go up a gear; or in the latter, by ceding mental capability on the way to a similarly inexorable dumbing down?

This is a somewhat sobering thought: not just that machines become more intelligent, artificially or otherwise, but that human cognitive development might in some important ways slow down or go into reverse. Consider a few of the ways in which, just in the space of a generation, we have come more and more to rely upon machines to do what our near ancestors might have taken for granted: doing a sum, working through a recipe, or finding our way across town.

As we push the button on the calculator, the Search key, or the Satnav, we might wonder at the mental processes we forsake so that machines can do the heavy lifting. Those of us fortunate enough to have done mental arithmetic exercises in school may no longer be conscious of the calculation needed to master 9 x 7: the number 63 just “pops” into mind. But for those who have to work at it, it may still be possible to look up, squint, and work through the sequence 9, 18, 27 and so on up to 63; or perhaps go the roundabout route: (10 x 7) – 7 = 63.
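
For the record, the two routes to 63 can be spelled out as code; the function names are mine, but the arithmetic is exactly what is described above.

```python
def skip_count(n: int, times: int) -> int:
    # The "9, 18, 27 ..." route: add n to a running total, one step at a time.
    total = 0
    for _ in range(times):
        total += n
    return total

def round_and_adjust(n: int, times: int) -> int:
    # The roundabout route: work with the easier multiplier one above n
    # (10 x 7 = 70), then take one group of 7 back off.
    return (n + 1) * times - times

print(skip_count(9, 7))        # 63
print(round_and_adjust(9, 7))  # (10 x 7) - 7 = 63
```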