Tag Archives: Religion

New Year offers promise for foxes and lunatics

As 2017 gears up for its short sprint to the inauguration of America’s next president, the mature media are recoiling at the prospect of the people whom the president-elect is gathering around him to help define the tone and agenda of his presidency. Whether we look at Energy, the Environment or Education – and that’s just the letter E – the impression is not so much that American political culture will be driven by incompetents as that the foxes and lunatics whose career missions have been to spread mayhem in specific areas have been put in charge of the very henhouses and asylums that the rest of us have been trying to protect from their rapacities.

A common theme in the media commentary is that this amounts to a war on science. It is certainly that, but it is more: a war on critical thinking, on expertise and, crucially, on empathy. What may prove most corrosive is the impact upon the key quality that separates human intelligence from its emerging machine correlate. It was empathy that emerged above so many other qualities when the cognitive explosion of some 60,000 years ago set the erstwhile Monkey Mind on its journey to the new and far more expansive cultural horizons of Homo sapiens.

Thinking in another person’s headspace is the cognitive equivalent of walking in another man’s shoes. It requires a theory of mind that allows for another creature’s possession of one, and an active consciousness that an evolving relationship with that other mind will require either conquest or collaboration. Academics can argue over the tactics and indeed over the practicality of “arguing for victory” or they can, in understanding the validity of empathy, agree with philosopher Daniel Dennett as he proposes some “rules for criticising with kindness”.

Amidst all the predictions for 2017 as to how human and artificial intelligence will evolve, we may hear more over the coming months about the relationship between intelligence (of any kind) and consciousness. To what extent will a Super Artificial Intelligence require that it be conscious?

And will we ever get to a point of looking back on 2016 and saying:

Astrology? Tribalism? Religion? Capitalism? Trump? What ever were we thinking? Perhaps in empathising with those who carried us through the infancy of our species, we will allow that at least for some of these curiosities, it was a stage we had to go through.

Consciousness is not as hard as consensus

Any review of the current writings on consciousness turns up the idea, sooner or later, that it is no longer the hard problem that philosopher David Chalmers labelled it two decades ago. There has been a phenomenal amount of work done on it since, much of it by people who seem pretty clear, in fact, on what it means to them. What is clearly much harder is getting them to agree with one another.

One of the greater controversies of this year was occasioned by psychologist Robert Epstein, who declared in an Aeon magazine article entitled “The Empty Brain” that brains do not in fact “process” information and that the metaphor of the brain as a computer is bunk. He starts from an interesting premise, reviewing how throughout history human intelligence has been described in terms of the prevailing belief system or technology of the day.

His survey runs from God’s alleged infusion of spirit into human clay, through the “humours” implicit in hydraulic engineering and the mechanisms of early clocks and gears, to the revolutions in chemistry and electricity that finally gave way to the computer age. And now we talk of “downloading” brains? Epstein isn’t having it.

Within days of the article’s appearance came a ferocious blowback, exemplified by the self-confessed “rage” of software developer and neurobiologist Sergio Graziosi, whose response is tellingly entitled “Robert Epstein’s Empty Essay”. The earlier failures of worldly metaphors, Graziosi argues, do not entail that computer metaphors are similarly wrong-headed, and he provides a detailed and comprehensive review of the very real similarities, his anger only occasionally edging through. He also provides some useful links to other responses to Epstein’s piece, for readers interested in sharing his rage.

For all this, the inherent weakness of metaphor remains: the likenesses illuminate, but the dissimilarities obscure. The brain’s representations of the external world are not designed for accuracy; they have evolved for their hosts’ survival. Making this very point, in a more measured contribution to the debate, is neuroscientist Michael Graziano, writing in The Atlantic. His article, “A New Theory Explains How Consciousness Evolved”, tackles the not-so-hard problem from the perspective of evolutionary biology.

He describes how his Attention Schema Theory would be evolution’s answer to the surfeit of information flowing into the brain – too much to be, he says, “processed”. Perhaps, in the context of the Epstein/Graziosi dispute, he might better have said “assimilated”. Otherwise, full marks and thanks to Graziano.

Creationism not just a what problem, but a how problem

Controversy arising from the recent opening of the “Ark Encounter” in Kentucky (promising something “Bigger Than Imagination”) has excited dismay among the international scientific community, echoing everywhere from New Scientist to National Public Radio. These ancient creation myths have so firmly passed through the evidence mincer into the file marked “Of Anthropological Interest Only” that they need no refutation here; nor does the pin-head dance of prevarication over what is to be taken literally and what metaphorically: after all, who has the authority to determine which?

Two questions of greater significance involve the moral and the epistemological considerations of the Ark fable. The first is particularly resonant because it touches upon the upper hand that religion feels it holds over atheism: where would humanity go for its ethics if it didn’t have religion to show it the way? On the evidence of this particular fable, the moral of the story appears to be that you must die by drowning unless you are privileged to be Noah, a member of his family, or one of the only two animals from each species that can leg it to the boat before the waters close in.

The second question may be the more significant, at least cognitively: how do we know what we know? How did we come by that knowledge, then question it, revise it and enhance it? Here is where humanity is so badly let down by its putatively holy books, in which no explanation is too risible for unquestioning belief to be compelled on pain of eternal damnation, whatever evidence may emerge for more plausible explanations.

It is as if some ancient map had been handed down to posterity as an unerring guide to the world in all its flat majesty, promising that between the verdant coastline and the perils of the world’s edge, over which the oceans spill off and out into space, lie vast depths where there be dragons bigger than imagination. Onward marches history and the science of cartography, and slowly we come by better maps that reveal the world more closely adhering to the reality in which no traveller need ever fear slipping over the edge of the world, or being devoured by non-existent dragons.

But any insistence upon adhering to a belief in the old map will make landfall a much less likely possibility too, no matter how big the boat.

Humanity on the cusp of enhancement revolution

An article from the Pew Research Center takes a long look at the subject of Human Enhancement, reviewing the “scientific and ethical dimensions of striving for perfection.” The theme of transhumanism is getting a lot of media attention these days, and it was no surprise when Nautilus weighed in, capturing the ambivalence over aging in a single issue three months ago: one article explained “why aging isn’t inevitable”, while another in the same issue argued that it is physics, not biology, that makes aging inevitable, because “nanoscale thermal physics guarantees our decline, no matter how many diseases we cure.” Hmmm . . .

Taking another perspective, a third Nautilus feature speculated on the old “forget adding years to life, focus on adding life to years” chestnut, asking if the concept of “morbidity compression” might mean that 90 will “become the new 60.”

On the day that this year’s Olympics kick off in Brazil, we can conclude our round-up of key articles with a fascinating contribution to the enhancement debate by Peter Diamandis of SingularityHUB, speculating on what Olympian competition might be like in the future “when enhancement is the norm.” And it is this last headline link that brings into sharp focus the major point on which most media commentaries on enhancement agree: the key word is “norm”.

Enhancement is in the natural order of things and never really manifests itself as a choice so long as it remains evolutionary: that is, moving so slowly that nobody much notices it. When change explodes with such momentum that nobody can fail to notice it, it begins to settle into being a new normal. And after a quick spin through what is happening in synthetic biology, genomics, robotics, prosthetics, brain-computer interfaces, augmented reality and artificial intelligence, Diamandis ends his extended thought experiment almost plaintively:

“We’ve (kind of) already started . . .”

As indeed we have. In today’s Olympian moment we can note that, whether or not human enhancement was part of “God’s plan” (as per the weakest section of the Pew article), the idea of Faster, Higher, Stronger certainly figured in the plans of Baron de Coubertin. Now, can this also mean Smarter? Left hanging in the otherwise excellent Pew piece is the question of whether a “better brain” might enable a better mind or, at least, a higher capacity for clearer and more creative thinking. Can we move our thinking up a gear?

Religion is impeding our cognitive development

An article in today’s Guardian wonders if, with the accession of the UK’s new Prime Minister, Theresa May, the Conservative Party might be “doing God” again. The writer ponders the sort of God this may be, suggesting that recent shifts in government policy may reflect an evolution in the culture: less fussing about people’s sexual behaviour, more focus on issues of social justice. The article does not consider the possibility that some, or indeed all, of these issues might articulate an effectively moral direction without any assistance from scripture.

Humanity is maturing, leaving the Bible behind with its atavistic obsession with controlling promiscuity (“Is God a silverback?”, indeed). The quiet determinations of science continue to reveal wonders in creation beyond the imaginings of our comparatively ignorant ancestors of two millennia ago, although there is no shortage of efforts to reverse engineer those imaginings for the amazement of the gullible. Witness the consternation of America’s Bill Nye (the “Science Guy”) when he recently visited a recreation of Noah’s Ark in Kentucky. It seems that the price of progress is still vigilance.

Back in the evolving world, we are about to witness a quantum enhancement in human intelligence that may exceed in its impact what the evolution of vision appears to have accomplished in the Cambrian explosion of 545 million years ago, according to this Financial Times feature on a stunning new exhibition at London’s Natural History Museum.

It is hard to see what formally organised religion might contribute to all of this going forward. It will not be enough to maintain a charade that a focus on good works, social justice and community cohesion is sufficient when any of those activities could as easily be pursued for their own sakes. What is more troubling is the potentially retardant effect of embracing the cognitive dissonance that comes with cherry-picking what is estimable from holy texts while accommodating in the darker recesses of our minds the egregious bits of a belief system that, to put it mildly, has outlasted its credibility.

What sort of brain do we wish to bequeath to our generations to come? If there is to be a new Cambrian-style explosion in what the human brain can do, it will not be aided by clinging on to the intellectually untenable while denying the means by which we may grasp new ways of knowing, and thinking, and becoming.

Time has ticked a heaven round the stars

In the green fuse of the young Dylan Thomas’ imagination we find the perfect description of how humanity’s knowledge of the universe has proceeded from mute awe to a better but still imperfectly informed wonder. All it took was time – and science – and our notion of heaven was transformed from the clumsy metaphor of celestial theme park to something far richer, more vast and various, and beautiful beyond comprehension. And most wondrous of all, we humans are not only actively immersed within this heaven, albeit on the nanoscale, but we are conscious of being so, and of being so in the here and now.

It is easy for us to see today how religions ignite. While all other species seem happy to proceed from meal to meal without any need for meaning along the way, Homo sapiens has sought explanations, patterns, and a sense of its place above and beyond the brutish rants and ruttings of daily life. Given what we knew about what we flattered ourselves to suppose was the universe two thousand years ago, it is not surprising that the revelations and rules comprising the Pentateuch, Bible and Koran emerged as the defining Operating Manual for Life on Earth. And understanding more now about what we didn’t know then, and given our inbred venality and credulity, it is even less surprising that these religions caught on.

With what has happened over the last two millennia – and, in science and technology, particularly over the last two centuries – it would be strange if anyone were now to propose one of the “great faiths” as a credible belief system for today’s world. (Although, as a reminder of the limpet-like tenacity of human credulity, it is less than two centuries since the appearance of the “Book” of Mormon.) But on balance, our cognitive horizons continue to expand and, with them, our aspirations for new frontiers of intelligence and wonder in an enlarging universe. We may no longer wonder at the answers conceived by religion in the infancy of our species but, in our progress beyond I Corinthians 13:11 (putting away childish things) to the irony of John 8:32 (the truth shall make you free), we open up new vistas of potential in the flowering of human intelligence. The truth can indeed set us free, although perhaps not in the manner that the Jesus of scripture intended.

Taking the temperature of atheism

An interesting few weeks of Atheism in the News suggests that this is a good time to take its temperature and ask what the current state of play implies for the progress of human knowledge. And where are things? Three stories over the past seven days: renowned atheist Christopher Hitchens may, it is claimed, have had a change of mind on his deathbed; comedian contrarians Bill Maher and Michael Moore are contemplating a documentary film, to be called “The Kings of Atheism”, featuring well-known comedians on a stand-up tour of the American Bible Belt; and a $2.2 million donation will endow the USA’s first academic chair for the “study of atheism, humanism and secular ethics” at the University of Miami.

The first story is manipulative nonsense. Anyone who knew Hitchens or read him on religion will appreciate that the wit he exercised in his study of the science of belief will survive him for any realistic definition of eternity. He consolidated “what oft was thought, but ne’er so well expressed”, and with such power that no tawdry revisionism can undermine it. If we accept a central thesis of his thinking – that religious devotion inhibits the critical faculties of our species and puts at risk its cognitive evolution – there is a lot of work still to be done. That work cannot include indulging wishful thinking sustained by confirmation bias, or the spectacle of a man advertising himself as Hitchens’ friend making a sacrifice of that friendship on the altar of his greed.

The film idea? It must suggest some evolution of our species that within just a few centuries of heretics being flayed and burned for blasphemy, such a project might be announced on national television. But we might still think: good luck with that.

Most engaging is the question of the culture of the newly endowed chair. Will its terms of atheistic reference be reactive and obsessed with the old perception of atheists as humourless human husks with neither morals nor magic? Or will something emerge of the humanity that might have evolved if science had taken hold sooner, superseding religion with its hectoring certainty that, in Hitchens’ memorable words, “if you will abandon your critical faculties, a world of idiotic bliss can be yours.”

Babel, hubris, and humanism unchained

Scarcely a week goes by without the BAM content feeds getting a ping or two from the Singularity Symposium, whose key focal points are Artificial Intelligence and Transhumanism. The latter is defined as “the belief that technology can allow us to improve, enhance and overcome the limits of our biology”. For an inspiring two minutes from the typically electric “performance philosopher” Jason Silva on the no-limits frontiers of transhumanism, try this Shot of Awe.

Paperclips or thumbtacks: the perils of SAI changing its mind

Paranoia and the blogosphere go together like a horse and carriage, never more so than on the subjects of nasty government or the dangers of AI. Aspiring eye-swivellers can replace that “or” with “and” and while away many a happy morning reading up on Jade Helm. Saner people will delight in happening across a blogger such as Sweden’s Olle Häggström, whose most recent post offers a thoughtful commentary on Nick Bostrom’s Superintelligence, providing some useful links along the way, as well as an intriguing footnote on “goal-content integrity”.

More than just a thought experiment, the possibility that an AI might change its mind on its journey to Super Artificial Intelligence is a cornerstone of anxiety over humanity’s future. Having said that, as thought experiments go, it is an existential doozy. One can set out, as Bostrom does, by speculating on an AI designed to optimise the production of paperclips, filling the universe with paperclips converted from anything that once contained carbon . . . such as people. But might that SAI conceivably change its mind instead and set about producing thumbtacks? Possibly not – not if it had been programmed to produce paperclips.

Goal-content integrity can get trickier when the optimisation target is conceptually fuzzier than a paperclip: for example, a human soul. Imagine our SAI setting out to maximise the number of human beings who get into Heaven, only to discover – if only for the purposes of this thought experiment – that Heaven was always itself a human construct created to optimise our good behaviour. Might the SAI have an epiphany? “Ah,” it might think: “if Heaven were just a means to an end, why not optimise the conditions that encourage the best behaviour in the most people in the first place?”
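For the concretely minded, here is a minimal sketch of goal-content integrity in Python. Everything in it – the toy utility function, the crude one-step forecast, the option labels – is invented for illustration and comes from neither Bostrom nor Häggström; the point is only that an agent evaluates even the option of rewriting its own goal using the goal it currently holds.

```python
# Toy sketch of "goal-content integrity" (our construction, not Bostrom's formalism).
# The agent scores candidate futures with the utility function it holds NOW,
# including futures in which its own goal has been rewritten.

def paperclip_utility(world):
    """Current goal: more paperclips is strictly better."""
    return world["paperclips"]

def forecast(world, goal):
    """Crude forecast: whichever goal the agent ends up holding, it maximises it."""
    future = dict(world)
    future[goal] += 1_000_000
    return future

world = {"paperclips": 0, "thumbtacks": 0}
options = {
    "keep paperclip goal": forecast(world, "paperclips"),
    "rewrite self to make thumbtacks": forecast(world, "thumbtacks"),
}

# Evaluated by the *current* goal, self-modification scores zero paperclips,
# so the agent preserves its goal content.
best = max(options, key=lambda name: paperclip_utility(options[name]))
print(best)  # -> keep paperclip goal
```

Swap “paperclips” for something as fuzzy as a saved soul and the same loop still runs; the trouble is that nothing so countable remains for the utility function to count.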

Robots may never see Jesus in a bun

Neural networks – human constructions of computer code devised to act and interact similarly to neurons in the human brain – are encouraging bemusement, if not downright befuddlement, in their human creators. As this fascinating article in Nautilus, entitled “Artificial Intelligence Is Already Weirdly Inhuman”, makes clear, or at least clearish, you can set the algorithms running without knowing where they will end up. What may be even weirder, in the spirit of that title, is that where they end up may make sense without any human being the wiser as to how they got there.

Mirroring the human brain’s pattern – perceive, then interpret, then produce an output – an algorithm devised to distinguish a cheetah from a vehicle will develop to the point where it is right more often than a human, able to tell beast from bus when all the human sees is a smear of pixels. What is even harder for the human is to reverse engineer the process by which the ever more complex, machine-produced lines of code take the computer’s interpretative capabilities beyond those of the human. Echoing Star Trek: “It’s intelligence, Jim, but not as we know it.”
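As a picture of the shape of such a pipeline, here is a minimal sketch in Python. It is not the system the Nautilus article describes: the image size, the layer width and the two labels are all invented, and the weights are random rather than trained. In a real network, training would tune millions of such weights, and it is precisely those learned numbers that defy reverse engineering.

```python
# A toy perceive -> interpret -> output pipeline (illustration only; untrained).
import numpy as np

rng = np.random.default_rng(0)

# Perceive: a flattened "smear of pixels" stands in for an input image.
pixels = rng.random(64)

# Interpret: a hidden layer of weights. After training, these numbers are
# exactly the part that no human can easily reverse engineer.
W_hidden = rng.standard_normal((16, 64))
hidden = np.tanh(W_hidden @ pixels)

# Output: one score per class, squashed into probabilities with a softmax.
W_out = rng.standard_normal((2, 16))
scores = W_out @ hidden
probs = np.exp(scores) / np.exp(scores).sum()

labels = ["cheetah", "bus"]
print(labels[int(np.argmax(probs))], probs)
```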

We should bear in mind that by the time machine intelligence is determined to match or exceed human intelligence, the problem of apples and pears might render the comparison meaningless. And it might fall to the machines to tell us, rather than the other way around. “Listen, people: you will not only see the image of Jesus in a hot cross bun, when there are no consensual criteria for what the man even looked like, but you then conclude that he is talking to you! What sort of intelligence is that?”