Collaboration is the new competitive advantage

For years, if not decades, the buzzword of the worlds of innovation and enterprise, echoing through the lecture theatres of business schools and splattered across whiteboards in investment funds and marketing companies around the globe, has been disruption. Given the exponential growth in computing power, artificial intelligence and deep learning, it may well be that this awful word “disruption” will soon occupy the status of has-been. It will still exist, of course, but as a by-product of unprecedented advances rather than as the reverently regarded target of some small-minded zero-sum game.

Consider an article that appeared a few days ago on the Forbes website, asking rhetorically if the world-shifting changes that humanity requires, and to a large extent can reasonably expect to see, are likely to be achieved with a mind-set that is calibrated to “disrupting” existing markets. For each of the described challenges – climate change, energy storage, chronic diseases like cancer, and the vast horizons of potential in genomics, biosciences, and immunotherapies – collaboration rather than competition is emerging as the default setting for humanity’s operating system.

Mental models based upon cooperative networks will begin to replace the competitive hierarchies that only just managed to keep the wheels on the capitalist engine as it clattered at quickening speed through the industrial age, lurching from financial crisis to military conflict and back again, enshrining inequalities and degrading Earth’s ecosystem along the way. And why will things change?

Of course it would be nice to think that our species is at last articulating a more sustaining moral sense, but it won’t be that. It will simply be that the explosion of data, the insatiable demands upon our attention, and the febrile anxiety of dealing with the bright white noise of modern life will render our individual primate brains incapable of disrupting anything remarkable to any useful effect.

The Forbes article concludes with admiration for what the digital age has been able to achieve, at least for as long as the efficacy of Moore’s Law endured. That law is soon to be superseded, however, by the emerging powers of quantum and neuromorphic computing, with a consequent explosion of processing efficiency that will take our collective capabilities for learning and thinking far beyond the imaginings of our ancient grassland ancestors.

Working together we will dream bigger, and disrupt far less than we create.

Consciousness is not as hard as consensus

Any review of the current writings on consciousness turns up the idea, sooner or later, that it is no longer the hard problem that philosopher David Chalmers labelled it two decades ago. There has been a phenomenal amount of work done on it since, much of it by people who seem pretty clear, in fact, on what it means to them. What is clearly much harder is getting them to agree with one another.

One of the greater controversies of this year was occasioned by psychologist Robert Epstein, who declared in an article in Aeon magazine entitled “The Empty Brain” that brains do not in fact “process” information, and that the metaphor of the brain as a computer is bunk. He starts with an interesting premise, reviewing how, throughout history, human intelligence has been described in terms of the prevailing belief system or technology of the day.

He begins with God’s alleged infusion of spirit into human clay and progresses through the “humours” implicit in hydraulic engineering, the mechanisms of early clocks and gears, and the revolutions in chemistry and electricity that finally gave way to the computer age. And now we talk of “downloading” brains? Epstein isn’t having it.

Within days of the article’s appearance came a ferocious blowback, exemplified by the self-confessed “rage” of software developer and neurobiologist Sergio Graziosi, whose response is tellingly entitled “Robert Epstein’s Empty Essay”. The earlier failures of worldly metaphors, Graziosi argues, do not entail that computer metaphors are similarly wrong-headed, and he provides a detailed and comprehensive review of the very real similarities, his anger only occasionally edging through. He also provides some useful links to other responses to Epstein’s piece, for readers interested in sharing his rage.

For all this, the inherent weakness of metaphor remains: what is alike illuminates, but the dissimilarities obscure. The brain’s representations of the external world are not designed for accuracy; they are evolved for their hosts’ survival. Making this very point, in a more measured contribution to the debate, is neuroscientist Michael Graziano, writing in The Atlantic. His article, “A New Theory Explains How Consciousness Evolved”, tackles the not-so-hard problem from the perspective of evolutionary biology.

He describes how his Attention Schema Theory would be evolution’s answer to the surfeit of information flowing into the brain – too much, he says, to be “processed”. Perhaps, in the context of the Epstein/Graziosi dispute, he might better have said “assimilated”. Otherwise, full marks and thanks to Graziano.

Creationism is not just a what problem but a how problem

Controversy arising from the recent opening of the “Ark Encounter” in Kentucky (promising something “Bigger Than Imagination”) has excited dismay among the international scientific community, echoing everywhere from New Scientist to National Public Radio. These ancient creation myths have so firmly passed through the evidence mincer into the file marked “Of Anthropological Interest Only” as to be beyond the need for refutation here, as has the pinhead dance of prevarication over what is to be taken literally and what metaphorically: after all, who has the authority to determine which?

Two questions of greater significance concern the moral and the epistemological dimensions of the Ark fable. The first is particularly resonant because it touches upon the upper hand that religion feels it holds over atheism: where would humanity go for its ethics if it didn’t have religion to show it the way? On the evidence of this particular fable, the moral of the story appears to be that you must die by drowning unless you are privileged to be Noah, a member of his family, or one of the only two animals from each species that can leg it to the boat before the waters close in.

The second question may be more significant, at least cognitively: how do we know what we know? How did we come by that knowledge, then question it, revise it, and enhance it? Here is where humanity is so badly let down by its putatively holy books, in which no explanation is too risible for unquestioning belief to be compelled upon pain of eternal damnation, whatever evidence may emerge for more plausible explanations.

It is as if some ancient map had been handed down to posterity as an unerring guide to the world in all its flat majesty, promising that between the verdant coastline and the perils of the world’s edge, over which the oceans spill off and out into space, lie vast depths where there be dragons bigger than imagination. Onward march history and the science of cartography, and slowly we come by better maps that adhere more closely to a reality in which no traveller need ever fear slipping over the edge of the world, or being devoured by non-existent dragons.

But any insistence upon believing the old map makes landfall much less likely too, no matter how big the boat.

Humanity on the cusp of enhancement revolution

An article from the Pew Research Center takes a long look at the subject of human enhancement, reviewing the “scientific and ethical dimensions of striving for perfection.” The theme of transhumanism is getting a lot of media attention these days, and it was no surprise when Nautilus weighed in. One issue three months ago captured the ambivalence over aging: one article explained “why aging isn’t inevitable”, while another in the same issue argued that it is physics, not biology, that makes aging inevitable, because “nanoscale thermal physics guarantees our decline, no matter how many diseases we cure.” Hmmm . . .

Taking another perspective, a third Nautilus feature speculated on the old “forget adding years to life, focus on adding life to years” chestnut, asking if the concept of “morbidity compression” might mean that 90 will “become the new 60.”

On the day that this year’s Olympics kick off in Brazil, we can conclude our round-up of key articles with a fascinating contribution to the enhancement debate by Peter Diamandis of SingularityHUB, speculating on what Olympian competition might look like in a future “when enhancement is the norm.” And it is this last headline link that brings into sharp focus the major point on which most media commentaries on enhancement agree: the key word is “norm”.

Enhancement is in the natural order of things, and it never really manifests itself as a choice so long as it remains evolutionary: that is, so long as it moves so slowly that nobody much notices it. When change explodes with such momentum that nobody can fail to notice it, it begins to settle into being a new normal. And as Diamandis wraps up his extended thought experiment with a quick spin through synthetic biology, genomics, robotics, prosthetics, brain-computer interfaces, augmented reality and artificial intelligence, he concludes almost plaintively:

“We’ve (kind of) already started . . .”

As indeed we have. In today’s Olympian moment we can note that, whether or not human enhancement was part of “God’s plan” (as per the weakest section of the Pew article), the idea of Faster, Higher, Stronger certainly figured in the plans of Baron de Coubertin. Now, can this also mean Smarter? Left hanging in the otherwise excellent Pew piece is the question of whether a “better brain” might enable a better mind or, at least, a higher capacity for clearer and more creative thinking. Can we move our thinking up a gear?