Tag Archives: Thinking

Open letter to PM May: Think to the future

Are you certain that you have a coherent vision for the direction of our country, and a steady hand on the tiller as we plough forward? Anyone watching the news over the past year has experienced growing dismay as key problems spin rapidly beyond the control not only of the beleaguered citizenry but also of the stewards of society, whose remit for addressing its problems has evolved over centuries.

As a result of two triumphs of populist will over reasoned circumspection, two of the world’s most significant politicians – each one possessing a uniquely problematic mandate from their electorates – met recently in Washington DC to discuss a platform for cooperation in the future in general and, in particular, to establish the foundation for a trade deal.

One distinct difference between these politicians is that one is favoured by her upbringing within a culture that has learned, and is still learning, the enduring merits of exercising soft power over hard. The other politician is an unashamed practitioner of the coarse brutalities and darker arts of hard power.

In the course of this meeting a State Visit invitation was extended that was neither demanded by the circumstances nor consistent with long-established precedent. What has been broadly identified as a collusive and appeasing act had not even the fig leaf of pathetic and transient glistering gain. Within a week of the invitation being extended, almost two million signatures had been secured here on a petition decrying that invitation, prompting this reply from your government’s website:

“HM Government believes the President of the United States should be extended the full courtesy of a State Visit . . . HM Government recognises the strong views expressed by the many signatories of this petition, but does not support this petition . . . This invitation reflects the importance of the relationship between the United States of America and the United Kingdom.”

The “strong views” being expressed are more than emetic eruptions of dismay. They arise from millennia of reflections on the constitution of effective relationships, and on what defines the “importance” of sustaining them. They reflect lessons of more recent collisions between collusion and principle, absorbed by people still living. Within a mere lifetime past we have witnessed the price to be paid for nurturing the nursery steps of autocratic egomaniacs simply because we think we can do business with them.

In a world in which “the best lack all conviction while the worst are full of passionate intensity”, have you reflected on the well of inspiration to be derived from a thousand years of British history? Has enough not transpired that we can sense posterity’s judgments on rulers who sacrifice hard-won ideals and long-term prosperity for unseemly grasping after the petty inducements of what glitters today?

At a time when you are on a determined course to re-define the concept of national self-possession, you might re-evaluate the prospects for Britain in selling the national soul not through adherence to a grander plan or higher ideal, but through headlong slavering after association with a regime as dystopian, cognitively chaotic and mendacious as Donald Trump’s.

New Year offers promise for foxes and lunatics

As 2017 gears up for its short sprint to the inauguration of America’s next president, the mature media are recoiling at the prospect of the people whom the president-elect is gathering around him to help define the tone and agenda of his presidency. Whether we look at Energy, the Environment, or Education – and that’s just the letter E – the impression is not so much that American political culture will be driven by incompetents, as that the foxes and lunatics whose career missions have been to spread mayhem in specific areas have been put in charge of the very henhouses and asylums that the rest of us have been trying to protect from their rapacities.

A common theme in the media commentary is that this amounts to a war on science. It is certainly this, but it is more: it is a war on critical thinking, on expertise and, critically, on empathy. What may prove most corrosive is the impact upon the key quality that separates human intelligence from its emerging machine correlate. It is empathy that emerged above so many other qualities when the cognitive explosion of some 60,000 years ago set the erstwhile Monkey Mind on its journey to the new and far more expansive cultural horizons of Homo sapiens.

Thinking in another person’s headspace is the cognitive equivalent of walking in another man’s shoes. It requires a theory of mind that allows for another creature’s possession of one, and an active consciousness that an evolving relationship with that other mind will require either conquest or collaboration. Academics can argue over the tactics and indeed over the practicality of “arguing for victory” or they can, in understanding the validity of empathy, agree with philosopher Daniel Dennett as he proposes some “rules for criticising with kindness”.

Amidst all the predictions for 2017 as to how human and artificial intelligence will evolve, we may hear more over the coming months about the relationship between intelligence (of any kind) and consciousness. To what extent will a Super Artificial Intelligence require that it be conscious?

And will we ever get to a point of looking back on 2016 and saying:

Astrology? Tribalism? Religion? Capitalism? Trump? What ever were we thinking? Perhaps in empathising with those who carried us through the infancy of our species, we will allow that at least for some of these curiosities, it was a stage we had to go through.

Intelligence does not grow in a petri dish

As there are neither agreed rules nor a generally accepted definition of intelligence, nor any consensus on which consequences of human or machine behaviour betoken intelligence, natural or artificial, it will be difficult to measure the surpassing of any point of singularity at which machine intelligence matches and exceeds our own.

It may well prove to be the case that, when we think we have got there, we will have supreme exemplars on both sides of the biological/non-biological intelligence divide asking us if it any longer matters. And as our species approaches a moment of truth that may never obviously arrive, there will be a growing chorus of voices worrying whether a bigger question than the definition of intelligence is the definition of the good human, when so much of what we might see as intelligence in its natural state is perverted in the course of action by the festering agency of the seven deadly sins, animated by fear and enabled by ignorance.

Given the wide range of environments within which intelligence can reveal itself, and the vast spectrum of actions and behaviours that emerge within those environments, it may be the very definition of the fool’s errand to attempt an anywhere anytime definition of intelligence itself. We can learn only so much by laboratory-based comparisons of brains and computers, for example, balancing physiological correlations in the one with mechanistic causations in the other.

Glimmerings of clarity emerge only when one intelligent agent is pitted against another in a task-oriented setting, the victory of either one being equated with some sense of superior intelligence, when all that has happened is that an explicit task is better addressed once its parameters can be articulated.
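
By way of illustration, here is a minimal sketch – in Python, with a wholly hypothetical task and pair of agents, reflecting no particular benchmark or system – of how such a head-to-head contest reduces “intelligence” to fit with the task’s stated parameters.

    # A hypothetical head-to-head: an agent built around the task's own
    # parameters beats one that knows nothing about them, and "victory"
    # measures nothing deeper than that fit.
    def task_score(guess, target=42, tolerance=5):
        """The task is fully articulated: score closeness to a known target."""
        return max(0, tolerance - abs(guess - target))

    def agent_tuned(history):
        """An agent shaped around the task's stated parameters."""
        return 42

    def agent_general(history):
        """An agent with no knowledge of those parameters."""
        return sum(history) // len(history) if history else 0

    history = []
    scores = {"tuned": 0, "general": 0}
    for _ in range(10):
        for name, agent in (("tuned", agent_tuned), ("general", agent_general)):
            guess = agent(history)
            scores[name] += task_score(guess)
            history.append(guess)

    print(scores)  # "tuned" wins, which tells us about the task, not about intelligence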

What appears to distinguish human intelligence in the evolutionary sense is the capability to adapt not only in the face of threats and existential fear, but in anticipation of imagined projections of all manner of dangers and terrors. We hone our intelligence in facing down multiple threats; we achieve wisdom by facing down the fear that comes with being human.

Fear is not innate to the machine but it is to us, as Franklin D Roosevelt understood. However machines progress to any singularity, humanity’s best bet lies in understanding how the conquering of fear will enhance our intelligence and our adaptive capabilities to evolve through the singularity, and beyond.

Ha ha bloody ha, AI is getting into scary

A feature on Motherboard (and available from quite a few sites on this particularly frightening day in the calendar) informs readers that “MIT is teaching AI to Scare Us”. Well, that’s just great. Anyone insufficiently nervous about the potential perils of AI itself, or not already rendered catatonic in anxiety over the conniptions of the American election, can go onto the specially confected Nightmare Machine website and consult a timeline that advances from the Celtic stirrings of Hallowe’en two millennia ago to this very year, in which AI-boosted “hell itself breathes out contagion to the world.”

The highlight – or murky darkfest – of the site is the interactive gallery of faces and places, concocted and continually refined by algorithms seeking to define the essence of scary. So much of what we sense about horror is rather like our sense of what makes humour funny: it is less deduced from core principles than induced from whatever succeeds in eliciting the scream of terror or laughter. It cannot be a surprise, therefore, that the Nightmare Machine mission is proving perfect for machine learning to get its artificial fangs into. Website visitors rank the images for scariness and, the theory goes, the images get scarier.
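
For the curious, a minimal sketch of that rank-and-refine loop might look like the following – a simple keep-the-scariest-and-mutate scheme, in which generate_variant and crowd_score are hypothetical stand-ins; this is an illustration of the general idea, not the MIT team’s actual pipeline.

    import random

    def generate_variant(image):
        """Hypothetical stand-in: produce a perturbed version of an image."""
        return image + "+"

    def crowd_score(image):
        """Hypothetical stand-in: average scariness rating gathered from visitors."""
        return random.uniform(0.0, 10.0)

    def refine(pool, rounds=5, keep=10, offspring=2):
        """Keep the images visitors rate scariest, then breed variants of them."""
        for _ in range(rounds):
            ranked = sorted(pool, key=crowd_score, reverse=True)
            survivors = ranked[:keep]
            pool = survivors + [generate_variant(img)
                                for img in survivors
                                for _ in range(offspring)]
        return sorted(pool, key=crowd_score, reverse=True)

    scariest = refine(["image_%d" % i for i in range(30)])
    print(scariest[:3])  # the current front-runners in the race to the bottom of the id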

Another school of thought, reflected in articles like this piece in Salon on the creepy clown phenomenon, sees the fright not so much in what others find frightening as in what serves as a projection of our own internal terrors. The clowns and gargoyles that stalk the political landscape are to a large extent projections of ourselves: of our own deepest fears, for the more empathetic among us; or of simple avenging avatars, for the morally bereft or culturally dispossessed.

When AI moves beyond its current picture recognition capabilities into deeper areas of understanding our own inner fears and darkest thoughts, the ultimate fright will no longer lie in some collection of pixels. It will seep from the knowing look you get from your android doppelganger — to all intents and purposes you to the very life — as your watching friend asks, “Which you is actually you?” Your friend doesn’t know, but you know, and of course it knows . . . and it knows that you know that it knows . . .

Rosetta, Peres and Trump: a study in contrasts

On the day that the Rosetta mission reached a deliberate and lonely climax on a distant comet, and Shimon Peres was buried in Israel, we saw the peaking of two great narrative arcs that define so much of the glory of what it means to be human. The first represents another great triumph of science: a research journey that chased down a distant comet and observed it in closer detail than our species had ever managed as it flew. The legacy is a mountain of data for scientists to assimilate for decades to come, following the last pulse of intelligence from the expired spacecraft itself.

The second is up there with the Mandela story: Shimon Peres, international statesman and Israeli icon, a man of peace who could bring the planet’s greatest and best to attend his funeral. But like Mandela before him, Peres shines especially as a man whose odyssey took him through violence to an understanding that there is more security and happiness in peace than there is in war. Tough getting there, tough staying there, but worth the effort – and inspiring to everyone who believes that as monkeys became human, so humans may one day become something better yet.

Rosetta and Peres, science and statesmanship, collaborate on this day to remind humanity of the benefits of evolutionary progress.

Agnotologist Donald Trump stands apart from both. He too has become an icon: not of progress and hope but of the wages of ignorance, the triumphs of fear and bias, the submission of means to ends and the subversion of truth to the primacy of the pre-ordained outcome. While he himself represents no triumph of evolution, he at least is prompting reflections on how the human mind works (or doesn’t), particularly in its possible impact on other minds.

Another Donald once bemused the world with his musings on “known knowns” – the things we know that we know. He distinguished them from things we know we don’t know, and the unknown things that remain unknown to us. In ignoring the fourth permutation – the unknown knowns – The Donald that was Rumsfeld overlooked the very patron saint of ignorance.

So many things were known to Shimon Peres, and are known to contemporary science, that will forever be unknown to Donald Trump. His universe of ignorance remains as bleak and alien and dead as the distant comet with which humanity has at least established a first connection.

Algorithms cannot really know you if you don’t

Count the number of times you notice stories about how the various engines of AI – the algorithms, the machine learning software burrowing ever deeper into the foibles of human behaviour – are getting “to know you better than you know yourself.” What started as variations on “You liked pepperoni pizza, here’s some more” evolved quickly into “People into pizza also love our chocolate ice cream” and on to special deals for discerning consumers of jazz, discounted National Trust membership, and deals on car insurance.

Emboldened as the human animal always is by early success, the algo authors were bound to make the small leap from bemusing purchasing correlations to more immodest claims. Hence the boast of the data scientist cited in the online journal Real Life, claiming that the consumer is now being captured “as a location in taste space”.
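
What might “a location in taste space” mean in practice? A minimal sketch, assuming each consumer is reduced to a vector of preference scores and “people like you” are simply the nearest vectors; the taste dimensions and profiles below are invented for illustration, not drawn from Real Life or any retailer’s model.

    import math

    # Hypothetical taste dimensions: [pizza, jazz, graphic novels, Soviet cinema]
    profiles = {
        "you":       [0.9, 0.7, 0.1, 0.8],
        "neighbour": [0.8, 0.6, 0.2, 0.7],
        "stranger":  [0.1, 0.2, 0.9, 0.0],
    }

    def cosine_similarity(a, b):
        """Angle-based closeness of two taste vectors (1.0 = identical direction)."""
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    # Rank everyone else by how close they sit to "you" in taste space.
    you = profiles["you"]
    ranked = sorted(((name, cosine_similarity(you, vec))
                     for name, vec in profiles.items() if name != "you"),
                    key=lambda pair: pair[1], reverse=True)
    print(ranked)  # the "neighbour" outranks the "stranger"

Everything, of course, hangs on which dimensions someone has decided count as taste – which is precisely where the doubts that follow begin.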

Advocates for algorithms will smile with the rest of us at the anecdote about how the writer’s single imprudent purchase of a graphic novel inspired Amazon to morph from the whimsy of a one-night stand into the sweating nightmarish stalker from hell; of course, they will claim that algorithms will only get better at inferring desires from behaviours, however seemingly complex.

The writer makes a very good case for doubting this, however, going into some detail on how the various dark promptings of his delight in Soviet cinema of the 1970s are unlikely to excite an algorithmic odyssey to the comedic treatments of pathological sadness in the original Bob Newhart Show.

And yet: the witty sadsack being as likely to emerge in Manhattan as in Moscow, it is not inconceivable that algorithms might evolve to a point of sifting through the frilly flotsams and whimsical whatevers of daily life in multiple dimensions of time and space, to home in on the essentially miserable git who is Everyman. But this is to assume some consistency of purpose to miserable gitness (indeed to any manifestation of human behaviour), reckoning that there is no git so miserable that he ceases to know his own mind. And here, Aeon magazine weighs in to good effect.

There are so many layerings of self and sub-self that implicit bias runs amok even when we run our self-perceptions through the interpretive filters of constantly morphing wishful thinking. So know yourself? It may be that only The Shadow Knows.

Not being a number does not make you a free man

Having listened last week to futurist Yuval Noah Harari talking at London’s Royal Society of Arts about his new book, Homo Deus, I am wondering how a conversation might go between Harari and The Prisoner. Next year marks 50 years since the iconic television series first broadcast what has become one of the catchcries of science fiction: “I am not a number: I am a free man!” Five decades on, the Guardian review of Harari’s book is sub-titled “How data will destroy human freedom”.

A fundamental difference between Harari’s hugely successful Sapiens and his new book is that the former involves reflections on how humanity has made it this far, whereas the new title is a speculation on the future. The former is rooted in memory; the latter involves conjectures that shift on the sands of uncertain definitions, as the above-linked Guardian review of Harari’s latest book reveals. “Now just hold on there” moments abound, as for example:

“Evolutionary science teaches us that, in one sense, we are nothing but data-processing machines: we too are algorithms. By manipulating the data we can exercise mastery over our fate.”

Without having the peculiarities of that “one sense” explained, it is hard to absorb the meaning of words like “nothing”, “manipulating” and “mastery”. Words matter, of course, and there are perils attendant upon concluding too much about human identity through the links that are implicit in lazily assumed definitions.

What happens to the god-fearing woman when she discovers there is no God? Is the workingman bereft when there is no longer any work? If people refuse to accept the imprisonment of numbers assigned to them by other people, are they thus necessarily free? How much is freedom determined not by actions, but by thoughts? And critically: if our thinking is clearer and more careful, can we be more free?

In the Q&A that concluded the RSA event, Harari missed an opportunity when he was asked about the future prospects of education. What will we teach children in the data-driven future of Super Artificial Intelligence? Interestingly, neither maths nor sciences got a mention, and it seemed we might just have to see when the future arrives. But it must be true that a far higher standard in the teaching of reasoning and critical thinking will be essential, unless humanity is content to contemplate an eternal bedlam of making daisy chains and listening to Donovan.

Choose: your country back or the future now

It has been a summer of unworthy frenzies, with the forces of conservatism and pessimism crying to have their country back or made great again. On the other side, characterised by the throwbacks as themselves the champions of “Project Fear”, were those who deny that mankind is on a doomed course. More positively, more thoughtfully: they remain adherents to a belief in the powers of education, clear thinking and focused choices. Of moving forward, and not back to our future.

One of the more frequently referenced books in recent weeks has been Progress: Ten Reasons to Look Forward to the Future, by Cato Institute senior fellow Johan Norberg. Favourably cited by Simon Jenkins in The Guardian, and by an editorial in The Economist, Norberg’s book sets out the case for how much, and how rapidly, the world is improving – at least from the perspective of its human masters; how much the case for atavistic pessimism is fed by ignorance and greed (much of it encouraged by a complicit media); and most inspiringly, how the brightness of our future is defined by the potential in our accumulating intelligence. The Economist piece concludes:

“This book is a blast of good sense. The main reason why things tend to get better is that knowledge is cumulative and easily shared. As Mr Norberg puts it, ‘The most important resource is the human brain…which is pleasantly reproducible.’”

By timely coincidence, intelligence both human and artificial has weighed in over the past week with positive expectations of our future. An article in MIT Technology Review is entitled “AI Wants to Be Your Bro, Not Your Foe” – possibly unsettling for those who might see either alternative as equally unsavoury, but its heart is in the right place. It reports on a Stanford University study on the social and economic implications of artificial intelligence, and on the newly launched Center for Human-Compatible Artificial Intelligence at UC Berkeley. Both initiatives are cognisant of the dangers of enhanced intelligence, but inspired by the vast potential in applying it properly.

For a shot of pure-grade optimism to finish: among the five inspiring applications of exponential technologies that lit up Singularity University’s recently concluded Global Summit was one called “udexter”. Artificial intelligence is being deployed to address the challenges of unemployment arising from . . . the advances of artificial intelligence. It promises to counterbalance the decline in “meaningless jobs” by unleashing the world’s creativity.

Collaboration is the new competitive advantage

For years if not for decades, the buzzword in the worlds of innovation and enterprise, echoing through the lecture theatres of business schools and splattered across whiteboards in investment-fund and marketing companies around the globe, has been disruption. Given the evolving world of exponential growth in computing power, artificial intelligence and deep learning, it may well be that this awful word “disruption” will soon occupy the status of has-been. Of course it will still exist, but as a by-product of unprecedented advances rather than a reverently regarded target of some small-minded zero-sum game.

Consider an article that appeared a few days ago on the Forbes website, asking rhetorically if the world-shifting changes that humanity requires, and to a large extent can reasonably expect to see, are likely to be achieved with a mind-set that is calibrated to “disrupting” existing markets. For each of the described challenges – climate change, energy storage, chronic diseases like cancer, and the vast horizons of potential in genomics, biosciences, and immunotherapies – collaboration rather than competition is emerging as the default setting for humanity’s operating system.

Mental models based upon cooperative networks will begin replacing the competitive hierarchies that only just managed to keep the wheels on the capitalist engine as it clattered at quickening speeds through our Industrial age, lurching from financial crisis to military conflict and back again, enshrining inequalities as it went and promoting degradations of Earth’s eco-system along the way. And why will things change?

Of course it would be nice to think that our species is at last articulating a more sustaining moral sense, but it won’t be that. It will simply be that the explosion of data, insatiable demands upon our attention, and the febrile anxieties of dealing with the bright white noise of modern life will render our individual primate brains incapable of disrupting anything remarkable to anything like useful effect.

The Forbes article concludes with admiration for what the digital age has been able to achieve, at least for as long as the efficacy of Moore’s Law endured. Moore’s Law is soon to be superseded, however, by the emerging powers of quantum and neuromorphic computing, with the consequent explosion of processing efficiency that will take our collective capabilities for learning and for thinking far beyond the imaginings of our ancient grassland ancestors.

Working together we will dream bigger, and disrupt far less than we create.

Consciousness is not as hard as consensus

Any review of the current writings on consciousness turns up the idea, sooner or later, that it is no longer the hard problem that philosopher David Chalmers labelled it two decades ago. There has been a phenomenal amount of work done on it since, much of it by people who seem pretty clear, in fact, on what it means to them. What is clearly much harder is getting them to agree with one another.

One of the greater controversies of this year was occasioned by psychologist Robert Epstein, declaring in an Aeon magazine article entitled The Empty Brain that brains do not in fact “process” information, and that the metaphor of brain as computer is bunk. He starts with an interesting premise, reviewing how, throughout history, human intelligence has been described in terms of the prevailing belief system or technology of the day.

He begins with God’s alleged infusion of spirit into human clay and progresses through the “humours” implicit in hydraulic engineering, to the mechanisms of early clocks and gears, and on to the revolutions in chemistry and electricity that finally gave way to the computer age. And now we talk of “downloading” brains? Epstein isn’t having it.

Within days of his article’s appearance came a ferocious blowback, exemplified by the self-confessed “rage” of software developer and neurobiologist Sergio Graziosi’s response, tellingly entitled “Robert Epstein’s Empty Essay”. The earlier failures of worldly metaphor do not entail that computer metaphors are similarly wrong-headed, and Graziosi provides a detailed and comprehensive review of the very real similarities, his anger only sometimes edging through. He also provides some useful links to other responses to Epstein’s piece, for people interested in sharing his rage.

For all this, the inherent weakness in metaphor remains: what is like, illuminates; but the dissimilarities obscure. The brain’s representations of the external world are not designed for accuracy, but are evolved for their hosts’ survival. Making this very point, in a more measured contribution to the debate, is neuroscientist Michael Graziano, writing in The Atlantic. His article, “A New Theory Explains How Consciousness Evolved”, tackles the not-so-hard problem from the perspective of evolutionary biology.

He describes how his Attention Schema Theory would be evolution’s answer to the surfeit of information flowing into the brain – too much to be, he says, “processed”. Perhaps, in the context of the Epstein/Graziosi dispute, he might better have said “assimilated”. Otherwise, full marks and thanks to Graziano.