Tag Archives: Learning

New Year offers promise for foxes and lunatics

As 2017 gears up for its short sprint to the inauguration of America’s next president, the mature media are recoiling at the prospect of the people whom the president-elect is gathering around him to help define the tone and agenda of his presidency. Whether we look at Energy, the Environment, or Education – and that’s just the letter E – the impression is not so much that American political culture will be driven by incompetents, as that the foxes and lunatics whose career missions have been to spread mayhem in specific areas have been put in charge of the very henhouses and asylums that the rest of us have been trying to protect from their rapacities.

A common theme in the media commentary is that this amounts to a war on science. It is certainly this, but it is more: it is a war on critical thinking, on expertise and, crucially, on empathy. What may prove most corrosive is the impact upon the key quality that separates human intelligence from its emerging machine correlate. It is empathy that emerged above so many other qualities when the cognitive explosion of some 60,000 years ago set the erstwhile Monkey Mind on its journey to the new and far more expansive cultural horizons of Homo sapiens.

Thinking in another person’s headspace is the cognitive equivalent of walking in another man’s shoes. It requires a theory of mind that allows for another creature’s possession of one, and an active consciousness that an evolving relationship with that other mind will require either conquest or collaboration. Academics can argue over the tactics, and indeed over the practicality, of “arguing for victory”; or they can, in recognising the validity of empathy, agree with philosopher Daniel Dennett as he proposes some “rules for criticising with kindness”.

Amidst all the predictions for 2017 as to how human and artificial intelligence will evolve, we may hear more over the coming months about the relationship between intelligence (of any kind) and consciousness. To what extent will a Super Artificial Intelligence require that it be conscious?

And will we ever get to a point of looking back on 2016 and saying:

Astrology? Tribalism? Religion? Capitalism? Trump? Whatever were we thinking? Perhaps in empathising with those who carried us through the infancy of our species, we will allow that, at least for some of these curiosities, it was a stage we had to go through.

Intelligence does not grow in a petri dish

As there are neither agreed rules nor a generally accepted definition of intelligence, nor any consensus on which consequences of human or machine behaviour betoken intelligence, natural or artificial, it will be difficult to measure the surpassing of any point of singularity at which machine intelligence matches and exceeds our own.

It may well prove to be the case that, when we think we have got there, we will have supreme exemplars on both sides of the biological/non-biological intelligence divide asking us whether it any longer matters. And as our species approaches a moment of truth that may never obviously arrive, there will be a growing chorus of voices worrying whether a bigger question than the definition of intelligence is the definition of the good human, when so much of what we might see as intelligence in its natural state is perverted in action by the festering agency of the seven deadly sins, animated by fear and enabled by ignorance.

Given the wide range of environments within which intelligence can reveal itself, and the vast spectrum of actions and behaviours that emerge within those environments, it may be the very definition of the fool’s errand to attempt an anywhere anytime definition of intelligence itself. We can learn only so much by laboratory-based comparisons of brains and computers, for example, balancing physiological correlations in the one with mechanistic causations in the other.

Glimmerings of clarity emerge only when one intelligent agent is pitched against another in a task-oriented setting, the victory of either one being equated with some sense of superior intelligence when all that has really happened is that one agent has addressed an explicitly defined task better than the other, because the parameters of that task could be articulated.

What appears to distinguish human intelligence in the evolutionary sense is the capability to adapt not only in the face of threats and existential fear, but in anticipation of imagined projections of all manner of dangers and terrors. We hone our intelligence in facing down multiple threats; we achieve wisdom by facing down the fear that comes with being human.

Fear is not innate to the machine but it is to us, as Franklin D Roosevelt understood. However machines progress to any singularity, humanity’s best bet lies in understanding how the conquering of fear will enhance our intelligence and our adaptive capabilities to evolve through the singularity, and beyond.

Rosetta, Peres and Trump: a study in contrasts

On the day that the Rosetta mission reached a deliberate and lonely climax on a distant comet, and Shimon Peres was buried in Israel, we saw the peaking of two great narrative arcs that define so much of the glory of what it means to be human. The first represents another great triumph of science: a research journey that carried such detailed observation further into space than our species had ever taken it. The legacy is a mountain of data for scientists to assimilate for decades to come, following the last pulse of intelligence from the expired spacecraft itself.

The second is up there with the Mandela story: Shimon Peres, international statesman and Israeli icon, a man of peace who could bring the planet’s greatest and best to attend his funeral. But like Mandela before him, Peres shines especially as a man whose odyssey took him through violence to an understanding that there is more security and happiness in peace than there is in war. Tough getting there, tough staying there, but worth the effort – and inspiring to everyone who believes that as monkeys became human, so humans may one day become something better yet.

Rosetta and Peres, science and statesmanship, collaborate on this day to remind humanity of the benefits of evolutionary progress.

Agnotologist Donald Trump stands apart from both. He too has become an icon: not of progress and hope but of the wages of ignorance, the triumphs of fear and bias, the submission of means to ends and the subversion of truth to the primacy of the pre-ordained outcome. While he himself represents no triumph of evolution, he at least is prompting reflections on how the human mind works (or doesn’t), particularly in its possible impact on other minds.

Another Donald once bemused the world with his musings on “known knowns” – the things we know that we know. He distinguished them from things we know we don’t know, and the unknown things that remain unknown to us. In ignoring the fourth permutation – the unknown knowns – The Donald that was Rumsfeld ignored the very patron saint of ignorance.

So many things were known to Shimon Peres, and are known to contemporary science, that will forever be unknown to Donald Trump. His universe of ignorance remains as bleak and alien and dead as the distant comet with which humanity has at least established a first connection.

Choose: your country back or the future now

It has been a summer of unworthy frenzies, with the forces of conservatism and pessimism crying to have their country back or made great again. On the other side, characterised by the throwbacks as themselves the champions of “Project Fear”, were those who deny that mankind is on a doomed course. More positively, more thoughtfully: they remain adherents to a belief in the powers of education, clear thinking and focused choices. Of moving forward, and not back to our future.

One of the more frequently referenced books in recent weeks has been Progress: Ten Reasons to Look Forward to the Future, by Cato Institute senior fellow Johan Norberg. Favourably cited by Simon Jenkins in The Guardian, and by an editorial in The Economist, Norberg’s book sets out the case for how much, and how rapidly, the world is improving – at least from the perspective of its human masters; how much the case for atavistic pessimism is fed by ignorance and greed (much of it encouraged by a complicit media); and most inspiringly, how the brightness of our future is defined by the potential in our accumulating intelligence. The Economist piece concludes:

“This book is a blast of good sense. The main reason why things tend to get better is that knowledge is cumulative and easily shared. As Mr Norberg puts it, ‘The most important resource is the human brain…which is pleasantly reproducible.’”

By timely coincidence, intelligence both human and artificial has weighed in over the past week with positive expectations of our future. An article in MIT Technology Review is entitled “AI Wants to Be Your Bro, Not Your Foe” – possibly unsettling for those who might see either alternative as equally unsavoury, but its heart is in the right place. It reports on a Stanford University study on the social and economic implications of artificial intelligence, and on the currently launching Center for Human-Compatible Artificial Intelligence at UC Berkeley. Both initiatives are cognisant of the dangers of enhanced intelligence, but inspired by the vast potential in applying it properly.

For a shot of pure-grade optimism to finish, five inspiring applications of exponential technologies that lit up the recently concluded Singularity University’s Global Summit included one called “udexter”. Artificial intelligence is being deployed to address the challenges of unemployment arising from . . . the advances of artificial intelligence. It promises to counterbalance the decline in “meaningless jobs” by unleashing the world’s creativity.

Collaboration is the new competitive advantage

For years if not for decades, the buzzword in the worlds of innovation and enterprise, echoing through the lecture theatres of business schools and splattered across whiteboards in investment funds and marketing companies across the globe, has been disruption. Given the evolving world of exponential growth in computing power, artificial intelligence and deep learning, it may well be that this awful word “disruption” will soon occupy the status of has-been. Of course it will still exist, but as a by-product of unprecedented advances rather than a reverently regarded target of some small-minded zero-sum game.

Consider an article that appeared a few days ago on the Forbes website, asking rhetorically if the world-shifting changes that humanity requires, and to a large extent can reasonably expect to see, are likely to be achieved with a mind-set that is calibrated to “disrupting” existing markets. For each of the described challenges – climate change, energy storage, chronic diseases like cancer, and the vast horizons of potential in genomics, biosciences, and immunotherapies – collaboration rather than competition is emerging as the default setting for humanity’s operating system.

Mental models based upon cooperative networks will begin replacing the competitive hierarchies that only just managed to keep the wheels on the capitalist engine as it clattered at quickening speeds through our Industrial age, lurching from financial crisis to military conflict and back again, enshrining inequalities as it went and promoting degradations of Earth’s eco-system along the way. And why will things change?

Of course it would be nice to think that our species is at last articulating a more sustaining moral sense, but it won’t be that. It will simply be that the explosion of data, insatiable demands upon our attention, and the febrile anxieties of dealing with the bright white noise of modern life will render our individual primate brains incapable of disrupting anything remarkable to anything like useful effect.

The Forbes article concludes with admiration for what the digital age has been able to achieve, at least for as long as the efficacy of Moore’s Law endured. That law is soon to be superseded, however, by the emerging powers of quantum and neuromorphic computing, with the consequent explosion of processing efficiency that will take our collective capabilities for learning and for thinking far beyond the imaginings of our ancient grassland ancestors.

Working together we will dream bigger, and disrupt far less than we create.

Fearing not fear, but its exploitation

How has politics affected humanity’s power to conquer fear? On the centenary day of arguably the greatest failure of political will and imagination that British politics has ever known, we can speculate on lessons learned (if any) and project another 100 years into the future and ask if a breakthrough in SuperIntelligence is going to make a difference to the way in which humanity resolves its conflicts.

We look back to 1 July 1916, a day on which the British Army suffered some 20,000 dead at the start of the Battle of the Somme, and wonder if SuperAI might have made a difference. What if it had been available . . . and deployed in the benefit/risk analysis of deciding yea or nay to send the flower of a generation into the teeth of the machine guns?

Would there have been articles in the newspapers in the years preceding the “war to end all wars”, speculating on the possible dangers of letting the SAI genie out of its bottle? Would these features have been fundamentally different from much more recent articles that have appeared in the last few days: this one in the Huffington Post speculating on AI and Geopolitics; or this one on the TechRepublic Innovation website, canvassing the views of several AI experts on the Microsoft CEO’s recent proposal of “10 new rules for Artificial Intelligence”? Implicit in all these speculations is that we must be careful not to let loose the monster that might dislodge us from the Eden we have made of our little planet.

Another recent piece, “The Neuropsychological Impact of Fear-Based Politics” in Psychology Today, references the distinct cognitive systems that inspired Daniel Kahneman’s famous book, “Thinking, Fast and Slow”. Humans really are “in two minds”: the one driven by instinct, fear and snap judgements; the other slower, more deliberative and a more recent development in our cognitive history.

A behaviour as remarkable for its currency among the political classes as for its absence from the deliberative outputs of “intelligent” machines is the deceptive pandering to fear in the face of wiser counsel from people who actually know what they are talking about. The rewards for fear and ignorance were dreadful 100 years ago, and a happier future will depend as much upon our ability to tame our own base natures as on the admitted necessity of instilling ethics and empathy in SAI.

AI and human values: but which humans?

Can artificial intelligence develop without bias, evolving with no tendency to favour any one set of values over another? An Op-Ed piece in the New York Times makes its view clear enough in its choice of title: “Artificial Intelligence’s White Guy Problem”. From face-recognition software that mistakes black people for gorillas to predictive software that operates on assumptions about recidivism that would provoke outrage if articulated by human agents: these egregious mistakes have at least done us all a service in alerting us to the fact that algorithms are not by nature free of moral considerations. They work as distinct outputs of the people who craft them, reflecting the choices and betraying the values of their creators.

Bias, however unintentional, proceeds from privileged perspectives that go beyond assumptions of colour and race. The Times Op-Ed instances the gender-driven discrimination revealed in last year’s academic study of Google job advertisements, which found that men were more likely than women to be shown adverts for the more highly paid jobs. Elsewhere, there was the instance of political bias reported in the Financial Times, concerning Facebook’s alleged bias in favouring liberal over conservative views. That led to a story in Salon asking whether “Google results could change an election”.

Work is being done on what a thoughtful blog on the London School of Economics website refers to as “algorithmic fairness – a systematic way to formalise hidden assumptions about fairness and justice so that we can evaluate an algorithm for how well it complies with these assumptions.” But vigilant circumspection must ensure that humanity never leaves it to algorithms to protect human values. In spinning the wheels of privilege, it may seem to the (mostly) wealthy, white, older, male population driving AI’s development that humanity’s existential risks lie some way off in the future. The reality for the less privileged is that the impacts of implicit AI bias are being felt now.
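
To make the idea concrete – a minimal sketch of my own, not drawn from the LSE post – one commonly formalised assumption is demographic parity: the rate of favourable decisions should be broadly similar across groups. The toy Python audit below, with invented data, hypothetical group labels and an arbitrary 0.1 tolerance, shows how an algorithm’s outputs can be checked against that single, explicit assumption rather than presumed fair.

```python
# Toy "algorithmic fairness" audit: demographic parity.
# The data, group labels and the 0.1 tolerance are illustrative assumptions only.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs; selected is True/False."""
    totals, chosen = defaultdict(int), defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        if selected:
            chosen[group] += 1
    return {g: chosen[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in favourable-decision rate between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical model outputs: which applicants were shown the better-paid adverts?
outcomes = [("group_a", True), ("group_a", True), ("group_a", False),
            ("group_b", True), ("group_b", False), ("group_b", False)]

print("Selection rates:", selection_rates(outcomes))
gap = demographic_parity_gap(outcomes)
print(f"Parity gap: {gap:.2f}", "(flag for review)" if gap > 0.1 else "(within tolerance)")
```

Demographic parity is only one of several competing definitions, and the point of the exercise is not that it is the right one; it is that once an assumption is written down, compliance can be measured and contested rather than left implicit.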

Bear necessities of augmented intelligence

A favourite joke involves two barefoot guys in the forest who spot a bear off in the distance, but lumbering towards them and closing fast. They have to run for it, but one stops first to put on his running shoes. The other laughs and asks if the running shoes are going to enable him to outrun the bear. Finishing his lacing up, the first guy smiles and says: “I don’t need to outrun the bear; I just need to outrun you.”

As to the anticipated challenges of matching the powers of Artificial Intelligence when it supersedes human capacities – when it graduates to “Super” status and leaves humanity floundering in its dust – we may take a lesson from the bear joke and wonder if there is a middle way.

It appears from all the media commentary that we have little idea now as to how to design SAI that will not obliterate us, whether by accident, or through its own malign design, or by some combination possibly exacerbated by the purposeful interventions of thoroughly malign humans. Can we at least get smart enough to do what we cannot yet do, and find a way of programming SAI to help us solve this?

Without getting into the philosophical difficulties of bootstrapping something like intelligence, two things are clear: we must get smarter than we are if we are to solve this particular problem, and brains take a long time to evolve on their own. We need an accelerant strategy, and it will take more than brain-training. Research must proceed more quickly in brain-computer interfaces, nano-biotechnology and neuropharmacology, and in the sciences of gene editing and deep-brain stimulation. While research into several of these technologies has been driven by clinical needs such as movement disorders and the treatment of depression, their potential for enhancing cognitive function is attracting greater interest from the scientific and investment communities. It is definitely becoming a bear market.

The future for politically correct AI

What would a politically correct artificial intelligence be like? Presumably the AI would need to be of the “Super” variety – SAI – to accommodate the myriad nuances of emotional intelligence implicit in PC behaviour. And if that behaviour had not been programmed into it, would it evolve naturally as a function of what intelligence is and does; or would it emerge out of the kind of emotional manipulation that has characterised the growth of PC among humanity? Maybe it would just back up early on and refuse to “respect” our feelings about its activities. It could reject the impertinence of a species that regards respect for feelings as critical to any definition of real intelligence, but then subverts that respect through cynical manipulations and an overarching need to be seen to be winning arguments rather than getting at the truth of something.

It is a cornerstone of human foibles that we are forgiving of our multiple biases despite our general ignorance of what those biases are. Add into this mix our capacities for wishful thinking, catastrophising, and feelings-based reasoning, and we may well wonder if we are indulging ourselves in building these foibles into our AIs to afford them equal opportunities for “emotional growth”. Perhaps, rather, we should allow for the limiting scope these foibles offer in the quest for truth and the optimal means of applying what we know to the challenges of life in the universe. We would then just say: nah, let’s leave out all the feely, human bits. What we want is SAI, pure and simple – Oscar Wilde notwithstanding.

In the tsunami of articles welling up on the subject of human intelligence, two that have most impressed in recent years appeared in The Atlantic: “Is Google Making Students Stupid?” (Summer 2014), and “The Coddling of the American Mind” (September 2015). The first examined our relationship with evolving technologies; the second struck at the heart of the internal relationship between our analytical and emotional selves. Both homed in on a truth that will be vital to humanity whatever AI should do, purposefully or by mindless accident. We will sell ourselves catastrophically short if, having developed an impressive toolkit for thinking critically about the world, we down those tools and risk letting the world go.

Taking the temperature of atheism

An interesting few weeks of Atheism in the News suggests that this is a good time to be taking its temperature, and seeing what the current state of affairs implies for the progress of human knowledge. And where are things? Three stories over the past seven days suggest that renowned atheist Christopher Hitchens may have had a change of mind on his deathbed; that comedian contrarians Bill Maher and Michael Moore are contemplating a documentary film, to be called “The Kings of Atheism”, featuring well-known comedians on a stand-up tour of the American Bible Belt; and that a $2.2 million donation will endow the USA’s first academic chair for the “study of atheism, humanism and secular ethics” at the University of Miami.

The first story is manipulative nonsense. Anyone who knew Hitchens or read him on religion will appreciate that the wit he exercised in his study of the science of belief will survive him for any realistic definition of eternity. He consolidated “what oft was thought, but ne’er so well expressed” and with such power as no tawdry revisionism can undermine. If we accept a central thesis of his thinking, that religious devotion inhibits the critical faculties of our species and puts at risk its cognitive evolution, there is a lot of work still to be done. That work cannot include indulging wishful thinking sustained by confirmation bias, or the spectacle of a man advertising himself as Hitchens’ friend making a sacrifice of that friendship on the altar of his greed.

The film idea? It must suggest some evolution of our species that within just a few centuries of heretics being flayed and burned for blasphemy, such a project might be announced on national television. But we might still think: good luck with that.

Most engaging is the question of the culture of the newly endowed chair. Will its terms of atheistic reference be reactive, obsessed with the old perception of atheists as humourless human husks with neither morals nor magic? Or will something emerge of the humanity that might have evolved if science had taken hold sooner, superseding religion and its hectoring certainty that, in Hitchens’ memorable words, “if you will abandon your critical faculties, a world of idiotic bliss can be yours”?