Tag Archives: Superintelligence

New Year offers promise for foxes and lunatics

As 2017 gears up for its short sprint to the inauguration of America’s next president, the mature media are recoiling at the prospect of the people the president-elect is gathering around him to help define the tone and agenda of his presidency. Whether we look at Energy, the Environment, or Education – and that’s just the letter E – the impression is not so much that American political culture will be driven by incompetents, as that the foxes and lunatics whose career missions have been to spread mayhem in specific areas have been put in charge of the very henhouses and asylums that the rest of us have been trying to protect from their rapacities.

A common theme in the media commentary is that this amounts to a war on science. It is certainly that, but it is more: it is a war on critical thinking, on expertise and, crucially, on empathy. What may prove most corrosive is the impact upon the key quality that separates human intelligence from its emerging machine correlate. It is empathy that emerged above so many other qualities when the cognitive explosion of some 60,000 years ago set the erstwhile Monkey Mind on its journey to the new and far more expansive cultural horizons of Homo sapiens.

Thinking in another person’s headspace is the cognitive equivalent of walking in another man’s shoes. It requires a theory of mind that allows for another creature’s possession of one, and an active consciousness that an evolving relationship with that other mind will require either conquest or collaboration. Academics can argue over the tactics and indeed over the practicality of “arguing for victory” or they can, in understanding the validity of empathy, agree with philosopher Daniel Dennett as he proposes some “rules for criticising with kindness”.

Amidst all the predictions for 2017 as to how human and artificial intelligence will evolve, we may hear more over the coming months about the relationship between intelligence (of any kind) and consciousness. To what extent will a Super Artificial Intelligence require that it be conscious?

And will we ever get to a point of looking back on 2016 and saying:

Astrology? Tribalism? Religion? Capitalism? Trump? What ever were we thinking? Perhaps in empathising with those who carried us through the infancy of our species, we will allow that at least for some of these curiosities, it was a stage we had to go through.

After the election: recovering from cancer and “the other stuff”

Following a newspaper feature by conservative commentator George Will, speculating that Donald Trump might serve in history as the American Republican Party’s “chemotherapy”, an oncologist writing in the online magazine Salon reminded his readers that people with experience of the various treatments available for cancer will recognise that chemotherapy is almost invariably a pretty tough gig.

The idea certainly provides a thought-provoking metaphor, however, not least because chemo seldom does the cancer’s host much good: along the way it ravages the body and soul of the patient every bit as much as, and sometimes more than, it tackles the cancer itself. But perhaps the point of the metaphor is to advance the plausible claim that the aftermath of the election, given something like an even modestly clear win for Clinton, would enable the GOP to survive and carry on with the bromide that it was Trump’s noxious temperament that lost it for them: the policies themselves were sound.

This would be a false premise. Not only are the GOP’s policies, broadly considered, not sound, but they have consolidated their appeal over several decades among a now noticeably declining voter demographic largely composed of angry and less well-educated white males. But it is not in fact the policies (on either side) that have defined the leitmotif of this election so much as the degenerate, juvenile, and poisonous atmosphere that has evolved around any articulation or community discussion of those policies.

Messages have been subverted by the increasing puerility of the mediums, or media, and people have quite simply and very largely been repelled by the whole stinking and thoroughly demeaning process.

In looking beyond the result of the election to its aftermath – potential armed unrest, possible litigation from losing candidates, lingering and truly cancerous rancour eating into the body politic of Washington culture for many years to come – it is useful to stick with the chemotherapy metaphor in examining several key themes that might have attracted the media’s attention over the course of a horribly protracted campaign, but did not. For the fact remains that whatever causes a cancer exists independently of any therapy; the cancer can metastasise; and while the quality of life can become increasingly uncertain as life carries on, the patient is still obliged to “get on with other stuff” as the cancer and/or the treatment progresses.

If today’s news is telling us that “one fifth of cancer patients (in the UK) face workplace discrimination”, what will tomorrow’s news tell us about the future prospects of the American body politic given the enervating drain upon its vitality by two years of a news cycle dominated by one deeply flawed, if not downright tumorous, human being? What is the “other stuff” that the world will have to be getting on with while it deals with the aftermath of the 2016 American election: addressing not only the tumour but the conditions that caused it and the likelihood of any metastasis?

The University of Cambridge’s Centre for the Study of Existential Risk (CSER) has a page on its website devoted to what it sees as the major risks to humanity, defined as such by their threat to the future of the human species and categorised in four broad groupings. Only one of these categories might with some generosity be seen as having been addressed over the course of the election campaign.

It would still be something of a stretch of the imagination to articulate how Donald Trump established any policy position on “Systemic risks and fragile networks”, which CSER defines as the tensions emerging between growing populations, pressures on resources and supply chains, and the technologies that are arising to address the challenges of a global eco-system increasingly defined by its interconnectedness. Trump would point out the systemic shortcomings of American trade negotiators historically unburdened by his vision and experience. As the candidate who actually possesses knowledge and experience of the nuances in balancing risk and reward in this area, Hillary Clinton at least had her views constantly at the ready whenever the media tired of asking her about her emails.

Heading the CSER list of existential risks and often cited by scientists, futurists and politicians as the greatest risk now afflicting the planet is what CSER terms “Extreme risk and the global environment” – known colloquially as climate change. Whatever the consensus among people who actually know what they are talking about, a significant proportion of the broader American public is disinclined to recognise that this problem even exists. The tools of evidence and critical thinking being largely Greek to this wider population, the American media clearly felt the whole subject to be too recondite to be engaging with the science-deniers in a language they couldn’t understand. Trump certainly couldn’t, and the media largely gave him a pass on this.

Most remarkably, the other two categories of risk on the CSER website were virtual no-go areas for both presidential campaigns and for the media, whose task it might have been to interrogate them if only they had the slightest inkling of the exponential pace of change that will define humanity’s progress in the coming years of the 45th president’s term of office. At some stretch, consideration of the “Risks from biology: natural and engineered” might be seen to feature in the work of Planned Parenthood and its vital contributions to female public health and human reproduction. But here Trump was in thrall to the fruitcake wing of the Republican party and, as this was one of the few areas in which candidate and party were in lockstep agreement, he was happy to blunder into embarrassing policy positions that were consistently undermined by Clinton’s expertise, her experience, her commitment to the cause and, finally, by simple and understandable gender solidarity.

Given the gap between the candidates on issues of female biology – not to mention the publicity given to Trump’s history of obsession with female sexuality stopping well short of the time that reproduction becomes an issue – this was possibly the area of policy discussion that has left the progressive media nonplussed that this election could ever have been run so close. In any case, the wider issues of existential risk and benefits relating to genetics, synthetic biology, global pandemics, and antibiotic resistance scarcely got a look-in over the course of the election’s somewhat onanistic “news cycle”.

Most tellingly, Artificial Intelligence hasn’t featured very much at all in this election. This is especially alarming given the final summary sentence on the CSER website section that addresses this particular area of risk: “With the level of power, autonomy, and generality of AI expected to increase in coming years and decades, forward planning and research to avoid unexpected catastrophic consequences is essential.” The silence has been deafening.

For all the speculation about what so-called Super Artificial Intelligence may mean some decades hence at the point of the “Singularity” — the thought-experimented point where machine/computer intelligence matches and then exponentially speeds past the capabilities of human intelligence — the real story now, in 2016, is almost as startling as it is inspiring.

In this year when “human” intelligence is grappling feverishly with a presidential choice between one candidate who has been careless with her email and another who is a self-professed sex pest and the most dangerous sort of conman (simultaneously large on attitude but bereft of a clue), this year alone has seen considerable advances in the capabilities of Artificial Intelligence, both for worse and for better.

The downsides include the possible misuse of private and commercial data, the increasing potential for fraud, and the threat of AI-directed/autonomous weapons systems. The upsides include faster and more efficient medical research, advances in virtual and augmented reality, safer cities through self-driving vehicles and infinitely more detailed intelligence-gathering on the workings of biology, chemistry, physics, and cosmology. In short, the wider universe is opening before our wondering eyes.

What is worrying amidst this quickening pace of AI technology is that the sort of circumspection we see articulated in media articles like this recent piece in TechCrunch is, first of all, not being reflected in wider public discussions incited by the American election. Second, there is no evidence that more frequent calls for ethical reflection on the challenges of AI might see progress in the ethical sphere keeping pace with developments in the AI science. This prompts at least three pretty obvious questions.

On the longer time horizon, as we contemplate a possible Singularity, what do we imagine that an emerging and self-conscious SuperAI might make of its human progenitor? If we have filled the intervening decades with steadfast ignoring of our existential threats, ever complacent about the real and enduring achievements of human imagination, and yet determined to elect our future leaders according to the bottom-feeding precedents suppurating forth from this week’s debasement of democracy, could any intelligence – human or “artificial” – be surprised if the post-Singularity machine should decide that man and monkey might share a cage?

In the medium-term, we might galvanise an appropriate response to the above paragraph by imagining what progress we might make over the next four years, given what has happened just over the course of 2016. Will the wise heads of 2020 be looking at that year’s American election in prospect and wondering how much more deliberation will be inspired by the questions so woefully ignored this year?

Specifically, will humanity have come to grips with the technological and ethical issues associated with the increasing pace of AI development? Will voters appraise that year’s slate of candidates on the basis of more intelligent policy positions: support for technology, for education in the sciences, and for the absolute necessity that our species evolve beyond its biases and primal fears, applying critical thinking and greater circumspection as we prise open a deeper understanding of our relationship with the cosmos we look set to join?

Which brings us to the short-term question: if we are to attain the promontories of wisdom implicit in addressing those challenges of the medium term, what do we have to start doing next week, next month, and throughout 2017? If we are to overcome the toxic and cancerous experiences of 2016, what are the fundamentals among “the other stuff” that we will need to address? What must we do to ensure that 2020 finds us clear-sighted and focused on the distant Singularity as a point of possibly quantum enhancement of human potential, rather than a humiliating relegation to the monkey cage?

By no means a comprehensive wish-list, nor sufficient in themselves to guarantee the progress of our species to that quantum gate, these twin areas of focus are proposed as at least necessary subjects for reflection, given the impact of their collective absence over these last unnecessarily anxious and ghastly 18 months.

First, keep it real: celebrate intelligence. We must not surrender to the pornography of simulation. Cyberspace has echoed with the cries of commentators decrying the ascendance of reality television over the dynamics of real life. The CBS CEO who admitted that the Trump campaign may not be good for America, but is “damn good for CBS” might prompt a useful debate on what the media are for. And he would not say it if people were not watching him, so another debate is necessary on how to encourage people to keep up with scientific progress as much as they keep up with the Kardashians. We need more public respect for science and for the primacy of evidence; and less indulgence of bias and political determinations driven by faith.

And as a sub-text to the promotion of intelligence, the organisers of presidential campaigns might reflect upon their role as custodians of the democratic process when they consider how best, and for how long, the 2020 campaign might proceed. Is an adversarial and protracted bear-baiting marathon an optimal way of revealing the candidates’ strengths and educating the public, or is it okay that it’s deemed to be damn good for the boss of CBS?

Finally, the American Republican Party is in need of a re-boot. To finish where we set out, with a thought for what might be good for what ails it should trumped-up chemotherapy fail: are they clear on their voter demographic’s direction of travel for the next four years, given what’s going on in the world? The same question applies to any government that would profit from enduring xenophobia or from exploiting atavistic bias and resolute ignorance. There is only so much to be gained by gerrymandering and pandering to inchoate fears, and no credit at all in impugning the very authority to which the cynic seeks election.

And there is absolutely no glory in taking countries back, or “making them great again”. Humanity reaches out, it moves forward, and looks up.

How does consciousness evaluate itself?

If “writing about music is like dancing about architecture”, perhaps the attempt to reflect conclusively on consciousness is like the old picture of Baron Munchausen trying to pull himself out of a swamp by his own pigtail. Despite the usual carpings in the commentary whenever any serious thinking is done online (gosh, if only the author had consulted with me first . . .) an article in Aeon Magazine by cognitive robotics professor Murray Shanahan of London’s Imperial College makes some important distinctions between human consciousness and what he terms “conscious exotica”. The key question he poses is summed up in the sub-headline: “From algorithms to aliens, could humans ever understand minds that are radically unlike our own?”

It’s a great question, even without wondering how much more difficult such an understanding must be when it eludes most of us even in understanding minds very much like our own. Shanahan sets out from a premise that defines intelligence as that which “measures an agent’s general ability to achieve goals in a wide range of environments”, from which we can infer a definition of consciousness as what it is when the measuring agent is the agent herself.

From there, Shanahan works up a spectrum of consciousness ranging from awareness through self-awareness, to empathy for other people and on to integrated cognition, wondering along the way if the displayed symptoms of consciousness might disguise distinctions in the internal experience of consciousness between biological and non-biological beings. The jury will remain out on the latter until Super AI is upon us, but reflections on the evolution of biological consciousness prompt another thought on the process of evolution itself.

There is nothing absolute about human consciousness. We are where we are now: our ancient ancestors might have gawped uncomprehendingly at the messages in “White Rabbit”, “Lucy in the Sky with Diamonds” and the rest of them, but the doors that were opened by the 60s counterculture were less about means than about ends. Enhanced consciousness was shown to be possible, if not downright mind-blowing. We in our time can only gawp in wondrous anticipation of what future consciousness may tell us about all manner of things, including even and possibly especially dances about architecture.

“Teach me half the gladness / That thy brain must know, / Such harmonious madness / From my lips would flow / The world should listen then, as I am listening now.” — Shelley’s To a Skylark

Not being a number does not make you a free man

Having listened last week to futurist Yuval Noah Harari talking at London’s Royal Society of Arts about his new book, Homo Deus, I am wondering how a conversation might go between Harari and The Prisoner. Next year marks 50 years since the iconic television series first broadcast what has become one of the catchcries of science fiction: “I am not a number: I am a free man!” Five decades on, the Guardian review of Harari’s book is sub-titled “How data will destroy human freedom”.

A fundamental difference between Harari’s hugely successful Sapiens and his new book is that the former involves reflections on how humanity has made it this far, whereas the new title is a speculation on the future. The former is rooted in memory; the latter involves conjectures that shift on the sands of uncertain definitions, as the above-linked Guardian review of Harari’s latest book reveals. “Now just hold on there” moments abound, as for example:

“Evolutionary science teaches us that, in one sense, we are nothing but data-processing machines: we too are algorithms. By manipulating the data we can exercise mastery over our fate.”

Without having the peculiarities of that “one sense” explained, it is hard to absorb the meaning of words like “nothing”, “manipulating” and “mastery”. Words matter, of course, and there are perils attendant upon concluding too much about human identity through the links that are implicit in lazily assumed definitions.

What happens to the god-fearing woman when she discovers there is no God? Is the workingman bereft when there is no longer any work? If people refuse to accept the imprisonment of numbers assigned to them by other people, are they thus necessarily free? How much is freedom determined not by actions, but by thoughts? And critically: if our thinking is clearer and more careful, can we be more free?

In the Q&A that concluded the RSA event, Harari missed an opportunity when he was asked about the future prospects of education. What will we teach children in the data-driven future of Super Artificial Intelligence? Interestingly, neither maths nor sciences got a mention, and it seemed we might just have to see when the future arrives. But it must be true that a far higher standard in teaching reasoning and critical skills will be essential unless humanity would contemplate an eternal bedlam of making daisy chains and listening to Donovan.

Collaboration is the new competitive advantage

For years if not decades, the buzzword in the worlds of innovation and enterprise, echoing through the lecture theatres of business schools and splattered across whiteboards in investment funds and marketing companies across the globe, has been disruption. Given the evolving world of exponential growth in computing power, artificial intelligence and deep learning, it may well be that this awful word “disruption” will soon occupy the status of has-been. Of course disruption will still exist, but as a by-product of unprecedented advances rather than the reverently regarded target of some small-minded zero-sum game.

Consider an article that appeared a few days ago on the Forbes website, asking rhetorically if the world-shifting changes that humanity requires, and to a large extent can reasonably expect to see, are likely to be achieved with a mind-set that is calibrated to “disrupting” existing markets. For each of the described challenges – climate change, energy storage, chronic diseases like cancer, and the vast horizons of potential in genomics, biosciences, and immunotherapies – collaboration rather than competition is emerging as the default setting for humanity’s operating system.

Mental models based upon cooperative networks will begin replacing the competitive hierarchies that only just managed to keep the wheels on the capitalist engine as it clattered at quickening speeds through our Industrial age, lurching from financial crisis to military conflict and back again, enshrining inequalities as it went and promoting degradations of Earth’s eco-system along the way. And why will things change?

Of course it would be nice to think that our species is at last articulating a more sustaining moral sense, but it won’t be that. It will simply be that the explosion of data, insatiable demands upon our attention, and the febrile anxieties of dealing with the bright white noise of modern life will render our individual primate brains incapable of disrupting anything remarkable to anything like useful effect.

The Forbes article concludes with admiration for what the digital age has been able to achieve, at least for as long as the efficacy of Moore’s Law has endured. It is soon to be superseded, however, by the emerging powers of quantum and neuromorphic computing, with a consequent explosion of processing efficiency that will take our collective capabilities for learning and thinking far beyond the imaginings of our ancient grassland ancestors.

Working together we will dream bigger, and disrupt far less than we create.

Thinking about thinking, and multi-coloured hats

A sudden immersion in the world of end-of-term report cards brought me face to face last week with a note of my grand-daughter’s ability to “empathise with characters using red hat thinking”. Ignoring my own pedantic grimace at the syntactical implication that the application of “red hat thinking” might be a curriculum objective, I passed quickly through the bemusement that the thoughts of Edward de Bono have so passed into the vernacular of North London primary schools that they are referenced in lower case, and moved on to engage in a little blue-sky thinking of my own. How would a Super Artificial Intelligence (SAI) look wearing de Bono’s Six Thinking Hats?

A flick through the colour descriptions suggests that the thinking being considered has little to do with neurological processes or activities of mind, and is employed more in the colloquial sense of applying “new ways of thinking” that are lateral, or outside the box, or indeed even blue-sky. They betoken attitudes and at best establish distinctions that may be useful in achieving a result, getting the sale, appreciating an alternative point of view, or changing a mind.

So, in summary, we focus on the facts (white), think positive (yellow), pick holes (black), empathise (red), blue-sky (green?) and then consolidate the process (blue). While the metaphor gets a bit unwieldy towards the end – perhaps the blue sky should really be blue, and the process of fertilising and growing the end result should be green – it still leaves a question to play with: what would SAI do with these hats? After all, is it reasonable to suppose that if human intelligence evolved through a consciousness that manifested these attitudes, a machine intelligence might evolve in a similar way? And if it did, how would it get on with all this headgear?

Bearing in mind the extent to which AI starts from a programmed foundation for which no hats are required, in any sense, and evolves into SAI through an emerging capacity to enhance its own potency, it’s hard to see how any of these hats will matter except insofar as a programmed requirement to get along with humans is retained. The binary distinction of white and black would probably keep those hats in play. But in the link above, we note that the black hat (Judgement) is described as “probably the most powerful and useful . . . but a problem if overused.”

Religion is impeding our cognitive development

An article in today’s Guardian wonders if, with the accession of the UK’s new Prime Minister, Theresa May, the Conservative Party might be “doing God” again. The writer ponders on the sort of God this may be, suggesting that recent shifts in government policy may reflect an evolution in the culture beyond getting fussed about people’s sexual behaviour, focusing more on issues of social justice. The article does not comment on the possibility that some, or possibly all, of these issues might articulate an effectively moral direction without any assistance from scripture.

Humanity is maturing, leaving the Bible behind with its atavistic obsession with controlling promiscuity (“Is God a silverback?”, indeed). The quiet determinations of science continue to reveal wonders in creation beyond the imaginings of our comparatively ignorant ancestors of two millennia ago, although there is no shortage of efforts to reverse engineer those imaginings for the amazement of the gullible. Witness the consternation of America’s Bill Nye (the “Science Guy”) when he recently visited a recreation of Noah’s Ark in Kentucky. It seems that the price of progress is still vigilance.

Back in the evolving world, we are about to witness a quantum enhancement in human intelligence that may exceed in its impact what the evolution of vision appears to have accomplished in the Cambrian explosion of 545 million years ago, according to this Financial Times feature on a stunning new exhibition at London’s Natural History Museum.

It is hard to see what formally organised religion might contribute to all of this going forward. It will not be enough to maintain a charade that a focus on good works, social justice and community cohesion is sufficient when any of those activities could as easily be pursued for their own sakes. What is more troubling is the potentially retardant effect of embracing the cognitive dissonance that comes with cherry-picking what is estimable from holy texts while accommodating in the darker recesses of our minds the egregious bits of a belief system that, to put it mildly, has outlasted its credibility.

What sort of brain do we wish to bequeath to our generations to come? If there is to be a new Cambrian-style explosion in what the human brain can do, it will not be aided by clinging on to the intellectually untenable while denying the means by which we may grasp new ways of knowing, and thinking, and becoming.

Fearing not fear, but its exploitation

How has politics affected humanity’s power to conquer fear? On the centenary day of arguably the greatest failure of political will and imagination that British politics has ever known, we can speculate on lessons learned (if any) and project another 100 years into the future and ask if a breakthrough in SuperIntelligence is going to make a difference to the way in which humanity resolves its conflicts.

We look back to 1 July 1916, a day on which the British Army suffered almost 20,000 dead at the start of the Battle of the Somme, and wonder if SuperAI might have made a difference. What if it had been available . . . and deployed in the benefit/risk analysis of deciding yea or nay to sending the flower of a generation into the teeth of the machine guns?

Would there have been articles in the newspapers in the years preceding the “war to end all wars”, speculating on the possible dangers of letting the SAI genie out of its bottle? Would those features have been fundamentally different from articles that have appeared in the last few days: this one in the Huffington Post speculating on AI and geopolitics, or this one on the TechRepublic Innovation website, canvassing the views of several AI experts on the Microsoft CEO’s recent proposal of “10 new rules for Artificial Intelligence”? Implicit in all these speculations is the warning that we must be careful not to let loose the monster that might dislodge us from the Eden we have made of our little planet.

Another recent piece, in Psychology Today, “The Neuropsychological Impact of Fear-Based Politics”, references the two distinct cognitive systems described in Daniel Kahneman’s famous book, Thinking, Fast and Slow. Humans really are “in two minds”: one driven by instinct, fear and snap judgements; the other slower, more deliberative and a more recent development in our cognitive history.

A behaviour as remarkable for its currency among the political classes as for its absence from the deliberative outputs of “intelligent” machines is the deceptive pandering to fear in the face of wiser counsel from people who actually know what they are talking about. The rewards for fear and ignorance were dreadful 100 years ago, and a happier future will depend as much upon our ability to tame our own base natures as on the admitted necessity of instilling ethics and empathy in SAI.

Bear necessities of augmented intelligence

A favourite joke involves two barefoot guys in the forest who spot a bear off in the distance, lumbering towards them and closing fast. They have to run for it, but one stops first to put on his running shoes. The other laughs and asks if the running shoes are going to enable him to outrun the bear. Finishing his lacing up, the first guy smiles and says: “I don’t need to outrun the bear; I just need to outrun you.”

As to the anticipated challenges of matching the powers of Artificial Intelligence when it supersedes the capacities of what humans can do – when it graduates to “Super” status and leaves humanity floundering in its dust: we may take a lesson from the bear joke and wonder if there is a middle way.

It appears from all the media commentary that we have little idea now as to how to design SAI that will not obliterate us, whether by accident, or through its own malign design, or by some combination possibly exacerbated by the purposeful interventions of thoroughly malign humans. Can we at least get smart enough to do what we cannot yet do, and find a way of programming SAI to help us solve this?

Without getting into the philosophical difficulties of bootstrapping something like intelligence, two things are clear. We must get smarter than we are if we are to solve this particular problem; and brains take a long time to evolve on their own. We need an accelerant strategy, and it will take more than brain-training. Research must proceed more quickly in brain-computer interfaces, nano-biotechnology and neuropharmacology, and in the sciences of gene editing and deep-brain stimulation. While research into several of these technologies has been driven by clinical conditions such as movement disorders and depression, their potential for enhancing cognitive function is attracting greater interest from the scientific and investment communities. It is definitely becoming a bear market.

Our candle flickers in the cosmic night

On the evidence of our own species, it seems that intelligent life cannot develop without evolving a capability for destroying itself, or at least for acquiescing in a decline through obsolescence into oblivion. What is additionally remarkable is how rapid the evolution from dust particle to dust particle can be, notwithstanding the brief sparkle of celestial fire that lights the passage in between.

This shelf life of intelligence needs to be borne in mind when running the Drake Equation on the likelihood of extra-terrestrial intelligent civilisations. It is not enough that they defy the odds to kindle themselves into existence. Their chance of connecting with any similar intelligence elsewhere in the universe will be slight if they cannot manage to stick around for somewhat longer than the mere slight smear of time in which Homo sapiens has illuminated its small corner of one little galaxy.

The huge number of intelligent civilisations that might be calculated on the Drake formulation becomes somewhat more meagre if they appear and disappear on the timeline that our own civilisation seems determined to follow. A truly universal telescope programmed to detect intelligent life might, over the course of all of time, pick up the flickering in and out of existence of many thousands of smart civilisations like some vast constellation of fireflies, flaring up in their nanoseconds of existence amidst the vasty depths of the cosmic wilderness. If all civilisations play out like ours – at the time of writing occupying its perch atop Earthly creation for a minuscule fraction of one percent of our planet’s existence – then the chances of any two co-existing, and co-existing in something like the same galactic neighbourhood, must be small.
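The Drake formulation can be sketched numerically. The parameter values below are illustrative assumptions of my own, not figures from this text or from any settled science; the exercise simply shows how the civilisation-lifetime term L swamps every other factor, which is the fireflies point above.

```python
# A minimal sketch of the Drake equation. All parameter values are
# illustrative assumptions; the point is only to show the dominance
# of the lifetime term L in the final estimate.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number of detectable civilisations in our galaxy.

    R_star: average rate of star formation (stars per year)
    f_p:    fraction of stars with planets
    n_e:    habitable planets per star that has planets
    f_l:    fraction of those on which life appears
    f_i:    fraction of those that develop intelligence
    f_c:    fraction of those that become detectable
    L:      years a civilisation remains detectable
    """
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Hold everything fixed except L, the "firefly" lifetime term.
assumed = dict(R_star=1.5, f_p=1.0, n_e=0.2, f_l=1.0, f_i=1.0, f_c=0.2)

for L in (100, 10_000, 1_000_000):
    print(f"L = {L:>9,} years -> N = {drake(**assumed, L=L):,.0f}")
```

With these assumed inputs the estimate runs from a handful of civilisations at L = 100 years to tens of thousands at L = 1,000,000: everything hangs on how long the flicker lasts.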

And when one ET does reach out successfully to us, what are the odds that its message – once filtered through the interstellar Rosetta Stone we have yet to invent – might say anything more meaningful than “Come quickly!”?