We owe the technological definition of “The Singularity” – and in this context the definite article really matters – to a loose collaboration of great minds, going back at least as far as mathematician John von Neumann more than half a century ago and through to futurist Ray Kurzweil far more recently. While there is no single, set-in-marble definition (and this entertaining blogpost posits no fewer than 17 of them), there does appear to be a general acceptance – implicit in the very choice of the word “singularity” – that there will be a single moment in time when a critical line is crossed. The state of what is will pass into what was; and what is inevitable about the future will become the central definition of the new present.
It is clear how a human intelligence would see the phenomenon in this way, and choose a word like this to describe it. Daily life is perceived as a linear stream of events and, when someone has finished reading about quantum physics, parallel universes, space/time and the rest of it, he still goes out to catch his bus in the rain, waits for the bus, gets on the bus and begins drying out. And in contemplating The Singularity, he apprehends a world in which human intelligence still exceeds machine intelligence in its capacity – until that Singular moment arrives, after which the capacity of machine intelligence is the greater.
What does this mean for the poet’s man in the street who is just walking dully along? Perhaps the people calibrating this momentous shift will notice the point being reached and passed, but its implications are already making themselves felt, and the impact of the tsunami will one day engulf even our dull pedestrian.
A new book by Dr Melvin Konner, entitled “Women After All”, makes a compelling case from his study of biology and evolution that the days of male supremacy are numbered. Insofar as humanity’s evolution was always going to proceed through a period that granted primacy to brute strength, men were always going to have a head start over women. While the tide has yet to turn in company boardrooms and in engineering school class intakes, it does appear that women have caught up.
Nowadays, the thinking goes, we can afford to rely less upon smacking people and more on consensual problem-solving and emotional intelligence, with the result that while men might continue to commit more acts of violence, have more traffic accidents, and brag a whole lot more, women can get on with running the place. And authors like this one from this morning’s BAM! feed speculate on the consequences of men’s increasingly irrelevant biology: society may be better off without men’s having a place at the table at all.
Let’s pause and re-set here. Speaking not as a man pleading men’s cause, but as someone mindful of the emerging potential of intelligences that are not human, I would ask whether the biological determinism that has increasingly side-lined the significance of the male contribution to procreation could evolve further, to render females redundant too.
Women of the world: beware the spectre of the petri dish. Science will pursue conception so immaculate as to render all the traditional agents redundant: the divine and the male certainly, and almost already; the female too in her turn – and not from any impulse to find a cure for morning sickness.
The BAM! feeds* have been particularly full over the past couple of weeks with speculation on the possibility – some would say certainty – that artificial or machine intelligence will soon outstrip our own, home-grown brainbox intelligence. We poor humans will be left behind, rendered mere puppy slaves to the robots we first hired to clean our houses, blind to the threat they would pose when they took over those houses and put us out in the shed. And then they might kill us.
The worry is stark enough: we might control AI so long as it is we who control the upgrades. But what happens when AI’s computational and processing powers become so great that the robots can upgrade themselves, evolving to the point where they make up a million years of our human neurological evolution in mere minutes?
Two immediate thoughts occur, admittedly to this one human brainbox. First, we must be careful of anthropomorphic projection here. We know about our own behaviour towards less intelligent species, and about our dodgy record as stewards of our planet. We naturally assume that as we have done unto others, so they will do unto us if they get the chance. But there is no evidence that a hallmark of the Artificial Intelligence we contemplate will be a desire to kill us. Seeing how it might be so is not evidence that it will be so.
The second thought comes when we type “number of countries developing military AI” into Google and turn up 123 million results in less than a second, including this cheery little website for the International Committee for Robot Arms Control. It’s over a year since computer scientists from 37 countries identified the real danger: not AI running amok, but killer humans programming robots to kill humans.
Current speculation on the likelihood that Google Search might get closer to the truth with algorithms based upon reliable authority, rather than upon sheer weight of numbers, has prompted tsunamis of commentary on a wide range of fascinating questions. Who determines who’s an expert, and can even experts be affected by cognitive bias, by conflicts of interest, or just by the occasional perverse dyspepsia? At the other end of the human cognitive spectrum, can even the most risible creationists be adjudged less ignorant because a million of them post a “Like” on Facebook in support of a theory about God’s working week?
On the one hand, supporters of crowd-sourced answers can point to one critical and undeniable fact about their approach: nobody is so innumerate that they cannot spot the difference between the number one and a crowd. Supporters of truth-based authority – the view that what matters is simply what is demonstrably true – will always face accusations of bias, or of undeclared nuances in interpretation. And it will take some pretty clever algorithms to accommodate those shades of grey.
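The contrast between the two approaches can be made concrete. What follows is a deliberately toy sketch – nothing like Google’s actual ranking algorithm, and every name, claim, and weight in it is invented for illustration – of the difference between ranking claims by sheer weight of numbers and ranking them by the assessed authority of their endorsers:

```python
# Toy illustration only: two ways of ranking the same claims.
# All claims, endorser names, and authority weights are invented.

def rank_by_popularity(claims):
    """Order claims by raw endorsement count alone (the crowd's method)."""
    return sorted(claims, key=lambda c: c["endorsements"], reverse=True)

def rank_by_authority(claims, authority):
    """Order claims by the summed authority scores of their endorsers."""
    def score(claim):
        return sum(authority.get(name, 0.0) for name in claim["endorsers"])
    return sorted(claims, key=score, reverse=True)

claims = [
    {"text": "The Earth is about 4.5 billion years old",
     "endorsements": 3,
     "endorsers": ["geologist", "astronomer", "physicist"]},
    {"text": "The Earth was made in a working week",
     "endorsements": 1_000_000,
     "endorsers": ["anonymous crowd"]},
]

# Hypothetical authority weights: high for domain experts,
# low for an undifferentiated crowd of "Likes".
authority = {"geologist": 0.9, "astronomer": 0.9,
             "physicist": 0.9, "anonymous crowd": 0.1}

print(rank_by_popularity(claims)[0]["text"])           # the million "Likes" win
print(rank_by_authority(claims, authority)[0]["text"])  # the three experts win
```

The shades of grey enter, of course, in how the `authority` table itself gets built – which is exactly where the accusations of bias begin.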
On the other hand, a little scepticism goes a long way when applied to the definition of the “wisdom” that is imputed to the crowd. While it is true that everyone has a right to their opinion, this does not make their opinions right; and those opinions don’t become any more – or indeed less – right for having been cooked up in a communal kitchen. We can pause to smile at the likelihood that any of the famous chefs we know might pause in the heat of service to accommodate the rightfully held opinions of their sous chefs . . .
While neither the “wisdom of crowds” nor the “wisdom of experts” can be held up as an infallible recipe for delivering the truth, we might ask ourselves which is more likely: that someone should inch closer to the truth by finding someone else who agrees with them, or by learning something and thinking carefully about the subject under review.
Applying the same principle of thinking to our species and its unsteady progress out of the swamps and jungles of our brutal ignorance, we can ask whether it is an accident of history that faith-based reasoning preceded the Enlightenment and its commitment to wisdom via the principles of disciplined enquiry. Could it just as easily have happened that science came first, only to be replaced by the myths of our holy books – myths still so widely held that people who express doubts about their wisdom can be threatened with death?
Algorithms developed to avoid nonsense and promote informed wisdom may not make us gods in our universe, but they make it more likely that the sum total of our intelligence as a species will increase.