“Why Cryonics Makes Sense” obviously made sense to Tim Urban of WaitButWhy, and he makes a decent case for why it ought to make sense for the rest of us, too. Why suffer the shuffling off of the mortal coil if we can develop the technology to revivify the coil? Like all Urban posts, this one is witty, thoughtful and well-researched, but there is a strong sense that cryonics – the business of preserving people after death ahead of their future restoration to full health when the science is up to it – is simply a technical challenge rather than a starting line for some fundamental metaphysical probings on the meaning of human identity.
On a purely technical level, it certainly seems plausible that something will lurch forth out of the dry ice, just as the uploaded brains of the Martine Rothblatts and Ray Kurzweils of the world will enable downloads of something possibly reminiscent, to anyone around who might care to reminisce, of the original models. But to all of these people the question must be posed: just who precisely do they think they are, going off on all these assumptions as to who they might be when they wake up, or come off of their clouds? And this question is posed in the strictly literal and non-adversarial spirit of: “Really, folks: what precisely is it that makes you you?”
Of all the illusions and biases that shade and nuance human existence, perhaps the greatest is implicit in the declaration: “it is what it is”. Can eight billion people, squeezing their way together through time and space in a multiverse of fadings and becomings, somehow exist so independently of one another as to be each of them their own little “it”, so sure of their place in time as to inhabit their own little “is”? Might this seem from any point of cosmic objectivity to be a bit egocentric?
But surely not as egocentric as to suppose further that any one “it” that “is” might take a time out and rejoin life at some later date, by which time all those billions of “its” have been reshuffled beyond all recognition since that long-ago moment when our prodigal embarked upon a cryonic snooze, or was uploaded into the cybermist.
Against the vast sweep of evolutionary time, the mere c.8,000 human generations through which our species has evolved will not have seen a huge change in the interplay of physics, chemistry and biology that distinguishes the functioning of the human brain. Our first old grandpa, standing on his ancient plain, will have gazed at the moon with something pretty close to the neural equipment we now possess. His first thoughts will have been for the importance of moonlight in betraying the predator or illuminating the water hole.
Later on, in appreciating the moon’s significance, his anthropomorphising imagination would divine a personality to whom appropriate propitiations might be directed to ensure a healthy crop – or so he would imagine. It would be another 7,998 generations before that same brain would figure out how to put us on the very moon that instigated all this lunar thinking in the first place.
Brain scientists at Carnegie Mellon University have moved us closer to grasping how it is that old brains can surpass old dogs in learning new tricks. Enhanced scanning technologies have enabled researchers at the university’s Scientific Imaging and Brain Research Center to determine how specific areas of the brain, once attuned to meeting the challenges of survival, can now be identified as the same areas that grasp the principles of advanced physics. It appears, for example, that the same neural systems “that process rhythmic periodicity when hearing a horse gallop also support the understanding of wave concepts” – wavelengths, sound and radio waves – in physics. It is thought that such knowledge will improve the teaching of science.
The CMU team are by no means the only scientists pursuing this line of enquiry. A selection of stories gleaned from just a few days of monitoring the brain science media reveals one study that “finds where you lost your train of thought”; another that believes it can pinpoint where personality resides in the brain (that would be the frontoparietal network); and yet another that can distinguish you by reading your brainwaves with no less than 100% accuracy. And this is all before we get into the vast phantasmagoria of scientific explorations of the human brain on magic mushrooms, ecstasy and LSD. And we will leave music for another day.
For anyone worried about our fixation on the adversarial trope of man v machine, two articles appear on the same day, promoting convergence over conflict.
First, “A key to the human brain” on the BCS website is both refreshing and instructive. Described as the “first of four articles on the implications of the convergence of computing, biogenetics and cognitive neuroscience”, it replaces oppositional thinking with reflections on brain-computer convergence, believing that “human and artificial intelligence and their relation to the genetic code offer enormous opportunities for new research and innovation.” Whether the end game of all this is “prosthetic brains” will leave some people wondering if such a thing is really possible, and no doubt as many wondering if it is what humanity truly needs.
Second, a feature on the TechCrunch website reflects on “The era of AI-human hybrid intelligence”. It is less technical and richer in its links, and it offers some helpful thoughts on intelligence as it actually expresses itself. Natural language generation technology is clearly a wonderful thing, but for the foreseeable future it will need augmenting with human curation skills to grasp the subtle nuances that separate what is expressed from what is comprehended.
For the time being, we wonder if brains are being constrained to act more like computers (the slaves-to-technology school of thought) or if computers will be best improved by being modelled on the human brain. With articles such as these, our thinking is evolving beyond the choices implicit in A versus B, towards reflections on the potential in symbiosis and convergence. When synthesised intelligence becomes the objective, we move onwards as a species to considering A plus B.
In time to come, we will face a gigantic questioning of our identity as a species: is our goal to be the best we can be, and not just “faster, higher, stronger”? Does the focus change if we amend the Olympic motto to “faster, higher, stronger, smarter”? Do we aim to be as smart as we can be, or to go further still, becoming as intelligent as smart-without-limits may allow? And which choice offers better odds on survival?
On the surface, a weekend book review in the Financial Times reads like another gentle spin through today’s fashionable view that language is constantly in flux and the pedants who would presume to lecture us on correct usage need putting back in their box. If “most people” choose to say that black means white, then that’s that. In the face of diktats from the (customarily self-appointed) style Nazis, the proper response is a chorus of raspberries, middle fingers and whatevs, innit.
What may have been missed in Rebecca Gowers’ “Horrible Words: A Guide to the Misuse of English”, and was certainly absent in the FT review, is the extent to which language is a reflection of how its user thinks and feels. Scaled up to community level and played out over time, it becomes the aural heartbeat of an evolving culture. The choices made in speaking reflect the thinking process, and are often political, as anyone who has “put their foot in it” will have learned the hard way. And those who have learned, say, the difference between imply and infer will have learned something vital about the passage of meaning between speaker and listener. Explaining what they understand does not make them self-appointed style Nazis.
This is more than a trivial detour down the byways of nominative relativism: anyone more fastidious than I am about language is a pedant; those less so, barbarians. We will all feel the point of the principle of standards if the day comes when software developers make and monitor the rules. When the robots stand to humans as did the British army mapmakers to the rural Irish in Brian Friel’s great Translations, we may then better understand through loss what we had when we didn’t know its value.
Who is this guy? Google tells me that he’s a Dutch master and that, with a reputation burnished by more than three centuries of wondrous respect, he is possibly The Dutch Master. At 63, he died relatively young – as I would be bound to say, since I would have died six months ago had I been limited to his span. A popular comment among his biographers is that while he died a poor man, his passing scarcely marked, he has achieved immortality through his work. And now a story breaking this week tells us that a marriage of empathetic art curation and deep learning expertise has created a new painting in the signature style of Rembrandt himself.
In the week that another copy of the Shakespeare First Folio has been uncovered, we have been given plenty of grist to one particular mill that does not generally loom large in humanity’s reflections on immortality and the potential in the future intelligence of our species. What is the identity of any immortalised intelligence?
When we think about living forever, what is it precisely that is living, and for whom does it live? With the genius whose mind has been uploaded to the cloud, or whose remains have been cryogenically suspended and then restored, or whose DNA has enabled the cloning of the Spawn of Genius, who and for whom does this spawn exist? And how credible are its creative outputs? This is no small question. If we can imagine the painting that could be realised by Rembrandt’s revivified remains, and hold that up to the painting celebrated in this week’s story, in which artefact does the genius of The Dutch Master more authentically survive?
Much of the commentary on the overlap between robots and women is depressingly reflective of male needs and sensitivities, and of what all of this might mean for men. It can seem endless, from all the commentary on Ex Machina and its ilk, through tabloid wails about “bonkbots” that turn men into misogynist monsters, and on to the frenzy provoked by Microsoft’s recent TayBot debacle, which inspired one of our more recent blogs here on BAM!
One of the more thoughtful articles on this topic is this review on Quartz. Against the unsettling statistic that the share of American undergraduate degrees in computer science achieved by women tops out at around 20%, the article contains a series of reflective comments from a few of the women who have made it. Where more humanistic objectives are undervalued in favour of technocratic, adversarial ones (“My robot can whup your robot . . .”), it is not surprising that the public perception grows that the future of AI is not about service benefits but about the risks of robots run amok. In the face of these more apocalyptic scenarios, the Quartz review offers a heartening summary of projects being directed by women working with kinder, gentler robots.
Last week saw the publication of a sobering article in The Guardian, assuring us that “the tech industry wants to use women’s voices . . .” (i.e. for the Siris and Cortanas of the world) “. . . but they just won’t listen to them” (i.e. in working out how Siri et al will respond to real-life queries about, say, sexual assault). And so it will continue as long as “she” wants to be listened to by people asking questions that “she” does not understand. “Her” outputs clearly need more female inputs.