Tag Archives: Genetics

Intelligence does not grow in a petri dish

As there are neither agreed rules nor a generally accepted definition of intelligence, nor any consensus on which consequences of human or machine behaviour betoken intelligence, natural or artificial, it will be difficult to measure the surpassing of any point of singularity at which machine intelligence matches and exceeds our own.

It may well prove to be the case that, when we think we have got there, we will have supreme exemplars on both sides of the bio/non-biological intelligence divide asking us whether it any longer matters. And as our species approaches a moment of truth that may never obviously arrive, there will be a growing chorus of voices worrying that a bigger question than the definition of intelligence is the definition of the good human, when so much of what we might see as intelligence in its natural state is perverted in the course of action by the festering agency of the seven deadly sins, animated by fear and enabled by ignorance.

Given the wide range of environments within which intelligence can reveal itself, and the vast spectrum of actions and behaviours that emerge within those environments, it may be the very definition of the fool’s errand to attempt an anywhere anytime definition of intelligence itself. We can learn only so much by laboratory-based comparisons of brains and computers, for example, balancing physiological correlations in the one with mechanistic causations in the other.

Glimmerings of clarity emerge only when one intelligent agent is pitched against another in a task-oriented setting, the victory of either one being equated with some sense of superior intelligence, when all that has really happened is that an explicitly defined task has been better addressed because its parameters could be articulated.

What appears to distinguish human intelligence in the evolutionary sense is the capability to adapt not only in the face of threats and existential fear, but in anticipation of imagined projections of all manner of dangers and terrors. We hone our intelligence in facing down multiple threats; we achieve wisdom by facing down the fear that comes with being human.

Fear is not innate to the machine but it is to us, as Franklin D Roosevelt understood. However machines progress towards any singularity, humanity’s best bet lies in understanding how the conquering of fear will enhance our intelligence and our adaptive capacity to evolve through the singularity, and beyond.

After the election: recovering from cancer and “the other stuff”

Following a newspaper feature by conservative commentator George Will, speculating that Donald Trump might serve in history as the American Republican Party’s “chemotherapy”, an oncologist writing in the online magazine Salon reminded his readers that people with experience of the various treatments available for cancer will recognise that chemotherapy is almost invariably a pretty tough gig.

The idea certainly provides a thought-provoking metaphor, however, not least because chemo seldom does much good for the cancer’s host: along the way it ravages the body and soul of the patient every bit as much as, and sometimes more than, it tackles the cancer itself. But perhaps the point of the metaphor is to establish the claim that the aftermath of the election, given even a modestly clear win for Clinton, will enable the GOP to survive and carry on with the bromide that it was Trump’s noxious temperament that lost it for them, but that the policies themselves were sound.

This would be a false premise. Not only are the GOP’s policies, broadly considered, not sound, but they have consolidated their appeal over several decades among a now noticeably declining voter demographic largely comprising angry and less well-educated white males. But it is not in fact the policies (on either side) that have defined the leitmotif of this election as much as the degenerate, juvenile, and poisonous atmosphere that has evolved around any articulation or community discussion of those policies.

Messages have been subverted by the increasing puerility of the mediums, or media, and people have quite simply and very largely been repelled by the whole stinking and thoroughly demeaning process.

In looking beyond the result of the election to its aftermath – potential armed unrest, possible litigation from losing candidates, lingering and truly cancerous rancour eating into the body politic of Washington culture for many years to come – it is useful to stick with the chemotherapy metaphor in examining several key themes that might have attracted the media’s attention over the course of a horribly protracted campaign, but did not. For the fact remains that whatever causes a cancer exists independently of any therapy; the cancer can metastasise; and while the quality of life can become increasingly uncertain as life carries on, the patient is still obliged to “get on with other stuff” as the cancer and/or the treatment progresses.

If today’s news is telling us that “one fifth of cancer patients (in the UK) face workplace discrimination”, what will tomorrow’s news tell us about the future prospects of the American body politic given the enervating drain upon its vitality by two years of a news cycle dominated by one deeply flawed, if not downright tumorous, human being? What is the “other stuff” that the world will have to be getting on with while it deals with the aftermath of the 2016 American election: addressing not only the tumour but the conditions that caused it and the likelihood of any metastasis?

The University of Cambridge’s Centre for the Study of Existential Risk (CSER) has a page on its website devoted to what it sees as the major risks to humanity, defined as such by their threat to the future of the human species and categorised in four broad groupings. Only one of these categories might with some generosity be seen as having been addressed over the course of the election campaign.

It would still be something of a stretch of the imagination to articulate how Donald Trump established any policy position on “Systemic risks and fragile networks”, which CSER defines as the tensions emerging between growing populations, pressures on resources and supply chains, and the technologies that are arising to address the challenges of a global eco-system increasingly defined by its interconnectedness. Trump would point out the systemic shortcomings of American trade negotiators historically unburdened by his vision and experience. As the candidate who actually possesses knowledge and experience of the nuances in balancing risk and reward in this area, Hillary Clinton at least had her views constantly at the ready whenever the media tired of asking her about her emails.

Heading the CSER list of existential risks, and often cited by scientists, futurists and politicians as the greatest risk now afflicting the planet, is what CSER terms “Extreme risk and the global environment” – known colloquially as climate change. Whatever the consensus among people who actually know what they are talking about, a significant proportion of the broader American public is disinclined to recognise that this problem even exists. The tools of evidence and critical thinking being largely Greek to this wider population, the American media clearly felt the whole subject too recondite to engage the science-deniers in a language they couldn’t understand. Trump certainly couldn’t, and the media largely gave him a pass on this.

Most remarkably, the other two categories of risk on the CSER website were virtual no-go areas for both presidential campaigns, and for the media whose task it might have been to interrogate them had they the slightest inkling of the exponential pace of change that will define humanity’s progress in the coming years of the 45th president’s term of office. At some stretch, “Risks from biology: natural and engineered” might be seen to feature in the vital work of Planned Parenthood in the areas of female public health and human reproduction. But here Trump was in thrall to the fruitcake wing of the Republican party and, as this was one of the few areas in which candidate and party were in lockstep agreement, he was happy to blunder into embarrassing policy positions that were consistently undermined by Clinton’s expertise, her experience, her commitment to the cause and, finally, by simple and understandable gender solidarity.

Given the gap between the candidates on issues of female biology – not to mention the publicity given to Trump’s history of obsession with female sexuality, stopping well short of the point at which reproduction becomes an issue – this was possibly the area of policy discussion that left the progressive media most nonplussed that this election could ever have been run so close. In any case, the wider issues of existential risk and benefit relating to genetics, synthetic biology, global pandemics, and antibiotic resistance scarcely got a look-in over the course of the election’s somewhat onanistic “news cycle”.

Most tellingly, Artificial Intelligence hasn’t featured very much at all in this election. This is especially alarming given the final summary sentence on the CSER website section that addresses this particular area of risk: “With the level of power, autonomy, and generality of AI expected to increase in coming years and decades, forward planning and research to avoid unexpected catastrophic consequences is essential.” The silence has been deafening.

For all the speculation about what so-called Super Artificial Intelligence may mean some decades hence at the point of the “Singularity” — the thought-experimented point where machine/computer intelligence matches and then exponentially speeds past the capabilities of human intelligence — the real story now, in 2016, is almost as startling as it is inspiring.

In a year when “human” intelligence is grappling feverishly with a presidential choice between one candidate who has been careless with her email and another who is a self-professed sex pest and the most dangerous sort of conman (simultaneously long on attitude but bereft of a clue), we have also seen considerable advances in the capabilities of Artificial Intelligence, both for worse and for better.

The downsides include the possible misuse of private and commercial data, the increasing potential for fraud, and the threat of AI-directed/autonomous weapons systems. The upsides include faster and more efficient medical research, advances in virtual and augmented reality, safer cities through self-driving vehicles and infinitely more detailed intelligence-gathering on the workings of biology, chemistry, physics, and cosmology. In short, the wider universe is opening before our wondering eyes.

What is worrying amidst this quickening pace of AI technology is that the sort of circumspection articulated in media articles like this recent piece in TechCrunch is, first of all, not being reflected in the wider public discussions incited by the American election. Second, there is no evidence that the more frequent calls for ethical reflection on the challenges of AI are producing progress in the ethical sphere that keeps pace with developments in the science itself. This prompts at least three pretty obvious questions.

On the longer time horizon, as we contemplate a possible Singularity, what do we imagine that an emerging and self-conscious SuperAI might make of its human progenitor? If we have filled the intervening decades with steadfast ignoring of our existential threats, ever complacent about the real and enduring achievements of human imagination, and yet determined to elect our future leaders according to the bottom-feeding precedents suppurating forth from this week’s debasement of democracy, could any intelligence – human or “artificial” – be surprised if the post-Singularity machine should decide that man and monkey might share a cage?

In the medium-term, we might galvanise an appropriate response to the above paragraph by imagining what progress we might make over the next four years, given what has happened just over the course of 2016. Will the wise heads of 2020 be looking at that year’s American election in prospect and wondering how much more deliberation will be inspired by the questions so woefully ignored this year?

Specifically, will humanity have come to grips with the technological and ethical issues associated with the increasing pace of AI development? Will voters appraise that year’s slate of candidates on the basis of more intelligent policy positions: support for technology, for education in the sciences, and for the absolute necessity of our species evolving beyond its biases and primal fears, applying critical thinking and greater circumspection as we prise open a deeper understanding of our relationship with the cosmos we look set to join?

Which brings us to the short-term question: if we are to attain the promontories of wisdom implicit in addressing those challenges of the medium term, what do we have to start doing next week, next month, and throughout 2017? If we are to overcome the toxic and cancerous experiences of 2016, what are the fundamentals among “the other stuff” that we will need to address? What must we do to ensure that 2020 finds us clear-sighted and focused on the distant Singularity as a point of possibly quantum enhancement of human potential, rather than a humiliating relegation to the monkey cage?

By no means a comprehensive wish-list, nor even sufficient in themselves to guarantee the progress of our species to that quantum gate, these twin areas of focus are proposed as necessary subjects for reflection at the very least, given the impact of their collective absence over these last unnecessarily anxious and ghastly 18 months.

First, keep it real: celebrate intelligence. We must not surrender to the pornography of simulation. Cyberspace has echoed with the cries of commentators decrying the ascendance of reality television over the dynamics of real life. The CBS CEO who admitted that the Trump campaign may not be good for America, but is “damn good for CBS”, might prompt a useful debate on what the media are for. And he would not have said it if people were not watching, so another debate is needed on how to encourage people to keep up with scientific progress as much as they keep up with the Kardashians. We need more public respect for science and for the primacy of evidence, and less indulgence of bias and of political determinations driven by faith.

And as a sub-text to the promotion of intelligence, the organisers of presidential campaigns might reflect upon their role as custodians of the democratic process when they consider how best, and for how long, the 2020 campaign might proceed. Is an adversarial and protracted bear-baiting marathon an optimal way of revealing the candidates’ strengths and educating the public, or is it okay that it’s deemed to be damn good for the boss of CBS?

Finally, the American Republican Party is in need of a re-boot. To finish where we set out, with a thought for what might be good for what ails it should trumped-up chemotherapy fail: is the party clear on its voter demographic’s direction of travel over the next four years, given what is going on in the world? The same question applies to any government that would profit from enduring xenophobia or from exploiting atavistic bias and resolute ignorance. There is only so much to be gained by gerrymandering and pandering to inchoate fears, and no credit at all in impugning the very authority to which the cynical seek election.

And there is absolutely no glory in taking countries back, or in “making them great again”. Humanity reaches out, moves forward, and looks up.

Humanity on the cusp of enhancement revolution

An article from the Pew Research Center takes a long look at the subject of Human Enhancement, reviewing the “scientific and ethical dimensions of striving for perfection.” The theme of transhumanism is getting a lot of media attention these days, and it was no surprise when Nautilus weighed in, capturing the ambivalence over aging in a single issue three months ago: one article explained “why aging isn’t inevitable”, while another in the same issue argued that it is physics, not biology, that makes aging inevitable, because “nanoscale thermal physics guarantees our decline, no matter how many diseases we cure.” Hmmm . . .

Taking another perspective, a third Nautilus feature speculated on the old “forget adding years to life, focus on adding life to years” chestnut, asking if the concept of “morbidity compression” might mean that 90 will “become the new 60.”

On the day that this year’s Olympics kick off in Brazil, we can conclude our round-up of key articles with a fascinating contribution to the enhancement debate by Peter Diamandis of SingularityHUB, speculating on what Olympian competition might be like in the future “when enhancement is the norm.” And it is this last headline link that brings into sharp focus the major point on which most media commentaries on enhancement agree: the key word is “norm”.

Enhancement is in the natural order of things and never really manifests itself as a choice so long as it remains evolutionary: that is, moving so slowly that nobody much notices it. When change explodes with such momentum that nobody can fail to notice it, it begins to settle into being a new normal. And as Diamandis wraps up his extended thought experiment with a quick spin through synthetic biology, genomics, robotics, prosthetics, brain-computer interfaces, augmented reality and artificial intelligence, he concludes almost plaintively:

“We’ve (kind of) already started . . .”

As indeed we have. In today’s Olympian moment we can note that, whether or not human enhancement was part of “God’s plan” (as per the weakest section of the Pew article), the idea of Faster, Higher, Stronger certainly figured in the plans of Baron de Coubertin. Now, can this also mean Smarter? Left hanging in the otherwise excellent Pew piece is the question of whether a “better brain” might enable a better mind or, at least, a higher capacity for clearer and more creative thinking. Can we move our thinking up a gear?

Could medical research be more adventurous?

An article last month on the Nautilus website posed an interesting question: “Why is Biomedical Research So Conservative?” Possible answers were summed up in the sub-headline: “Funding, incentives, and scepticism of theory make some scientists play it safe.” This is not to say that there is insufficient imagination going into the research application of advances in machine learning and data management for improving health outcomes. In fact, another article appeared at about the same time on the Fast Company website, discussing “How Artificial Intelligence is Bringing us Smarter Medicine”. It grouped a host of impressive advances under five headings: drug creation, diagnosing disease, medication management, health assistance, and precision medicine.

One might say: ah yes, but that is machine smarts at the applied rather than the theoretical end of medical research. Yet that is less true of the first and last of these categories: supercomputers are very much engaged in the analysis of molecular structures from which it is hoped new therapies will emerge, and in the vast data sets being created within genomics as we move into a new era of precision, personalised medicine.

Nevertheless, it does seem that pure research in biology – as distinct from physics – has been playing it comparatively safe, and the Nautilus article provides the evidence for its ruminations. Natural language processing analysis of no fewer than two million research papers, and of 300,000 patents arising from work with 30,000 molecules, showed a remarkable bias towards conditions common in the developed world, with an emphasis on areas where the research roads are already well travelled – predominantly cancer and heart disease.
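For readers curious what such an analysis looks like in practice, here is a minimal sketch in Python of the general idea: counting how often different conditions are mentioned across a corpus of paper abstracts in order to expose the kind of frequency skew the article describes. The directory name and the short list of disease terms are illustrative assumptions on my part, not details of the Nautilus study, which applied far more sophisticated NLP to millions of documents.

```python
import re
from collections import Counter
from pathlib import Path

# Illustrative terms only – the real study mined ~2 million papers
# and 300,000 patents with far richer natural language processing.
DISEASE_TERMS = [
    "breast cancer", "heart disease", "alzheimer", "diabetes",
    "malaria", "tuberculosis", "dengue",
]

def count_disease_mentions(abstract_dir: str) -> Counter:
    """Count how many abstracts mention each disease term at least once."""
    counts = Counter()
    for path in Path(abstract_dir).glob("*.txt"):  # hypothetical corpus layout
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        for term in DISEASE_TERMS:
            if re.search(r"\b" + re.escape(term), text):
                counts[term] += 1
    return counts

if __name__ == "__main__":
    # A steep drop-off from the first few rows to the last is the
    # sort of "well-travelled roads" bias the article points to.
    for term, n in count_disease_mentions("abstracts").most_common():
        print(f"{term:>15}: {n:,}")
```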

My own communications work in the dementia environment – specifically Alzheimer’s disease – suggests that another reason may be in play. Where medical research has been less conservative, more adventurous, and broadly more successful, there has been more collaboration and a shared excitement around a commonly perceived mission. The more open-source, zealous, and entrepreneurial eco-system that has applied for decades in heart and cancer research – and which we now see on steroids in the field of artificial intelligence – has yet to capture the imagination of the wider biomedical community, where the approach remains generally more careful, more academic and inward-looking: just, more conservative.

Bear necessities of augmented intelligence

A favourite joke involves two barefoot guys in the forest who spot a bear off in the distance, lumbering towards them and closing fast. They have to run for it, but one stops first to put on his running shoes. The other laughs and asks if the running shoes are going to enable him to outrun the bear. Finishing his lacing up, the first guy smiles and says: “I don’t need to outrun the bear; I just need to outrun you.”

As to the anticipated challenge of matching the powers of Artificial Intelligence when it supersedes human capacities – when it graduates to “Super” status and leaves humanity floundering in its dust – we may take a lesson from the bear joke and wonder if there is a middle way.

It appears from all the media commentary that we have little idea now as to how to design SAI that will not obliterate us, whether by accident, or through its own malign design, or by some combination possibly exacerbated by the purposeful interventions of thoroughly malign humans. Can we at least get smart enough to do what we cannot yet do, and find a way of programming SAI to help us solve this?

Without getting into the philosophical difficulties of bootstrapping something like intelligence, two things are clear. We must get smarter than we are if we are to solve this particular problem; and brains take a long time to evolve on their own. We need an accelerant strategy, and it will take more than brain-training. Research must proceed more quickly in brain-computer interfaces, nano-biotechnology and neuropharmacology, and in the sciences of gene editing and deep-brain stimulation. While research into several of these technologies has been driven by clinical needs such as movement disorders and the treatment of depression, their potential for enhancing cognitive function is attracting greater interest from the scientific and investment communities. It is definitely becoming a bear market.

Where is the soul of your identity?

Who is this guy? Google tells me that he’s a Dutch master and that, with a reputation burnished by more than three centuries of wondrous respect, he is possibly The Dutch Master. At 63, he died relatively young – as I am bound to say, since I would have died six months ago had I been limited to his span. A popular comment among his biographers is that while he died a poor man, his passing scarcely marked, he has achieved immortality through his work. And now a story breaking this week tells us that a marriage of empathetic art curation and deep learning expertise has created a new painting in the signature style of Rembrandt himself.

In the week that another copy of the Shakespeare First Folio has been uncovered, we have been given plenty of grist for one particular mill that does not generally loom large in humanity’s reflections on immortality and on the future intelligence of our species. What is the identity of any immortalised intelligence?

When we think about living forever, what is it precisely that is living, and for whom does it live? With the genius whose mind has been uploaded to the cloud, or whose remains have been cryogenically suspended and then restored, or whose DNA has enabled the cloning of the Spawn of Genius, who is this spawn, and for whom does it exist? And how credible are its creative outputs? This is no small question. If we could imagine the painting that might be realised by Rembrandt’s revivified remains, and hold it up against the painting celebrated in this week’s story, in which artefact would the genius of The Dutch Master more authentically survive?

Could genius be genetically engineered?

A fascinating discussion is bound to develop around this topic when your chosen panel includes cognitive psychologist Steven Pinker and two professorial colleagues such as Dalton Conley (social sciences) and Stephen Hsu (theoretical physics). The resulting video, filmed last week at New York City’s 92nd Street Y, would have been a good deal tighter than its hour-plus running time if the moderator had actually moderated rather than using the panel as a skittles run for his own theorising, but some key reflections couldn’t help bursting through.

Even the specialist panellists have been surprised at the pace of the science over the last few years, and at the resulting wider accessibility of gene editing technology as the knowledge spreads and costs plummet. On the basis that nature has won any nature v nurture debate, and that any feature of a living organism with a genetic foundation can be genetically engineered to enhance or diminish the effect of the gene or genes, it would appear that the answer to the headline question will be yes, even if it is not happening yet. More pressing than the “could it” question will be those questions related more to ethics and values than to data and science. These include “will it?”, “by whom?”, and “to what purpose?”

How we manipulate both nature and nurture in enhancing intelligence is familiar to anyone who chooses a smarter mate, a better school or a healthier lifestyle. In considering the viability of genetic engineering as just another such choice, we will be weighing the same filters of benefit over risk, fairness, accessibility, and the opportunity cost to our species of not taking a significant opportunity if and when it presents itself. Would we hobble ourselves even if the price of doing so were our own extinction?

Gene editing research scores low in American poll

STAT News, “reporting from the frontiers of health and medicine”, leads today on a survey it conducted in conjunction with Harvard’s Chan School of Public Health. It seems that the American public is split on whether public funding should be made available to support research into gene therapy. The balance of opinion moves strongly away from 50/50 towards no when the science involves unborn babies – even where such research aims to eliminate disease. Support for gene editing research dwindles still further where the research is engaged in what the article describes as “more frivolous” pursuits, such as working to improve a baby’s intelligence.
