Evidence is accumulating that, notwithstanding junk food, reality TV and celebrity culture, humanity is actually getting smarter. The Flynn Effect refers to the phenomenon named for New Zealand-based philosopher James Flynn, who has built a thoughtful and engaging career on his early observation that IQs rose worldwide over the course of the last century.
Flynn’s work distinguishes crystallised intelligence – what we learn over time, and how we apply that learning – from fluid intelligence, which involves abstract thinking and reasoning. Flynn identifies the latter kind of intelligence as the engine room of humankind’s increasing cognitive strength.
Those who recall the famous Atlantic article, “Is Google Making Us Stupid?”, still resonant seven years after it appeared, may smile at the irony. If our “fluid IQs” are indeed rising, it may be that we are learning to play to our strengths. The irony would be that this is happening just as Narrow AI turns us into stupider versions of machines whose data-manipulation and information-retrieval skills far exceed the crystallised cognitive capabilities of our attention-deficit-disordered brains.
Given Flynn’s inverse correlation between rising IQs and declining belief in God, the question to ask is: if AI evolves from what is programmed into Narrow AI towards the Super AI that will think for itself, will a defining characteristic of mature AI be its atheism?
Swiss researchers based in Geneva and Lausanne have successfully applied a “new computational method” to functional magnetic resonance imaging (fMRI), sharpening the generally fuzzy images to which neuroscience has been accustomed and enabling them to distinguish up to 13 separate, colour-coded neural networks operating at any given time within the human brain.
This is about more than just getting prettier pictures, as distinguishing separate neural networks should improve our understanding of their inter-relationships and of the circumstances in which brain disorders emerge. And we can note, in passing, the happy irony that while machine learning specialists have long been interested in how the human brain works, the inspiration can flow the other way too: neuroscientists using the power of computers to enhance their study of the brain.
A challenge emerges as more researchers use more powerful computers to develop deeper understandings of the workings of the most complex organ in the known universe: how do we assimilate and build upon this tsunami of data and accumulation of knowledge? Scientists at Stanford University, backed by the US National Science Foundation, have established an initiative called OpenfMRI to enable the sharing of knowledge via a computer database available to scientists everywhere.
After only a few months of compiling this library of blogs on matters of human and artificial intelligence, BAM has established its First Law of Smarts in the online world where these matters are discussed. Simply put, the amount of intelligence employed in any act of communication is in inverse proportion to the amount of clickbait on display. The irony is that such efforts are made to draw readers to a story, after which small triumph the primary impulse appears to be to distract them away.
This message slammed home with special force in an account of the continuing work of Nobel laureate neuroscientist Thomas Südhof in the online publication Scope, produced by Stanford Medicine. Neither the reader nor Dr Südhof himself appears, from this article or from its companion piece explaining the science behind the Nobel Prize, to be much interested in distraction. And after absorbing this summary of the biology underpinning the most marvellous engine of wisdom in the known universe, anyone would be aghast at how careless we are of the potential for human intelligence.
Our respect for neuroscience, not to mention our regard for anyone in possession of a human brain, would be enhanced by a deeper grasp of the knowledge that Dr Südhof has acquired over three decades of focused investigation in his labs in Texas and California. If the destiny of patient enquiry is wisdom, the fate of all distracted clickbait victims must be the way of all goldfish: stupefaction before the final flush.
A professor at the People’s Liberation Army National Defence University in Beijing wrote an article entitled “As possibility of third world war exists, China needs to be prepared”. Reaction from 20 American experts affiliated with the New America Foundation appeared online under the title “Here’s the Defining National Security Question of Our Time”. Setting aside this commentariat’s exclusive focus on American and Chinese sentiment, three key reflections emerge from the exchange:
- More than half of the writers make explicit reference to the cataclysmic potential of digital warfare. Whatever the entertainment or AI industries may see as the future of wars between robots and people, the warfare industry itself sees AI as merely another tool for pursuing the same old game;
- A global kinetic war of the century is much less likely than a series of regional and attritional wars of the decade, waged most often by proxies with a view to the systemic destabilisation of imagined enemies; and
- Implicit in humanity’s obsession with war is our continuing and irrational faith – in the face of all experience – that we can prosecute through chaos what was conceived in tranquillity.
A beacon amidst the angst is the hope expressed by one of the New Americans that we trade in the War on Terror for a War on Stupidity. If there were an analogue for what may be Artificial about Intelligence, it would surely lie in the cynicism with which otherwise smart people think they might preserve their island of privilege within a planet reduced to cinders. The artificially stupid are smart enough to know better, but cynical enough to think they can behave as if they didn’t know, or care.
In a fascinating article in the journal Nature, four specialists from the worlds of computer science and robotics address what the article’s sub-title misleadingly suggests are the “societal risks from intelligent machines”. As these contributors make clear, each from a distinct perspective, the risks stem much less “from intelligent machines” than from a society that is not ready for them.
In summary, the article concerns the need for more coherent communications and greater transparency in our consideration of the ethics of AI as it moves beyond the research lab and out into the world. Scientists, business people, politicians, and serious thinkers of all stripes need an open dialogue that is focused more on outcomes and the potential for AI to integrate helpfully with humanity, and less on protecting the status quo and the entrenched interests of commerce and power.
The people who know what they are talking about need to minimise the risk of the agenda being seized by those who don’t, however well-meaning those people may be. It will always be the curse of genius that it must spend its life running up and down rich men’s stairs, but the emerging world of intelligence and human/machine symbiosis must be inspired and directed by the champions of the science.
Amidst the babble of Rumsfeld’s famous rumination on known knowns and known unknowns, the missing permutation was the unknown knowns: the things that are known to those who know them, but unknown to people like Rumsfeld. It is vital that those engaged with the emerging AI industry master the tools of effective communication and make known what really matters to the rest of us.