A professor at the People’s Liberation Army National Defence University in Beijing wrote an article entitled “As possibility of third world war exists, China needs to be prepared”. Reaction from 20 American experts affiliated with the New America Foundation appeared online under the title “Here’s the Defining National Security Question of Our Time”. Setting aside this commentariat’s exclusive focus on American and Chinese sentiment, three key reflections emerge from the exchange:
- More than half of the writers make explicit reference to the cataclysmic potential in digital warfare. Whatever the entertainment or AI industries may see as the future for wars between robots and people, the warfare industry itself sees AI as merely another tool in pursuing the same old game;
- A global kinetic war of the century is much less likely than a series of regional and attritional wars of the decade, waged most often by proxies with a view to the systemic de-stabilisation of imagined enemies; and
- Implicit in humanity’s obsession with war is our continuing and irrational faith – in the face of all experience – that we can prosecute through chaos what was conceived in tranquillity.
A beacon amidst the angst is the hope expressed by one of the New Americans that we trade in the War on Terror for a War on Stupidity. If there were an analogue for what may be Artificial about Intelligence, it would surely lie in the cynicism with which otherwise smart people think they might preserve their island of privilege within a planet reduced to cinders. The artificially stupid are smart enough to know better, but cynical enough to behave as if they didn’t know, or care.
Lively Stuff from Planet BAM!
- Useful primer on AI, its background and future, with a focus more on utility than threat.
Presented like click-bait, but particularly good on AI investment growth over five years
- Another good introduction: this one to Transhumanism, and rich in hyperlinks
and prompting key ethical questions on setting and crossing the bar
- Philosophical review of the risks and challenges implicit in regulating AI systems
and a link that mentions “differential tort liability”: i.e., who pays for consequent pain?