Plan for anything in war, get chaos

A key issue discussed at the April conference in Geneva on Lethal Autonomous Weapons Systems (LAWS – irony alert) was the definitional question of “autonomy”. Those in favour of maximum autonomy for AI-driven weapons systems are the same sort of people who have always been beguiled by the boasts of technology, talking of precision bombing and surgical strikes as if success achieved under clinical lab conditions could be replicated within the chaotic terror of a warzone.

Consider the absurdity of bringing the phenomena of war and Artificial Intelligence into the same sentence. After all, if we worry about the potential of robots that could kill, why would we design robots to do precisely that and then turn long-faced and serious over whether this killing should be inhibited in the least by human oversight? It’s not even as if such oversight would guarantee much of anything, since it is humans who will have created that warzone in the first place and then put the robots into it. History offers some hope, however.

An instructive precedent for the benefits of a little human oversight came in 1962, during the Cuban missile crisis, when Vasily Arkhipov, a Soviet naval officer aboard the submarine B-59, stood up to his colleagues and the demands of operational protocol and refused to authorise the launch of the nuclear torpedo that could well have provoked a nuclear exchange and possibly World War III. Had B-59 been operating autonomously, as we all too complacently define the term now, it is easy to see how catastrophically different the intervening half century might have been.
