The QinetiQ MAARS (Modular Advanced Armed Robotic System), an unmanned ground vehicle for reconnaissance, surveillance and target acquisition in battle, is currently under development.
World’s tech leaders urge UN to ban killer robots
Wilson Da Silva
An open letter from 116 tech leaders across 26 countries warns the United Nations against opening the ‘Pandora’s box’ of lethal robot weapons.
A key organiser of the letter, Toby Walsh, Scientia Professor of Artificial Intelligence at UNSW, released it at the opening of the International Joint Conference on Artificial Intelligence (IJCAI 2017) in Melbourne, the world’s pre-eminent gathering of top experts in artificial intelligence (AI) and robotics. Walsh is a member of the IJCAI 2017 conference committee.
The open letter is the first time that AI and robotics companies have taken a joint stance on the issue. Previously, only a single company, Canada’s Clearpath Robotics, had formally called for a ban on lethal autonomous weapons.
In December 2016, 123 member nations of the UN’s Review Conference of the Convention on Conventional Weapons unanimously agreed to begin formal discussions on autonomous weapons. Of these, 19 have already called for an outright ban.
Toby Walsh, Scientia Professor of Artificial Intelligence at UNSW, is one of the key organisers of the open letter. Photo: Grant Turner/Mediakoo
“Lethal autonomous weapons threaten to become the third revolution in warfare,” the letter states. “Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.
“These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close,” it states, concluding with an urgent plea for the UN “to find a way to protect us all from these dangers.”
The MQ-9 Reaper drone is one of the lethal weapons whose operations in battle could easily be automated.
Signatories of the 2017 letter include:
* Elon Musk, founder of Tesla, SpaceX and OpenAI (USA)
* Mustafa Suleyman, founder and Head of Applied AI at Google’s DeepMind (UK)
* Esben Østergaard, founder & CTO of Universal Robots (Denmark)
* Jérôme Monceaux, founder of Aldebaran Robotics, makers of the Nao and Pepper robots (France)
* Jürgen Schmidhuber, leading deep learning expert and founder of Nnaisense (Switzerland)
* Yoshua Bengio, leading deep learning expert and founder of Element AI (Canada)
Their companies employ tens of thousands of researchers, roboticists and engineers, are worth billions of dollars and cover the globe.
Walsh is one of the organisers of the 2017 letter, as well as of an earlier letter released in 2015 at the IJCAI conference in Buenos Aires, which warned of the dangers of autonomous weapons. The 2015 letter was signed by thousands of researchers in AI and robotics working in universities and research labs around the world, and was endorsed by British physicist Stephen Hawking, Apple co-founder Steve Wozniak and cognitive scientist Noam Chomsky, among others.
“Nearly every technology can be used for good and bad, and artificial intelligence is no different,” said Walsh. “It can help tackle many of the pressing problems facing society today: inequality and poverty, the challenges posed by climate change and the ongoing global financial crisis. However, the same technology can also be used in autonomous weapons to industrialise war.
“We need to make decisions today choosing which of these futures we want. I strongly support the call by many humanitarian and other organisations for a UN ban on such weapons, similar to bans on chemical and other weapons,” he added.
“Two years ago at this same conference, we released an open letter signed by thousands of researchers working in AI and robotics calling for such a ban. This helped push this issue up the agenda at the United Nations and begin formal talks. I am hopeful that this new letter, adding the support of the AI and robotics industry, will add urgency to the discussions at the UN that should have started today.”
Ryan Gariepy, founder & CTO of Clearpath Robotics, was the first to sign the open letter: “The number of prominent companies and individuals who have signed this letter reinforces our warning that this is not a hypothetical scenario, but a very real, very pressing concern which needs immediate action,” he said.
The Raytheon Phalanx Close-In Weapon System used by the Royal Australian Navy automatically identifies and destroys incoming missiles. While purely defensive, it’s an example of an autonomous weapon that could be adapted for lethal use.
“We should not lose sight of the fact that, unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability,” he added. “The development of lethal autonomous weapons systems is unwise, unethical and should be banned on an international scale.”
Yoshua Bengio, founder of Element AI and a leading ‘deep learning’ expert, said: “I signed the open letter because the use of AI in autonomous weapons hurts my sense of ethics, would be likely to lead to a very dangerous escalation, because it would hurt the further development of AI’s good applications, and because it is a matter that needs to be handled by the international community, similarly to what has been done in the past for some other morally wrong weapons (biological, chemical, nuclear).”
Stuart Russell, founder and Vice-President of Bayesian Logic, agreed: “Unless people want to see new weapons of mass destruction – in the form of vast swarms of lethal microdrones – spreading around the world, it’s imperative to step up and support the United Nations’ efforts to create a treaty banning lethal autonomous weapons. This is vital for national and international security.”
source: The University of New South Wales