Why we must take the risk from robot weapons seriously

May 10, 2018

By Bharat Dogra

One of the greatest risks of the next few decades is likely to come from robotic weapons, AI (artificial intelligence) weapons, or lethal autonomous weapons (LAWs).

In August 2017 as many as 116 specialists from 26 countries, including some of the world’s leading robotics and artificial intelligence pioneers, called on the United Nations to ban the development and use of killer robots.

They warned that this arms race threatens to usher in the “third revolution” in warfare, after gunpowder and nuclear arms. They wrote: “Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at time scales faster than humans can comprehend.

“These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.” They cautioned: “We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”

Ryan Gariepy, the founder of Clearpath Robotics, has said: “Unlike other potential manifestations of AI, which still remain in the realm of science fiction, autonomous weapon systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability.”

Out of the loop

In January 2016, The Economist noted in its special report titled “The Future of War”: “At least the world knows what it is like to live in the shadow of nuclear weapons. There are much bigger question marks over how the rapid advances in artificial intelligence and deep learning will affect the way wars are fought, and perhaps even the way people think of war. The big concern is that these technologies may create autonomous weapon systems that can make choices about killing humans independently of those who created or deployed them.”

This special report distinguished between three types of AI weapons or robot weapons: “in the loop”, with a human constantly monitoring the operation and remaining in charge of critical decisions; “on the loop”, with a human supervising machines that can intervene at any stage of the mission; or “out of the loop”, with the machine carrying out the mission without any human intervention once launched.

Fully autonomous robot weapons – the third category – are clearly the most dangerous.

A letter warning against the coming race in these weapons was signed in 2015 by more than 1,000 AI experts. The international Campaign to Stop Killer Robots works steadily toward this and related objectives. Tech pioneer Elon Musk has pinpointed competition among nations for AI superiority as the “most likely cause of World War 3”.

A recent widely discussed review of new weapons notes that “the biggest change in the way wars are fought will come from deploying lots of robots simultaneously”.

Paul Scharre, an expert on autonomous weapons, has written that “collectively, swarms of robotic systems have the potential for even more dramatic, disruptive change to military operations”.

One possibility he mentions is that tiny 3D-printed drones could be formed into smart clouds that can permeate a building or be air-dropped over a wide area to look for hidden enemy forces.

Several countries are surging ahead with rapid advances in robot weapons. In 2014 the Pentagon announced its “Third Offset Strategy” with its special emphasis on robotics, autonomous systems and Big Data. This is supposed to help the US maintain its military superiority.

In July 2017 China presented its “Next-Generation Artificial-Intelligence Development Plan”, which gives a crucial role to AI as the transformative technology in civil as well as military areas, with emphasis on “military-civil fusion”.

As the arms race in AI weapons escalates, there will be a constant temptation to actually use them to test their capabilities. Peter Singer, an expert on future warfare at the think-tank New America, has said that very powerful forces propel the AI arms race: geopolitical compulsions, scientific advances and profit-seeking high-technology companies.

The Campaign to Stop Killer Robots wants a legally binding international treaty banning LAWs. This is an issue that deserves the growing support of all who believe firmly in peace and disarmament.

BHARAT DOGRA is a freelance journalist who has been involved with several social movements and initiatives.
