
What’s the Future of Killer Robots?

Defence industry and military meet at the Bydgoszcz Air Fair: attendees at the 2017 fair, held at the local air force base in Bydgoszcz, Poland, on May 26, 2017, where members of the industry and the military meet and hardware is on display for the general public. (Photo by Jaap Arriens/NurPhoto via Getty Images)

What was in the letter released by 116 entrepreneurs: Late Sunday, 116 entrepreneurs, including Elon Musk, released a letter to the United Nations warning of the dangerous “Pandora’s box” presented by weapons that make their own decisions about when to kill. Publications including The Guardian and The Washington Post ran headlines saying Musk and his cosigners had called for a “ban” on “killer robots.”
Those headlines were misleading. The letter doesn’t explicitly call for a ban, although one of the organizers has suggested it does. Rather, it offers technical advice to a UN committee on autonomous weapons formed in December. The group’s warning that autonomous machines “can be weapons of terror” makes sense. But trying to ban them outright is probably a waste of time.

What’s the difference between weapons and autonomous weapons: Weapon systems that make their own decisions are a very different, and much broader, category than conventional arms under direct human control. The line between weapons controlled by humans and those that fire autonomously is blurry, and many nations—including the US—have begun the process of crossing it. Moreover, technologies such as robotic aircraft and ground vehicles have proved so useful that armed forces may find giving them more independence—including to kill—irresistible.

What do these new weapons technologies mean for military power: A recent report on artificial intelligence and war commissioned by the Office of the Director of National Intelligence concluded that the technology is set to massively magnify military power.
The US Department of Defense has a policy to keep a “human in the loop” when deploying lethal force. Pentagon spokesperson Roger Cabiness said that the US has declined to endorse a ban on autonomous weapons, noting that the department’s Law of War Manual specifies that autonomy can help forces meet their legal and ethical obligations. “For example, commanders can use precision-guided weapon systems with homing functions to reduce the risk of civilian casualties,” said Cabiness. In 2015, the UK government responded to calls for a ban on autonomous weapons by saying there was no need for one, and that existing international law was sufficient.

What autonomous weapons are already on the market: You don’t have to look far to find weapons that already make their own decisions to some degree. One is the Aegis ship-based missile and aircraft-defense system used by the US Navy, which is capable of engaging approaching planes or missiles without human intervention, according to a report from the Center for a New American Security (CNAS).
Other examples include the Harpy, a drone developed in Israel that patrols an area searching for radar signals. If it detects one, it automatically dive-bombs the signal’s source. Its manufacturer, Israel Aerospace Industries, markets the Harpy as a “‘Fire and Forget’ autonomous weapon.”
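Why that line is blurry is easier to see in code than in treaty language. Below is a minimal, purely hypothetical Python sketch of the loiter-detect-engage loop described above. Every name in it (detect_radar_signal, Contact, patrol) is invented for illustration and has nothing to do with the Harpy’s actual software.

```python
import random
from dataclasses import dataclass
from typing import Optional


@dataclass
class Contact:
    """A detected radar emission (hypothetical stand-in type)."""
    bearing: float   # degrees from north
    strength: float  # arbitrary signal-strength units


def detect_radar_signal() -> Optional[Contact]:
    """Toy sensor model: 'detects' an emitter about 10% of the time."""
    if random.random() < 0.1:
        return Contact(bearing=random.uniform(0.0, 360.0),
                       strength=random.random())
    return None


def patrol(human_in_the_loop: bool, max_steps: int = 50) -> None:
    """Loiter, and on detection either ask a person or engage automatically."""
    for step in range(max_steps):
        contact = detect_radar_signal()
        if contact is None:
            continue  # nothing found; keep loitering
        if human_in_the_loop:
            # Supervised mode: a person must authorize each engagement.
            answer = input(f"Engage emitter at {contact.bearing:.0f} deg? [y/N] ")
            if answer.strip().lower() != "y":
                continue  # operator declined; resume patrol
        # Autonomous mode reaches this line with no human decision at all.
        print(f"Step {step}: diving on emitter at bearing {contact.bearing:.0f}")
        return
    print("Patrol ended without an engagement.")


if __name__ == "__main__":
    patrol(human_in_the_loop=True)
```

Note how little separates the two modes in this toy version: a single boolean decides whether a person authorizes the strike or the software does, which is one reason campaigners worry the line is so easy to cross.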

Elon Musk, co-founder and chief executive officer of Tesla Motors Inc. (right), and Steve Bannon, chief strategist for US President Donald Trump, attend a Strategic and Policy Forum meeting with Trump in the State Dining Room of the White House in Washington, DC, on Friday, February 3, 2017. (Photo by Andrew Harrer/Bloomberg via Getty Images)
Musk signed an earlier letter in 2015, alongside thousands of AI experts in academia and industry, that called for a ban on the offensive use of autonomous weapons. Like Sunday’s letter, it was supported and published by the Future of Life Institute, an organization that ponders the long-term effects of AI and other technologies and to which Musk has donated $10 million.


Why we should consider something other than a total ban: Rebecca Crootof, a researcher at Yale Law School, says people concerned about autonomous weapons systems should consider more constructive alternatives to campaigning for a total ban.

“That time and energy would be much better spent developing regulations.” – Rebecca Crootof, researcher, Yale Law School

International laws such as the Geneva Conventions, which restrict what human soldiers may do on the battlefield, could be adapted to govern robot soldiers, for example. Other regulations short of a ban could try to clear up the murky question of who is held legally accountable when a piece of software makes a bad decision, such as killing civilians.

Learn More @ Wired.com
