Killer robots: No one liable if future machines decide to kill, says Human Rights Watch

Computer programmers, manufacturers and military personnel would all escape liability for unlawful deaths caused by fully autonomous weapons

Chris Green
Thursday 09 April 2015 13:50 EDT
An unmanned Taranis drone (nicknamed Raptor) in front of a piloted fighter jet (BAE Systems)


If a soldier pulls a trigger on the battlefield with lethal consequences, then they bear the ultimate responsibility for their actions. But what if the same act were carried out by a robot, with no human involvement?

Under current laws, computer programmers, manufacturers and military personnel would all escape liability for unlawful deaths and injuries caused by fully autonomous weapons, or “killer robots”, a major report has warned.

Machines with the ability to take decisions to kill are no longer the preserve of science fiction films, it argues, pointing out that the technology which could give rise to such weapons is “already in use or development” in countries including the UK and US.

The report, by Human Rights Watch (HRW) and Harvard Law School’s International Human Rights Clinic, comes ahead of a United Nations meeting next week at which the role of autonomous weapons in warfare will be discussed.

While military commanders could be found guilty if they intentionally instructed a killer robot to commit a crime, they would be unlikely to face prosecution if they were able to argue that it had acted of its own volition, the report concluded.

The researchers added that although victims or their families could pursue civil lawsuits against the deadly machine’s manufacturers or operators, this would only entitle them to compensation and would be “no substitute for criminal accountability”.

Bonnie Docherty, a lecturer at the Harvard Law School clinic and the report’s lead author, said: “No accountability means no deterrence of future crimes, no retribution for victims, no social condemnation of the responsible party. The many obstacles to justice for potential victims show why we urgently need to ban fully autonomous weapons.”

Such machines would move beyond existing remote-controlled drones as they would be able to select and engage targets without a human being “in the loop” – raising a variety of serious ethical and legal concerns, the report said.

“Existing mechanisms for legal accountability are ill suited and inadequate to address the unlawful harms fully autonomous weapons might cause,” the authors wrote. “These weapons have the potential to commit criminal acts – unlawful acts that would constitute a crime if done with intent – for which no one could be held responsible.”

The report added that even if a robot’s commander knew it was about to commit a potentially unlawful act, they may be unable to stop it if communications had broken down, if the robot acted too fast, or if reprogramming was only possible by specialists. “In addition, ‘punishing’ the robot after the fact would not make sense,” the authors added.

Campaigners would like the use of such robots to be pre-emptively banned through a new international law. This would have to be written into the UN’s Convention on Certain Conventional Weapons, which in 1995 pre-emptively outlawed blinding laser weapons before they could be deployed.

In the UK, ministers have said there are no plans for the military to create weapons capable of autonomous killing. But David Mepham, the UK director of HRW, said the next Government “should not hesitate to back a pre-emptive ban on their development, production and use” by other countries around the world.

Professor Noel Sharkey, a leading roboticist at Sheffield University and co-founder of the International Committee on Robot Arms Control, said that if a machine committed a war crime its commander would have “lots of places to hide” to evade justice, such as blaming the software or the manufacturing process.

“If you wanted to use an autonomous robot to commit a war crime, the first thing you’d do is blow it up so nobody could do forensics on it,” he said.

He added that in the US, the latest prototypes involved “swarms” of robotic gun-boats which could be deployed to engage an enemy, communicating with each other to select targets. Although a human would be able to deploy the robots and call them back, they “wouldn’t be controlling the individual kill decisions”, he said.

If a law were passed by the UN, it would have to be very carefully defined so that defensive weapons systems which automatically detect incoming missiles and mortar fire – but do not threaten human life – could still be used, he added.

Thomas Nash, the director of UK-based weapons monitoring organisation Article 36, said the possible creation of killer robots was a “genuine concern” and that unmanned drones with the ability to select their own targets were already in operation. “It’s not a big step from there to devolve the capability for those systems to release a missile based on a pre-programmed algorithm,” he added.

Weapons of the future?

Taranis

A prototype stealth combat drone which is said to represent “the pinnacle of UK engineering and aeronautical design”. It is able to conduct surveillance, mark targets and carry out air strikes; the RAF and Ministry of Defence stress that there is always a human in control – but Taranis is also capable of “full autonomy”.

SGR-1

Standing for “Sentry Guard Robot”, this fixed-position weapon can track and engage human targets with a mounted grenade launcher or machine gun – once permission is granted by a soldier back at base. It is currently deployed in South Korea on the border with North Korea.

X-47B

Developed by US defence firm Northrop Grumman, this unmanned combat aircraft can take off from and land on an aircraft carrier without human intervention. Although it has a full-sized weapons bay, the prototypes tested so far have been unarmed. The current aim is for the drone to be “battlefield ready” by the 2020s.
