
Killer robots are not science fiction – they have been part of military defence for a while

They are just one of many fears about developing technology, but such bots have been around for much longer than you might think, writes Mike Ryder

Tuesday 09 April 2019 15:36 EDT
Killer robots may seem new, but they've been around for a long time (Getty/iStock)


Humans will always make the final decision on whether armed robots can shoot, according to a statement by the US Department of Defense. The clarification comes amid fears about a new advanced targeting system, known as Atlas, that will use artificial intelligence in combat vehicles to target and execute threats. While the public may feel uneasy about so-called “killer robots”, the concept is nothing new – machine-gun-wielding “Swords” robots were deployed in Iraq as early as 2007.

Our relationship with military robots goes back even further than that. This is because when people say “robot”, they can mean any technology with some form of autonomous element that allows it to perform a task without the need for direct human intervention.

These technologies have existed for a very long time. During the Second World War, the proximity fuse was developed to explode artillery shells at a predetermined distance from their target. This made the shells far more effective than they would otherwise have been by augmenting human decision making and, in some cases, taking the human out of the loop completely.

So the question is not so much whether we should use autonomous weapon systems in battle – we already use them, and they take many forms. Rather, we should focus on how we use them, why we use them, and what form – if any – human intervention should take.

The birth of cybernetics

My research explores the philosophy of human-machine relations, with a particular focus on military ethics and the way we distinguish between humans and machines. During the Second World War, mathematician Norbert Wiener laid the groundwork for cybernetics – the study of the interface between humans, animals and machines – in his work on the control of anti-aircraft fire. By studying the deviations between an aircraft’s predicted motion and its actual motion, Wiener and his colleague Julian Bigelow came up with the concept of the “feedback loop”, where deviations could be fed back into the system in order to correct further predictions.
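To give a flavour of that idea, here is a minimal, hypothetical sketch in Python of a feedback-loop tracker: it predicts where a target will be next, compares that with where the target actually turns up, and feeds the deviation back to correct both its position and velocity estimates. The function name, the gains and the simulated flight path are illustrative assumptions, not details taken from Wiener and Bigelow's anti-aircraft work.

```python
# Hypothetical sketch of a feedback loop: predict, measure, feed the error back.

def track(observations, dt=1.0, gain_pos=0.5, gain_vel=0.3):
    """Constant-gain tracker: each observed deviation corrects the position and velocity estimates."""
    est_pos, est_vel = observations[0], 0.0
    predictions = []
    for observed in observations[1:]:
        predicted = est_pos + est_vel * dt             # predict the next position
        deviation = observed - predicted               # compare with the actual position
        est_pos = predicted + gain_pos * deviation     # feed the error back...
        est_vel = est_vel + gain_vel * deviation / dt  # ...to correct future predictions
        predictions.append(predicted)
    return predictions

if __name__ == "__main__":
    # Simulated aircraft positions moving at a roughly constant speed with small wobbles.
    path = [0.0, 2.1, 3.9, 6.2, 8.0, 10.1, 11.9, 14.2]
    for step, p in enumerate(track(path), start=1):
        print(f"step {step}: predicted {p:.2f}, actual {path[step]:.2f}")
```

Each pass through the loop shrinks the gap between prediction and reality, which is the essence of the feedback principle Wiener described.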

Wiener’s theory therefore went far beyond mere augmentation, for cybernetic technology could be used to pre-empt human decisions – removing the fallible human from the loop, in order to make better, quicker decisions and make weapons systems more effective.

In the years since the Second World War, the computer has emerged to sit alongside cybernetic theory to form a central pillar of military thinking, from the laser-guided “smart bombs” of the Vietnam era to cruise missiles and Reaper drones.

It is no longer enough merely to augment the human warrior, as it was in the early days. The next phase is to remove the human completely – “maximising” military outcomes while minimising the political cost associated with the loss of allied lives. This has led to the widespread use of military drones by the US and its allies. While these missions are highly controversial, in political terms they have proved far preferable to the public outcry caused by military deaths.

Military drones are widely used by the US and its allies (Getty Images)

The human machine

One of the most contentious issues relating to drone warfare is the role of the drone pilot or “operator”. Like all personnel, these operators are bound by their employers to “do a good job”. However, the terms of success are far from clear. As philosopher and cultural critic Laurie Calhoun observes:

“The business of UCAV [drone] operators is to kill.”

In this way, their task is not so much to make a human decision, but rather to do the job that they are employed to do. If the computer tells them to kill, is there really any reason why they shouldn’t?

A similar argument can be made with respect to the modern soldier. From GPS navigation to video uplinks, soldiers carry numerous devices that tie them into a vast network that monitors and controls them at every turn.

This leads to an ethical conundrum. If the purpose of the soldier is to follow orders to the letter – with cameras used to ensure compliance – then why do we bother with human soldiers at all? After all, machines are far more efficient than human beings and don’t suffer from fatigue and stress in the same way as a human does. If soldiers are expected to behave in a programmatic, robotic fashion anyway, then what’s the point in shedding unnecessary allied blood?

The answer here is that the human serves as an alibi or form of “ethical cover” for what is, in reality, an almost wholly mechanical, robotic act. Just as the drone operator’s job is to oversee the computer-controlled drone, so the human’s role in the Department of Defense’s new Atlas system is merely to act as ethical cover in case things go wrong.

While Predator and Reaper drones may stand at the forefront of the public imagination about military autonomy and “killer robots”, these innovations are in themselves nothing new. They are merely the latest in a long line of developments that go back many decades.

While it may comfort some readers to imagine that machine autonomy will always be subordinate to human decision making, this really does miss the point. Autonomous systems have long been embedded in the military and we should prepare ourselves for the consequences.


Mike Ryder is an associate lecturer in philosophy at Lancaster University. This article originally appeared in The Conversation
