Google workers resign in protest as AI experts call for end to autonomous weapons project

'We believe that Google should not be in the business of war'

Anthony Cuthbertson
Monday 21 May 2018 15:23 EDT


Artificial intelligence researchers have called on Google to abandon a project developing AI technology for the military, warning that autonomous weapons directly contradict the firm’s famous ‘Don’t Be Evil’ motto.

The experts join more than 3,100 of Google’s own employees, who signed an open letter last month protesting the company’s involvement in a controversial Pentagon program called Project Maven.

The partnership between the technology giant and the US military involves using customised AI surveillance software to analyse drone footage in order to better recognise target objects, for example by distinguishing between a person on the ground and a vehicle.

Around a dozen employees have reportedly resigned in protest at Google’s refusal to cut ties with the US military, each citing ethical concerns to Gizmodo. Google did not respond to a request for comment from The Independent.

In their letter last month to Google CEO Sundar Pichai, the employees wrote: "We believe that Google should not be in the business of war... We cannot outsource the moral responsibility of our technologies to third parties."

The researchers warn that the military could ultimately remove human oversight from drone strikes entirely, if Google’s technology proves effective.

“As military commanders come to see the object recognition algorithms as reliable, it will be tempting to attenuate or even remove human review and oversight for these systems,” the letter states.

“If ethical action on the part of tech companies requires consideration of who might benefit from a technology and who might be harmed, then we can say with certainty that no topic deserves more sober reflection - no technology has higher stakes - than algorithms meant to target and kill at a distance and without public accountability.”

Other fears detailed in the letter include the possibility of Google integrating the personal data of its users with military surveillance data for the purpose of targeted killing.

The use of such data would violate the public trust that is fundamental to the operation of Google’s business and would put the lives and human rights of its users in jeopardy, according to the researchers.

"The responsibilities of global companies like Google must be commensurate with the transnational makeup of their users," the letter states.

"While Google regularly decides the future of technology without democratic public engagement, its entry into military technologies casts the problems of private control of information infrastructure into high relief."
