
Google bans development of artificial intelligence that could be used for weapons, CEO says

The technology company has responded to criticism and employee resignations over a contract with the US Defence Department, which critics argued pushed Google closer to the 'business of war'

Drew Harwell
Friday 08 June 2018 14:05 EDT
Google CEO Sundar Pichai announced new ethical guidelines at the company's I/O conference in California (AP)


Google is banning the development of artificial intelligence (AI) software that can be used in weapons, chief executive Sundar Pichai said, setting strict new ethical guidelines for how the tech giant should conduct business in an age of increasingly powerful AI.

The new rules could set the tone for the deployment of AI far beyond Google, as rivals in Silicon Valley and around the world compete for supremacy in self-driving cars, automated assistants, robotics, military AI and other industries.

"We recognise that such powerful technology raises equally powerful questions about its use," Mr Mr Pichai wrote in a blog post. "As a leader in AI, we feel a special responsibility to get this right."

The ethical principles are a response to a firestorm of employee resignations and public criticism over a Google contract with the US Defence Department for software that could help analyse drone video, which critics argued had nudged the company one step closer to the "business of war." Google executives said last week that they would not renew the deal for the military's AI endeavour, known as Project Maven, when it expires next year.

Google, Mr Pichai said, will not pursue the development of AI when it could be used to break international law, cause overall harm or surveil people in violation of "internationally accepted norms of human rights."

The company will, however, continue to work with governments and the military in cybersecurity, training, veterans' health care, search and rescue, and military recruitment, Mr Pichai said. The Web giant - famous for its past "Don't be evil" mantra - is in the running for two multibillion-dollar US Defence Department contracts for office and cloud services.

Google's $800bn parent company, Alphabet, is considered one of the world's leading authorities on AI and employs some of the field's top talent, including at its London-based subsidiary DeepMind.

But the company is locked in fierce competition for researchers, engineers and technologies with Chinese AI firms and domestic competitors such as Facebook and Amazon, which could contend for the kinds of lucrative contracts Google says it will give up.


The principles offer little detail about how the company will enforce its rules. But Mr Pichai outlined seven core tenets for its AI applications, including that they be socially beneficial, be built and tested for safety, and avoid creating or reinforcing unfair bias. The company, Mr Pichai said, would also evaluate its work in AI by examining how closely its technology could be "adaptable to a harmful use."

AI is a critical piece of Google's namesake Web tools, including image search and recognition and automatic language translation. But it is also key to the company's future ambitions, many of which involve ethical minefields of their own, including its self-driving Waymo division and Google Duplex, a system that can make dinner reservations by mimicking a human voice over the phone.

But Google's new limits appear to have done little to dissuade the Pentagon's technology researchers and engineers, who say other contractors will still compete to help develop technologies for the military and national defence. Peter Highnam, deputy director of the Defence Advanced Research Projects Agency - the Pentagon agency that did not handle Project Maven but is credited with helping invent the Internet - said there are "hundreds if not thousands of schools and companies that bid aggressively" on DARPA's research programmes in technologies such as AI.

"Our goal, our objective, is to create and prevent technological surprise. So we're looking at what's possible," John Everett, a deputy director of DARPA's Information Innovation Office, said in an interview on Wednesday. "Any organisation is free to participate in this ongoing exploration or not."

The Washington Post
