Artificially intelligent bots could threaten the world and more needs to be done, experts warn

Threats include possibilities that are hard even to imagine, like the ability to generate entirely believable, but completely fake, videos

Andrew Griffin
Tuesday 20 February 2018 20:23 EST


The future of the world is under threat from artificial intelligence and more must be done to keep people safe, experts have urged.

A new report compiled by 26 of the world’s leading experts paints a terrifying picture of the world in the next 10 years. Physical attacks as well as those on our digital worlds and political system could drastically undermine the safety of humanity, it warns, and people must work together now if they want to keep the world safe.

The use of artificial intelligence is likely to empower all kinds of actors – including rogue states, criminals and terrorists – the report warns. If policymakers, researchers and others do not work together to counter that threat, it could permeate some of the most fundamental parts of our lives.


That could range from drone attacks to bots being used to manipulate the news agenda and elections, warns the report, titled ‘The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation’. Many of the attacks could come in forms that are hard even to imagine, such as speech synthesis and video creation tools that could allow people to produce entirely believable, but completely fake, videos.

To fight against that, the 26 experts suggest vastly expanding the number of people consulted on how to ward off the threat from AI. Those consulted should include people who have ensured that other dual-use technologies – those that can be used for both military and civilian work, such as computer security – are not used to cause such damage.

The report was compiled by experts from many of the world’s leading institutions and artificial intelligence research organisations, who say it is the first time the intersection of artificial intelligence and its malicious use has been examined in such a way. It includes input from representatives of OpenAI, the research group co-founded by Elon Musk; Oxford University’s Future of Humanity Institute; and Cambridge University’s Centre for the Study of Existential Risk.

Dr Seán Ó hÉigeartaigh, executive director of Cambridge University’s Centre for the Study of Existential Risk and one of the co-authors, said: “Artificial intelligence is a game changer and this report has imagined what the world could look like in the next five to ten years.

“We live in a world that could become fraught with day-to-day hazards from the misuse of AI and we need to take ownership of the problems – because the risks are real. There are choices that we need to make now, and our report is a call-to-action for governments, institutions and individuals across the globe.


“For many decades hype outstripped fact in terms of AI and machine learning. No longer. This report looks at the practices that just don’t work anymore – and suggests broad approaches that might help: for example, how to design software and hardware to make it less hackable – and what type of laws and international regulations might work in tandem with this.”

Miles Brundage, research fellow at Oxford University’s Future of Humanity Institute, said: “AI will alter the landscape of risk for citizens, organisations and states – whether it’s criminals training machines to hack or ‘phish’ at human levels of performance or privacy-eliminating surveillance, profiling and repression – the full range of impacts on security is vast.

“It is often the case that AI systems don’t merely reach human levels of performance but significantly surpass them. It is troubling, but necessary, to consider the implications of superhuman hacking, surveillance, persuasion, and physical target identification, as well as AI capabilities that are subhuman but nevertheless much more scalable than human labour.”
