Elon Musk says artificial intelligence is 'more dangerous than nukes'

Billionaire PayPal co-founder says it's "increasingly probable" that humanity is simply the preliminary step in creating a "digital superintelligence"

James Vincent
Tuesday 05 August 2014 08:33 EDT
The ATLAS robot competing in the DARPA Robotics Challenge (Darpa)


Elon Musk, the US billionaire behind projects such as SpaceX and Tesla, has warned that artificial intelligence is “potentially more dangerous than nukes”.

Musk added that humanity should be “super careful” with such technology. He made the comments while recommending Superintelligence, a book by Nick Bostrom that explores the future of humanity when machines surpass us in intelligence.

Bostrom, a Swedish philosopher at the University of Oxford and director of its Future of Humanity Institute, says that most scientists agree the creation of a human-level AI is inevitable, reporting that 90 per cent of top researchers expect we’ll achieve this goal between 2075 and 2090.

However, he argues, the really important issue is what we do with this first superintelligent creation, and how we build it. Whichever AI first surpasses human-level intelligence will have the advantage over pretty much everything and everyone else on Earth.

Bostrom says that if we accidentally create an AI that is anything less than well-inclined towards humans (comparisons for a well-inclined AI have been made with the whimsical but ultimately benevolent computer ‘Minds’ in Iain M. Banks’ Culture novels), then the results could be disastrous.

But if we do create a superintelligence that is obedient, or endowed with a sense of ethics like Isaac Asimov’s Three Laws of Robotics, then the rewards could also be staggering, accelerating human progress at unimaginable rates. After all, how can we even begin to imagine what a post-human intelligence is capable of when we are still resolutely human?

Musk, however, is apparently inclined towards gloomier predictions, with a subsequent tweet imagining humanity as the “biological boot loader” (the preliminary bit of software that loads an operating system) for a “digital superintelligence”. Thanks, Elon, we’ve seen The Matrix too.
