AI is highly likely to destroy humans, Elon Musk warns
'Should that be controlled by a few people at Google with no oversight?'
Elon Musk believes it’s highly likely that artificial intelligence (AI) will be a threat to people.
The Tesla founder is concerned that a handful of major companies will end up in control of AI systems with “extreme” levels of power.
In Mr Musk’s opinion, there’s a very small chance that humans will be safe from such systems.
“Maybe there's a five to 10 percent chance of success [of making AI safe],” he told Neuralink staff after showing them a documentary on AI, reports Rolling Stone.
He also told them that he invested in DeepMind in order to keep an eye on Google’s development of AI.
Mr Musk has called for the companies working on AI to slow down to ensure they don’t unintentionally build something unsafe.
“Between Facebook, Google and Amazon – and arguably Apple, but they seem to care about privacy – they have more information about you than you can remember,” he told Rolling Stone.
“There's a lot of risk in concentration of power. So if AGI [artificial general intelligence] represents an extreme level of power, should that be controlled by a few people at Google with no oversight?”
Though he didn't expand on what sort of threat it could pose, he's previously said that AI is “a fundamental risk” to the existence of human civilisation.
He believes its development needs to be regulated “proactively”.
“I have exposure to the most cutting-edge AI and I think people should be really concerned about it,” he said in July.
“I keep sounding the alarm bell but until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal.”