Elon Musk: AI is a ‘fundamental existential risk for human civilisation’ and creators must slow down
'I think we should be really concerned'
Elon Musk has branded artificial intelligence “a fundamental existential risk for human civilisation”.
He says we mustn’t wait for a disaster to happen before deciding to regulate it, and that AI is, in his eyes, the scariest problem we now face.
He also wants the companies working on AI to slow down to ensure they don’t unintentionally build something unsafe.
The CEO of Tesla and SpaceX was speaking on stage at the National Governors Association meeting at the weekend.
“I have exposure to the most cutting-edge AI and I think people should be really concerned about it,” he said. “I keep sounding the alarm bell but until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal.
“I think we should be really concerned about AI and I think we should… AI’s a rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late.
“Normally the way regulations are set up is that a whole bunch of bad things happen, there’s a public outcry, and then after many years, a regulatory agency is set up to regulate that industry. There’s a bunch of opposition from companies who don’t like being told what to do by regulators. It takes forever.
“That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilisation. AI is a fundamental risk to the existence of human civilisation, in a way that car accidents, airplane crashes, faulty drugs or bad food were not. They were harmful to a set of individuals in society, but they were not harmful to society as a whole.
“AI is a fundamental existential risk for human civilisation, and I don’t think people fully appreciate that.”
However, he recognises that this will be easier said than done, since companies don’t like being regulated.
Also, any organisation working on AI will be “crushed” by competing companies if they don’t work as quickly as possible, he said. It would be up to a regulator to control all of them.
“When it’s cool and regulators are convinced that it’s safe to proceed, then you can go. But otherwise, slow down.”
He added: “I think we’d better get on [introducing regulation] with AI, pronto. There’ll certainly be a lot of job disruption because what’s going to happen is robots will be able to do everything better than us. I’m including all of us.”
Earlier this year, Mr Musk said that humans will have to merge with machines to avoid becoming irrelevant.
Ray Kurzweil, a futurist and Google’s director of engineering, believes that computers will have “human-level intelligence” by 2029.
However, Kurzweil believes machines will improve humans, making us funnier, smarter and even sexier.