Elon Musk warns of ‘civilisational risk’ posed by AI at historic gathering of tech giant chiefs

‘This is an emerging technology, there are important equities to balance here,’ Zuckerberg says

Vishwam Sankaran
Thursday 14 September 2023 00:39 EDT


Tesla titan and multi-billionaire Elon Musk has reportedly warned US senators at a private meeting that unregulated artificial intelligence technology poses a “civilisational risk” to society.

Senate majority leader Chuck Schumer convened a meeting of the most prominent tech executives in the US to help pass bipartisan legislation that encourages the rapid development of AI technology while also mitigating its biggest risks.

The closed-door meeting was attended by some of the tech industry’s biggest names, including Tesla and SpaceX boss Mr Musk, Meta’s Mark Zuckerberg, former Microsoft chief Bill Gates, Alphabet’s Sundar Pichai, as well as OpenAI founder Sam Altman.

As Mr Musk left the Capitol building after several hours of the meeting, he told reporters that “we have to be proactive rather than reactive” in regulating AI, because the consequences of getting it wrong are “severe”.

“The question is really one of civilizational risk. It’s not like … one group of humans versus another. It’s like, hey, this is something that’s potentially risky for all humans everywhere,” he said, according to NBC News.

Mr Musk also reportedly called for a government AI agency, similar to the Securities and Exchange Commission or the Federal Aviation Administration, to oversee developments in the sector and ensure safety.

Leaders in the tech industry also called for a balanced approach towards regulating AI.

In his prepared remarks, Mr Zuckerberg said the two defining issues for AI are “safety and access”, adding that the US Congress should “engage with AI to support innovation and safeguards”.

“New technology often brings new challenges, and it’s on companies to make sure we build and deploy products responsibly,” the Meta chief said.

“This is an emerging technology, there are important equities to balance here, and the government is ultimately responsible for that,” he added.

The Facebook founder called for policymakers, academics, civil society and industry to work together to minimise the potential risks of AI, but also to maximise its potential benefits.

Some of the measures he suggested for building safeguards into AI systems included “selecting the data to train with, extensively red-teaming internally and externally to identify and fix issues, fine-tuning the models for alignment, and partnering with safety-minded cloud providers to add additional filters to the systems we release”.

As lawmakers on Capitol Hill met with tech giant chiefs about potential AI regulations, companies including Microsoft, OpenAI, Meta, Alphabet, and Amazon were also being questioned about the conditions of the workers behind tools like ChatGPT, Bing, and Bard.

Lawmakers are reportedly probing the working conditions of data labellers, who are tasked by companies, often via outsourced firms, with labelling the data used to train AI and rating chatbot responses.

“Despite the essential nature of this work, millions of data workers around the world perform these stressful tasks under constant surveillance, with low wages and no benefits,” lawmakers, including Elizabeth Warren and Edward Markey, said in a letter to tech executives.

“These conditions not only harm the workers, they also risk the quality of the AI systems – potentially undermining accuracy, introducing bias, and jeopardizing data protection,” they said.
