‘Human extinction’: OpenAI workers raise alarm about the risks of artificial intelligence
AI experts argued developers in field need strong worker protections to be able to warn public about nature of new technologies
A group of current and former employees at top Silicon Valley firms developing artificial intelligence warned in an open letter that without additional safeguards, AI could pose a threat of “human extinction.”
The letter, signed by 13 mostly former employees of firms like OpenAI, Anthropic, and Google’s DeepMind, argues top AI researchers need more protections to air criticisms of new developments and seek input from the public and policymakers over the direction of AI innovation.
“We believe in the potential of AI technology to deliver unprecedented benefits to humanity,” the Tuesday letter reads. “We also understand the serious risks posed by these technologies. These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction.”
The letter argues that the companies developing powerful AI technologies, including artificial general intelligence (AGI), a theorised AI system as smart as or smarter than humans, “have strong financial incentives to avoid effective oversight” from both their own employees and the public at large.
Neel Nanda of DeepMind is the only signatory currently affiliated with one of the companies named in the letter.
“This was NOT because I currently have anything I want to warn about at my current or former employers, or specific critiques of their attitudes towards whistleblowers,” he wrote on X. “But I believe AGI will be incredibly consequential and, as all labs acknowledge, could pose an existential threat. Any lab seeking to make AGI must prove itself worthy of public trust, and employees having a robust and protected right to whistleblow is a key first step.”
The message calls for companies to refrain from punishing or silencing current or former employees who speak out about the risks of AI, a likely reference to a scandal this month at OpenAI, where departing employees were told to choose between losing vested equity or signing a non-disparagement agreement about the company that never expired. (OpenAI later lifted the requirement, saying, “It doesn’t reflect our values or the company we want to be.”)
“We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk,” an OpenAI spokesperson told The Independent. “We agree that rigorous debate is crucial given the significance of this technology and we’ll continue to engage with governments, civil society and other communities around the world.”
The company added that it takes a number of steps to ensure its employees are heard and its products are developed responsibly, including an anonymous hotline for workers and a Safety and Security Committee scrutinising the company’s developments. OpenAI also pointed to its support for increased AI regulation and voluntary commitments around AI safety.
The open letter follows a season of controversies for top AI firms, especially OpenAI, at the same time as they introduce AI assistants and aids with powerful new capabilities like the ability to have live voice-to-voice conversations with humans and react to visual information like a video feed or written math problem.
Actress Scarlett Johansson, who once voiced an AI assistant in the film Her, accused OpenAI of using her voice as a model for one of its products, despite her having explicitly rejected the proposal. Though OpenAI CEO Sam Altman tweeted the word “her” at the launch of the voice assistant, the company has since denied using Johansson’s voice as a model.
Also in May, OpenAI disbanded the team it had formed specifically to research the long-term risks of AI, less than a year after its creation.
Numerous top researchers at the firm have left in recent months, including co-founder Ilya Sutskever.
It’s the latest shake-up since the company suffered a high-profile board battle last year, in which Altman was temporarily removed and then reinstated less than a week later.
The Independent has contacted Anthropic and Google for comment.