Former OpenAI chief scientist launches own AI company

OpenAI co-founder Ilya Sutskever said Safe Superintelligence would focus on the safe development of AI.

Martyn Landi
Thursday 20 June 2024 04:16 EDT
Ilya Sutskever is launching Safe Superintelligence (Tim Goode/PA)


The former chief scientist and co-founder of OpenAI has announced the launch of his own artificial intelligence (AI) company, which he said would focus on safety.

Ilya Sutskever said he was launching Safe Superintelligence and that building safe AI was “our mission, our name, and our entire product roadmap”.

In a launch statement on the new company’s website, the firm said it would approach “safety and capabilities in tandem” as “technical problems to be solved” and to “advance capabilities as fast as possible while making sure our safety always remains ahead”.

Some critics have raised concerns that major tech and AI firms are too focused on reaping the commercial benefits of the emerging technology, and are neglecting safety principles in the process – an issue raised in recent months by several former OpenAI staff members when announcing they were leaving the company.

Elon Musk, a co-founder of OpenAI, has also accused the company of abandoning its original mission to develop open-source AI to focus on commercial gain.

In what appeared to be a direct response to those concerns, Safe Superintelligence’s launch statement said: “Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”

Mr Sutskever was involved in the high-profile attempt to oust Sam Altman as OpenAI chief executive last year, and was removed from the company’s board following Mr Altman’s swift return, before leaving the company in May this year.

He has been joined at Safe Superintelligence by former OpenAI researcher Daniel Levy and former Apple AI lead Daniel Gross – both are named as co-founders at the new firm, which has offices in California and Tel Aviv, Israel.

The trio said the company was “the world’s first straight-shot SSI (safe superintelligence) lab, with one goal and one product: a safe superintelligence”, calling it the “most important technical problem of our time”.
