Former OpenAI scientist who left under mysterious circumstances forms new company

Ilya Sutskever helped found the company that would make ChatGPT – but later left

Andrew Griffin
Thursday 20 June 2024 10:28 EDT


OpenAI’s former chief scientist and one of its co-founders, who left in mysterious circumstances, has launched a new project.

Ilya Sutskever helped create the company that would go on to build ChatGPT. But he left OpenAI earlier this year, after a rocky period that saw its board attempt to remove its chief executive, Sam Altman.

Reports have suggested that Mr Sutskever was one of many concerned that OpenAI was focusing too much on the commercial possibilities of its technology, and not enough on the safety issues that it had been created to address.

Now Mr Sutskever has launched a new organisation focused specifically on safety. It is called Safe Superintelligence, and he said that building safe AI was “our mission, our name, and our entire product roadmap”.

In a launch statement on the new company’s website, the firm said it would approach “safety and capabilities in tandem” as “technical problems to be solved”, and that it would “advance capabilities as fast as possible while making sure our safety always remains ahead”.

Some critics have raised concerns that major tech and AI firms are too focused on reaping the commercial benefits of the emerging technology, and are neglecting safety principles in the process, an issue raised in recent months by several former OpenAI staff members when announcing their departures from the company.

Elon Musk, a co-founder of OpenAI, has also accused the company of abandoning its original mission to develop open-source AI to focus on commercial gain.

In what appeared to be a direct response to those concerns, Safe Superintelligence’s launch statement said: “Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”

Mr Sutskever was involved in the high-profile attempt to oust Sam Altman as OpenAI chief executive last year. He was removed from the company’s board following Mr Altman’s swift return, and left the company in May this year.

He has been joined at Safe Superintelligence by former OpenAI researcher Daniel Levy and former Apple AI lead Daniel Gross; both are named as co-founders of the new firm, which has offices in California and Tel Aviv, Israel.

The trio said the company was “the world’s first straight-shot SSI (safe superintelligence) lab, with one goal and one product: a safe superintelligence”, calling it the “most important technical problem of our time”.

Additional reporting by agencies
