The Longer Read

Doomers vs tech boomers: inside OpenAI’s bizarre boardroom battle with the man ‘who can see the future’

In seven days the AI powerhouse burned through three CEOs, two boards and a boss who was fired and then reinstated when the staff threatened to walk out. So what is it about Sam Altman – the ‘Albert Einstein of tech’ – that triggered such a chaotic chain reaction? Chris Stokel-Walker investigates...

Wednesday 22 November 2023 14:44 EST
Survival instinct: OpenAI CEO Sam Altman is described as possessing such a persuasive personality that ‘you could parachute him into an island full of cannibals and come back in five years and he’d be the king’ (Getty/iStock)

It may go down as one of the most bizarre boardroom takeovers in modern business history – and perhaps the most consequential for us all, given the company to which it happened.

OpenAI, the Californian firm spearheading the generative AI revolution since releasing ChatGPT less than a year ago, has undergone a frenetic five days in which it cycled through three CEOs and two executive boards – only to end up with the same man in charge as when the cycle began: Sam Altman, who only last Friday was sensationally fired because, the board said, he was “not consistently candid” in his business dealings.

The lack of specificity – exactly what that lack of candour related to has still not been fully explained – helped set a series of dominoes falling, alongside frantic social media speculation over more lurid allegations about Altman’s life, fuelled in part by his estranged sister, Annie, now a sex worker.

“In five days OpenAI went from a not consistently candid CEO to its CTO as CEO, to another CEO who tweeted about preferring Nazis over rogue AI, back to the not consistently candid CEO,” says Noah Giansiracusa, a professor at Bentley University tracking the developments in the world of AI.

The sequence of events around the abortive takeover has been chaotic, but left standing at the end of it is Altman, the totemic 38-year-old CEO and co-founder of OpenAI, the $80bn company leading the generative AI revolution. Altman has been a mainstay since helping to found the company in 2015, steering its development from a small non-profit with a largely academic staff focused on artificial intelligence research to one that can attract upwards of $13bn of investment from tech giants like Microsoft.

Launched on 30 November last year, ChatGPT gained 1 million users in five days and hit paydirt (AFP/Getty)

A dealmaker throughout his career, Altman first made waves in tech in 2005 after he dropped out of Stanford University to found a location-based social media company called Loopt. Although Loopt gained funding from the prestigious Silicon Valley start-up accelerator Y Combinator, it wouldn’t prove a huge success. It did, however, give him the skills needed to succeed in business, and plenty of contacts, including fashion designer Diane von Fürstenberg, who has compared him to Albert Einstein for his ability to foresee the future. “He is the major connector between the future and the past,” she told one journalist.

Altman began working for Y Combinator, the organisation that had financially supported his start-up, in 2011, rising to become president of the firm by 2014. For four years, from 2015 to 2019, Altman juggled that role with one as the co-founder of OpenAI, which he eventually became full-time CEO of in 2019.

Founded in 2015 in response to the worry that Google was about to monopolise the fast-developing AI market, OpenAI was initially a non-profit funded by Elon Musk. It didn’t take long, however, for red flags to emerge over the direction of this revolutionary technology, and Musk pulled the plug on the cash in 2018 over concerns that Altman was prioritising pacey AI development over people.

It was shocking to all that this type of coup would take place, led by Ilya [Sutskever] who most of us also respect

Indira Negi, tech investor

Alternative funders, including Microsoft, were brought on board by Altman, who championed a new vision: a capped-profit model. It meant investors could earn capped returns on their investment, with a board attached to the non-profit entity checking that the profit-making arm was developing AI responsibly. That funders were willing to back Altman and his company reflected a magnetic, affable personality built up over his years funding other firms. Altman was described by the founder of Y Combinator as possessing such a persuasive personality that “you could parachute him into an island full of cannibals and come back in five years and he’d be the king”.

OpenAI unveiled its big idea – ChatGPT – to the world on 30 November 2022. Internally, there were worries it wouldn’t be a success (with employees taking bets about how many people would use the tool, the highest guess being 100,000). It ended up gaining one million users in five days and hit paydirt.

“OpenAI was ushering in the next revolution, they were executing perfectly, and Sam was a beloved leader,” says Indira Negi, an investor at the Bill & Melinda Gates Foundation, the philanthropic organisation set up by former Microsoft CEO Bill Gates.

But then, almost a year to the day, Altman was fired on 17 November. “It was shocking to all that this type of coup would take place, led by Ilya [Sutskever] who most of us also respect,” Negi explained when it took place.

Negi said that Altman’s replacement would “slow down progress quite a bit, but maybe there was a good reason to slow it down and we will find out in the coming days.”

Slowing down the pace of development appears to have been the goal of the OpenAI board’s shock intervention when it chose to oust Altman as CEO. While it’s still not fully clear, nearly a week on, why the board decided to intervene in such a way, it’s widely believed to stem from a disagreement between two of the most powerful men within OpenAI – Altman and his chief scientist, Sutskever – and the different views of AI development they represent.

Smooth operator: Altman chats with UK prime minister Rishi Sunak during the AI Safety Summit at Bletchley Park on 2 November (Getty)

The two men come from wildly different backgrounds: while Altman was a smooth operator, used to hobnobbing with financiers when raising funding for his own start-up and later corralling investors to support other Silicon Valley start-ups, the more dour, Soviet-born Sutskever was working away on tricky artificial intelligence research in university laboratories.

Sutskever studied under famed researcher and “godfather of AI” Geoffrey Hinton, who spun out an AI research company from the University of Toronto that was acquired by Google. When Altman began scouting around for talent to launch OpenAI in 2015, Sutskever was integral to bringing many leading academic researchers into the company.

Earlier this year, Sutskever, who also holds a seat on OpenAI’s board, was put to work on what became a vital team within the company: heading its “superalignment” group, designed to act as a watchdog ensuring AI doesn’t spool off into a self-controlling system that could enslave and harm humanity. (The threat of so-called artificial general intelligence, or AGI, becoming a Frankenstein’s monster that harms its creator is a niche belief, though strongly held within that niche of the AI community, which gravitates around a movement called Effective Altruism.)

It was the logical conclusion for Sutskever to head the team charged with oversight of OpenAI’s development, given his increasingly cautious view about the potential risks of AI. In a documentary that followed him for a number of years, Sutskever said it was inevitable to “see dramatically more intelligent systems and I think it’s highly likely that those systems will have [a] completely astronomical impact on society.” He added that programming beneficent beliefs into the first systems to reach AGI would be crucial. “If this is not done, then the nature of evolution, of natural selection, favour those systems that prioritise their own survival above all else,” he explained.

OpenAI has been encouraged to optimise models for Microsoft and grow its valuation to receive more funding in pursuit of that mission

Brendan Burke, senior analyst at PitchBook

Altman, for his part, shares some of those doomerist worries. A known “prepper”, his mother has said that her son has long been worried about existential risk. The man himself admits it, too. “I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defence Force, and a big patch of land in Big Sur I can fly to,” he told The New Yorker in 2016. Unlike many tech bros, Altman has denied having a bunker to which he plans to escape in the event of a major incident – though he did say he has “structures” which could serve the same purpose.

That fear may have been quelled by his time turning OpenAI into a profitable, all-powerful company – and could be behind the schism between the two men that caused the past week’s chaos.

It seems possible that Sutskever decided that OpenAI under Altman was ignoring the need to encode AI with safety checks to prevent such perceived concerns, and took action, triggering the board – whose constitution dictates it is responsible for ensuring OpenAI develops AGI “that is safe and benefits all of humanity” – to act.

Sceptics of the concept of killer AI suggest that this has been an overreaction caused by Eeyoreish worriers believing their own hype. “Doomers, funded by Effective Altruism, have become media heroes, policy lobbyists, and now destroyers of a leading AI company,” says Nirit Weiss-Blatt, author of The Techlash and Crisis Communication.

In part, the connection with Microsoft – which has invested more than $10bn into OpenAI – may have pushed tensions within OpenAI to breaking point, believes Brendan Burke, senior analyst for emerging technology research at PitchBook. “OpenAI’s relationship with Microsoft caused both philosophical and practical tension with the non-profit’s stated goals to advance AGI for social benefit and protect AI safety,” he says. “OpenAI has been encouraged to optimise models for Microsoft and grow its valuation to receive more funding in pursuit of that mission.”

Tech tensions: it seems possible that Ilya Sutskever (right) decided that OpenAI under Altman was ignoring the need to encode AI with safety checks (AFP/Getty)

However, the surprisingly strong reaction within OpenAI’s ranks to Altman’s firing highlights that, despite any perceived issues, he has a hold on the company and its workers. This ability to win over colleagues and inspire loyalty isn’t always beneficent, however, according to those who have previously worked for him. One former OpenAI employee, Geoffrey Irving, has said that Altman was nice but “deceptive” and “manipulative”, and lied to him on several occasions. One tech industry insider who interviewed Altman many years ago, and asked not to be named because it would damage their likelihood of getting a subsequent interview, said that, while he was pleasant, they felt he was being insincere in their conversation.

Nevertheless, Altman’s sway seems to have been significant – and enough to convince OpenAI’s board to take a 180-degree turn on their decision to fire him and bring him back as CEO late on 21 November.

After being fired and replaced by a temporary CEO, Emmett Shear, on Sunday, Altman was offered a job by Microsoft, the company that reportedly owns a 49 per cent stake in OpenAI, alongside his lieutenant, Greg Brockman, who resigned when he learned that Altman was sacked. Both would be given a budget to create an AI lab of their own within the company.

A big secret is that you can bend the world to your will a surprising percentage of the time – most people don’t even try

Sam Altman

As the week began, OpenAI staff were in total revolt. They declined to attend a meeting at company headquarters to welcome Shear, the new CEO, and responded to a message announcing his installation on an internal chat system with huge numbers of middle finger emojis. In total, 96 per cent of OpenAI employees signed an open letter asking the board to quit, otherwise they would. The letter claimed that Microsoft had a standing offer to any OpenAI workers who wanted to jump ship.

That eventually compelled the board to change tack. Having been unable to come to terms with Altman over a return on Sunday, by Tuesday it was willing to accede to some of his demands, including an overhaul of the board.

Almost all board members would step down, bar one: Adam D’Angelo, CEO of question-answering website Quora. Two new members would join the board, which would continue to act under its guiding principles, first established when the company was a non-profit. Up to nine others would be added in the coming weeks and months. Altman would return as CEO, and Brockman as president of the company, though not to the board. An internal investigation would be launched into the alleged conduct that caused the board to act to remove Altman in the first place.

It is a victory for Altman, who appears to have joined a group in the tech world previously reserved for names like Gates and Apple’s Steve Jobs: leaders who have an outsized impact on the direction of their company and foster huge, messianic loyalty among staff. Altman, who is openly gay and in a relationship with Australian software engineer Oliver Mulherin, is reportedly worth around $500m. That pales in comparison with the value of his companies, in part because of the non-profit and capped-profit models under which they operate.

Brain power: OpenAI’s ChatGPT language model offers users a way to engage with advanced AI in a conversational way (Getty/iStock)

Beyond Altman winning the war to retain control of his company, others also benefit, says Negi. “This is the best possible outcome to restore OpenAI where they can execute again and all the companies that depend on them can function effectively, including Microsoft,” she says. “In fact, especially Microsoft.”

Supremely sure of his mission, Altman boasted in a January 2019 blog post headlined “How to Be Successful”: “A big secret is that you can bend the world to your will a surprising percentage of the time – most people don’t even try.” The last few days may be good for Altman, and good for OpenAI’s 700-odd staff, whose jobs are still secure. But whether it’s good for the rest of us is still up for debate. With OpenAI the dominant company in the AI space, what Altman says is increasingly likely to shape all of our futures.

“The turmoil at OpenAI clearly demonstrates the known instabilities in AI governance models as well as the current outsized role of personal power dynamics,” says Andrew Hundt, a computing innovation fellow at Carnegie Mellon University, who has been watching – like all of us – with a hand over one eye as events have unfolded.

He says: “We urgently need substantive and innovative – not captured – government regulatory oversight to ensure both AI tech [companies] and the organisations behind them operate in a way that is safe, effective, and just.” Because one attempt to rein in Altman over perceived risks, whether real or not, has failed, and it means that the bar to taking action in the future just got a lot higher.
