The OpenAI Crisis is Over – But The AI Ethical Issues Remain

After four and a half crazy days, the OpenAI crisis is over. Sam Altman and Greg Brockman are back at the company, which avoided being swallowed up by Microsoft and its offer to hire all of OpenAI's employees. The board has both new and departing members, and it appears Altman and Ilya Sutskever have patched up their differences.

The story is filled with twists and turns that, as Derek Thompson said in The Atlantic, may make you want to pop a couple of Dramamines. The most bizarre part was the complete about-face by Sutskever. As Thompson described it,

On X (formerly Twitter), Sutskever posted an apology to the entire company, writing, “I deeply regret my participation in the board’s actions.” Altman replied with three red hearts. One imagines Brutus, halfway through the stabbing of Caesar, pulling out the knife, offering Caesar some gauze, lightly stabbing him again, and then finally breaking down in apologetic tears and demanding that imperial doctors suture the stomach wound. (Soon after, in post-op, Caesar dispatches a courier to send Brutus a brief message inked on papyrus: “<3.”)

Here’s a quick account of the events, along with some thoughts on where the OpenAI crisis – and artificial intelligence – is headed next. The story is on pause for now, but we don’t think this is the end.

A Timeline of The OpenAI Crisis

The period from Friday to early Wednesday morning was a whirlwind of events and emotions, with outcomes ranging from Sam Altman returning as CEO to his never setting foot in OpenAI’s headquarters again.

Friday

  • Sam Altman’s Unexpected Departure: Sam Altman’s abrupt removal as CEO shocked the tech world, and the board’s refusal to give a reason for the dismissal caused investor unrest. (Initial tweet)
  • Mira Murati Steps In: After Altman’s exit, CTO Mira Murati became interim CEO. Co-founder Greg Brockman resigned, and Ilya Sutskever, co-founder and chief scientist, appeared to have led the coup. (Documentary link)
  • Backing for Altman: Tech leaders, investors, and OpenAI staff strongly supported Altman, with Microsoft spearheading reinstatement efforts. Altman expressed affection for his OpenAI tenure on Twitter.
  • Greg Brockman’s Response: Brockman outlined the rapidly unfolding events on Twitter.

Saturday

  • Board Considers Altman’s Return: As the OpenAI crisis unfolded, a Verge report indicated the board might reinstate Altman as CEO.
  • Ultimatum from Investors and Staff: Investors and OpenAI staff demanded board resignations and Altman’s reinstatement, threatening funding and a staff walkout.
  • Board Hesitates: Despite the pressure, the board hesitated to reverse its decision, letting a 5:00 PM deadline pass and maintaining silence until Sunday.
  • Widespread Support for Altman: Altman received overwhelming support from OpenAI employees following his dismissal and tweeted out his love for OpenAI.

Sunday

  • Leadership Clash: Things seemed to worsen as the turmoil was linked to divergent views within OpenAI’s leadership, especially between Sam Altman and Ilya Sutskever.
  • Altman Takes a Dig at OpenAI: Altman returned to OpenAI headquarters and posted an image of himself wearing a guest pass – saying this would be the first and last time he wore one.
  • Things Fall Apart: OpenAI announced that Emmett Shear, the former Twitch CEO, had been appointed as the new CEO rather than reinstating Altman. Shear seemed in sync with some board members, having embraced a form of AI doomerism.
  • Microsoft’s Strategic Move: Satya Nadella announced that Altman and Brockman would join Microsoft to lead a new advanced AI research lab, and Microsoft then offered positions to all OpenAI staff – a move that would have gutted the company.

Monday

  • Ilya’s Regret: Ilya Sutskever expressed remorse on Twitter, vowing to reunite OpenAI: “I love everything we’ve built together, and I will do everything I can to reunite the company.”
  • Microsoft Shares Rise: News of Altman and Brockman’s Microsoft move pushed Microsoft stock up by 2.5% on Monday morning. By taking on OpenAI’s entire staff, Microsoft would effectively acquire the company at a massive discount.
  • CEO Emmett Shear’s Thoughts: Shear, who had kept a low profile up to that point, shared his perspective on Twitter.
  • Mass Resignation Threat: Over 700 OpenAI employees threatened to resign in an open letter, demanding board resignations.
  • Ilya’s Bizarre Move: Ilya Sutskever, who had already tweeted his regret at the firing of Altman, signed the letter, essentially calling for his own departure from the board along with the other members.
  • Altman’s Potential Return: The Verge reported Ilya Sutskever’s support for Altman’s return, pending agreement from the other board members. Negotiations between Altman and the board continued.

Tuesday

  • Musk’s Controversial Leak: Elon Musk shared a link to a GitHub page with an unverified letter alleging misuse of power by Altman and Brockman. (The allegations have been neither confirmed nor denied.)
  • Employee Pressure Increases: The OpenAI crisis peaked with the news that 738 OpenAI employees had signed the letter threatening resignation. A rumor spread that some staff had been coerced into signing it.
  • Helen Toner’s Involvement: It emerged that board member Helen Toner had recently written an academic paper critical of OpenAI and supportive of competitor Anthropic. Altman and Toner had reportedly clashed in the months leading up to his dismissal, and there were rumors that she had colluded with people from Anthropic and was willing to see OpenAI destroyed.
  • Emmett Shear’s Ultimatum: New CEO Emmett Shear threatened to resign within 48 hours unless the board provided substantial evidence justifying Altman’s dismissal.
  • Failed Merger Attempt: A report by The Information revealed a failed merger attempt between OpenAI and Anthropic, a rival started by former OpenAI staff.
  • Legal Scrutiny: The board’s blog post announcing Altman’s dismissal led to law enforcement inquiries, which the board could not answer with specifics.

Wednesday

  • Altman’s Return: OpenAI announced Altman’s reinstatement at 1:03 AM Wednesday morning. A new board was created featuring Bret Taylor, Larry Summers, and Adam D’Angelo. Concern was expressed about D’Angelo remaining on the board, since he appears to have pushed for Altman’s termination.
  • Sam and Satya’s Tweets: Altman and Microsoft CEO Satya Nadella shared their thoughts on Twitter.
  • Emmett Shear’s Reflection: Shear commented on his tumultuous experience of the previous 72 hours.

Some Thoughts on Where We Go From Here

Sam Altman (L), US entrepreneur, investor, programmer, and founder and CEO of artificial intelligence company OpenAI, and the company’s co-founder and chief scientist Ilya Sutskever, speak together at Tel Aviv University in Tel Aviv on June 5, 2023. Jack Guez | AFP | Getty Images

After this bizarre reversal, OpenAI is functioning once again. Here are some developments to keep an eye on.

A New Board for OpenAI

New board members will bring experience and, hopefully, stability to OpenAI.

  • Bret Taylor, the new board chair, is known for co-founding Quip and for his significant role at Salesforce, giving him a diverse background in the tech industry. In February, he started an artificial intelligence venture with a former Google executive.
  • Larry Summers, the former Treasury Secretary in the Clinton administration and President of Harvard University, has praised OpenAI’s ChatGPT as a groundbreaking technological advancement akin to the printing press and electricity. His experience in government will be beneficial for OpenAI amidst increasing regulatory scrutiny.
  • Adam D’Angelo, currently the CEO of Quora and a pioneer in AI chat technology with his development of Poe, is the only member of the previous board to still hold a seat. He is probably the most significant question mark in the OpenAI crisis since he may have had a role in firing Altman.
  • Additional board members will be appointed, as investors – especially Microsoft and Thrive Capital – want representation.

OpenAI’s Structure Does Not Change

The OpenAI crisis appears to be a direct result of how the AI startup is structured: a nonprofit organization controlling a for-profit unit. None of that changes with the new board and Sam Altman’s return. The board’s mission clearly states:

Our primary fiduciary duty is to humanity.

As Generative AI continues to develop at a near-frenetic pace, there may be further tensions between the board and leadership. But the new board members are more decidedly in the “Tech is Good” camp, which will give Altman the space he wanted to release new products.

The Effective Altruism Crowd is Gone – The Issues Remain

The two board members intent on pushing Sam Altman out the door, Helen Toner and Tasha McCauley, were both deeply enmeshed in the ‘effective altruism’ movement. Toner, a researcher and director at Georgetown University’s Center for Security and Emerging Technology, previously worked as an AI policy advisor at Open Philanthropy. In an October paper, she raised concerns about OpenAI’s launch of ChatGPT, suggesting it might have pressured other tech companies to hasten the development of their own chatbots, potentially bypassing safety and ethical concerns. The criticism led to arguments between her and Sam Altman, and it appears she was willing to destroy OpenAI if that was what was required to fulfill the board’s mission.

Of course, the reality is that if OpenAI does not continue to release products, other AI startups will, making the organization irrelevant to the discussions we need to have over ethical guidelines and potential regulatory actions.

As Vox noted, this is the fundamental dilemma that resides at the heart of the OpenAI crisis.

The charter also states, ‘We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions.’ But it also paradoxically states, ‘To be effective at addressing AGI’s impact on society, OpenAI must be on the cutting edge of AI capabilities.’

So the paradox is that OpenAI fears a runaway race toward AGI, yet the only way to influence the outcome is to be at the front of the pack.

Higher Education’s Role and the OpenAI Crisis

You could also see Helen Toner’s work as a last-gasp effort by higher education to shape the development of AI. But it should come as no surprise that education no longer has a spokesperson at OpenAI. As history shows, education organizations often give rise to new technologies through their research but seldom influence how those technologies are implemented and used. Oxford, Cambridge, and Harvard had little influence on how the railroad and the telegraph transformed 19th-century society, and the same will be true of contemporary higher education and the development of generative AI. The only power that remains is to shape how students will use it.

The OpenAI Crisis and the Ethical Dilemma of AI

Are the efforts to develop AI responsibly really as fragile as they now appear? The past four days were not very encouraging.

As Alex Kantrowitz put it in CMSWire,

OpenAI’s board was supposed to save us from an AI apocalypse. Then, it couldn’t think three steps ahead in a boardroom coup. Much of the blame rests with the specific individuals. But more broadly, it’s hard to imagine anyone will have confidence in our ability to stop harmful AI should we develop it. (And what if the board’s concerns in this area were legitimate?) The future of the AI safety field is in flux.

Not a very reassuring thought as the OpenAI crisis wraps up (just in time for the Thanksgiving holiday for those of us in the States). The tension between the rapid development of AI and the ethical concerns it raises will surface again and again – at OpenAI and elsewhere. It will only be resolved when we work out how to balance the power of an utterly remarkable innovation against the potential dangers of its misuse once it is in everyone’s hands.