What the Hell is Going on At OpenAI: High Profile Exits, NDAs and More



OpenAI, a leading developer of advanced AI technologies, has been thrust into the spotlight following a series of events that have raised questions about its leadership, ethical governance, and the future of AI safety.


The Spark That Lit the Fuse

The drama unfolded in late 2023 with the sudden firing of CEO Sam Altman, a move that sent shockwaves through the tech community. The reasons behind the dismissal remain unclear, but there are a few main theories:

Lack of Transparency: The official explanation from the board was that Altman wasn’t fully forthcoming with them, hindering their ability to oversee the company. Some speculate this could be related to specific projects or disagreements about the pace of development.

AI Ethics Clash: Altman is known for his optimistic views on Artificial General Intelligence (AGI). The board’s statement emphasized their commitment to safe and beneficial AI, which might suggest a clash with Altman’s approach.

Something Else Entirely: The lack of concrete information has led to speculation about other potential causes, but none have gained significant traction.

Credit: @sama on X

A String of High-Profile Departures

After Altman’s departure, Mira Murati stepped in as interim CEO. But the upheaval didn’t stop there; co-founder Greg Brockman, along with other senior researchers, handed in their resignations.

The organization also faced layoffs affecting various team members, including Alex Cohen, who was responsible for board presentations.

Several key researchers, including Jan Leike, Daniel Kokotajlo, and William Saunders, also chose to resign.

The reasons behind Altman’s removal remain a mystery. Speculation points to potential disagreements with co-founder Ilya Sutskever, particularly around fundraising efforts and the strategic direction of AI development.

The aftermath was dramatic:

Employee Revolt: Over 700 employees signed a letter threatening to resign if Altman wasn’t reinstated.

Microsoft Steps In: As a major investor, Microsoft offered to hire many of the departing OpenAI employees under a new division led by Altman himself.

After a couple of days, Altman returned to OpenAI.


What Happened After Altman Rejoined OpenAI

Focus on Transparency: There was a push for increased transparency within the organization. This likely stemmed from the initial issues that led to Altman’s firing, where the board felt they weren’t kept fully informed.

New Board Structure: A completely new board was formed to oversee OpenAI. This aimed to create a fresh start and potentially address any lingering disagreements about the company’s direction.

Continued Research and Development: OpenAI continued its work on advanced AI projects, likely with a renewed emphasis on responsible development alongside innovation.

Several High-Profile Figures Left OpenAI in May 2024

Ilya Sutskever: Co-founder and longtime chief scientist at OpenAI, Sutskever announced his departure in mid-May 2024.

Jan Leike: Another prominent figure, Leike co-led the “superalignment” team focused on mitigating risks from advanced AI. He left around the same time as Sutskever.

OpenAI Transparency Test

OpenAI uses non-disclosure agreements (NDAs) with employees. These agreements prevent employees from disclosing confidential information after they leave the company. The strictness of OpenAI’s NDAs has been criticized for several reasons:

Lack of Transparency: Critics argue that strict NDAs hinder transparency about OpenAI’s research and decision-making processes. This can raise concerns about potential safety risks or ethical considerations with their AI projects.

Stifling Discussion: Some argue NDAs prevent former employees from discussing their work and experiences, which can limit valuable contributions to the broader field of AI research and development.

Employee Rights: There are concerns that NDAs might be overly broad and restrict former employees from speaking critically about the company or its leadership, potentially infringing on their right to free speech.

OpenAI’s contradiction is evident. The company initially aimed for open-source development, meaning its code and research would be publicly available. The shift toward strict NDAs, however, represents a more closed-door policy.

OpenAI’s guarded stance could have several possible motivations:

  • Protecting Intellectual Property (IP): OpenAI might be concerned about protecting valuable research or inventions from competitors.
  • Controlling the Narrative: By restricting what former employees can say, OpenAI can control how their work is perceived publicly.
  • Maintaining a Competitive Advantage: Keeping research secretive could give OpenAI an edge in the race to develop advanced AI.


Ethical and Governance Challenges

There are growing concerns about AI safety at OpenAI.

De-prioritization of safety has emerged as a top concern among employees. There is a growing perception that OpenAI has shifted its focus toward product development, sidelining the critical resources needed for safety research.

The disbanding of the safety team is particularly troubling. Disagreements with leadership, especially with Sam Altman, led to the dissolution of OpenAI’s safety team.

A loss of trust in leadership has also become apparent. Safety-oriented employees have expressed a lack of confidence in Sam Altman’s leadership, resulting in a wave of resignations and a broader breakdown of trust within the organization.



The road ahead for OpenAI is uncertain, but one thing is clear: the decisions made today will have far-reaching implications for the future of AI and its role in society. The company’s experiences serve as a cautionary tale about the complexities of leading an organization at the frontier of AI research. 
