Musk vs. OpenAI: Inside the Federal Trial That Is Putting AI Safety, Corporate Governance, and Sam Altman's Leadership on Trial
"Culture of Deceit": Former Board Member Delivers Blistering Testimony in Musk vs. OpenAI Trial
A former OpenAI safety researcher says the company drifted from its founding mission. A former board member says Sam Altman misled the people tasked with overseeing him. And a federal court in Oakland is now deciding whether Elon Musk was right all along.
The Trial That Could Reshape the Future of AI Governance:
Few legal battles in the history of technology carry as much consequence as the one unfolding right now in a federal courthouse in Oakland, California. Elon Musk's lawsuit against OpenAI, the company he co-founded and later abandoned, has grown into something far more significant than courtroom drama: a full-blown examination of whether one of the world's most powerful AI companies betrayed the principles it was built on.
At the heart of Musk's case is a foundational question that the entire AI industry is quietly terrified to answer in public: when a research organisation with a stated mission of benefiting humanity transforms into a for-profit enterprise worth hundreds of billions of dollars, does the original mission survive the transformation? Or does the pursuit of profit quietly hollow it out?
This week, the trial heard from two witnesses whose testimony brought that question into sharp focus. Rosie Campbell, a former OpenAI safety researcher, described a company that she says shifted from research-led to product-led — with safety considerations increasingly subordinated to commercial imperatives.
And Tasha McCauley, a former board member, painted a picture of a CEO who misled the very people charged with holding him accountable. Both testimonies land at a pivotal moment — not just for OpenAI, but for how society chooses to govern advanced artificial intelligence.
The Safety Researcher Who Watched OpenAI Change — Rosie Campbell's Testimony:
Rosie Campbell joined OpenAI's AGI Readiness team in 2021, arriving at a company that she describes as deeply focused on research and openly engaged with the existential questions surrounding artificial general intelligence. By 2024, when her team was disbanded, she says she was working somewhere quite different.
"When I joined, it was very research-focused and common for people to talk about AGI and safety issues. Over time it became more like a product-focused organization."
— Rosie Campbell, former OpenAI AGI Readiness team member.
Campbell's testimony is significant not just for what it says about OpenAI's culture, but because it comes from someone who was inside the organisation's safety infrastructure at a critical moment in its evolution.
Her team was disbanded in the same period that OpenAI's Superalignment team, a group formed specifically to work on the long-term safety challenges of superintelligent AI, was also shut down. The simultaneous dissolution of two safety-focused teams is a data point that Musk's legal team has been keen to amplify.
Under cross-examination, Campbell conceded the uncomfortable reality that building AGI requires enormous capital — and that without significant commercial revenue, OpenAI's safety ambitions would be impossible to fund. But she was unequivocal on the core principle: building a superintelligent AI system without the right safety processes in place would be a betrayal of the mission she signed up to serve.
The most concrete example Campbell offered centred on a specific incident: Microsoft's deployment of a GPT-4-based model through its Bing search engine in India, which went live before the model had been evaluated by OpenAI's Deployment Safety Board. Campbell was careful to note that the model itself did not represent a catastrophic risk — but the precedent it set troubled her deeply.
"We need to set strong precedents as the technology gets more powerful. We want to have good safety processes in place we know are being followed reliably."
— Rosie Campbell, testifying in federal court.
That same GPT-4 deployment incident in India was not merely a safety concern — it was one of the red flags that contributed to OpenAI's non-profit board making the extraordinary decision to briefly fire CEO Sam Altman in November 2023. That episode, and the events that followed it, became the second major thread of testimony this week.
The Board Member Who Fired Sam Altman — Tasha McCauley's Account:
Tasha McCauley served on OpenAI's non-profit board during one of the most turbulent periods in the company's history, and her testimony this week offered the most detailed public account yet of why the board took the unprecedented step of removing Altman — and why it ultimately failed to make it stick.
McCauley described a pattern of behaviour by Altman that she says fundamentally undermined the board's ability to perform its oversight function. The specific incidents she cited are striking: Altman allegedly lied to a fellow board member about another member's intentions regarding a third member's position.
He failed to inform the board about the decision to launch ChatGPT publicly — a product that would go on to become one of the most consequential technology releases in history. And he was repeatedly accused of failing to disclose potential conflicts of interest.
"We are a non-profit board and our mandate was to be able to oversee the for-profit underneath us. Our primary way to do that was being called into question. We did not have a high degree of confidence at all to trust that the information being conveyed to us allowed us to make decisions in an informed way."
— Tasha McCauley, former OpenAI board member.
The board's attempt to remove Altman collapsed under circumstances that McCauley describes with evident frustration. The firing coincided with a tender offer to OpenAI employees, and when the company's staff rallied behind Altman and Microsoft intervened to restore him, the board's resolve fractured. The members who had voted to remove Altman ultimately stepped down themselves — leaving the CEO they had tried to fire more entrenched than ever.
McCauley's testimony on the structural failure of OpenAI's governance model goes directly to the core of Musk's legal argument: that the conversion of OpenAI from a non-profit research organisation into one of the most valuable private companies in the world represented a fundamental breach of the implicit agreement made with its original founders — including Musk himself.
Musk's Legal Argument — Why He Says OpenAI Broke Its Founding Promise:
Elon Musk's lawsuit is built on the proposition that OpenAI's founding was not merely a business transaction but a covenant: an agreement that the organisation would pursue artificial general intelligence for the benefit of humanity, not for the benefit of its shareholders. By transforming into a for-profit company without honouring that founding commitment, Musk's team argues, OpenAI broke its promise to the people who made it possible.

David Schizer, a former Dean of Columbia Law School serving as an expert witness for Musk's team, framed the argument in precise terms. He pointed not just to the question of whether OpenAI has commercialised too aggressively, but to the process failures that allowed it to do so — including the bypassing of safety review procedures that the company publicly committed to following.
"OpenAI has emphasized that a key part of its mission is safety and they are going to prioritize safety over profits. Part of that is taking safety rules seriously — if something needs to be subject to safety review, it needs to happen. What matters is the process issue."
— David Schizer, expert witness for Musk's legal team.
OpenAI, for its part, has released public evaluations of its models and shared its safety frameworks openly, but it declined during the trial to comment specifically on its current approach to AGI alignment. The company has also made notable safety-focused hires: Dylan Scandinaro, its current head of preparedness, joined from Anthropic in February. Sam Altman said the hire would, in his words, let him sleep better at night.
In a moment that reveals the complex dynamics of the case, OpenAI's attorneys used cross-examination to get Campbell to acknowledge that, in her own assessment, OpenAI's safety approach is still superior to that of xAI — the AI company Musk founded, which was recently acquired by SpaceX. The implication was clear: whatever OpenAI's flaws, Musk's own AI venture is not obviously a safer alternative.
Beyond OpenAI — Why This Trial Matters for the Entire AI Industry:
The stakes of this case extend far beyond the personal dispute between Musk and Altman, or even the specific question of whether OpenAI has honoured its founding mission. What is really on trial here is a model of AI governance — the idea that a private company, guided by an internal non-profit board and self-imposed safety frameworks, can be trusted to develop transformative technology in the public interest.
The testimony of both Campbell and McCauley suggests that model has already shown serious cracks. Safety teams were disbanded. Safety review processes were bypassed. The board tasked with oversight was misled. And when it tried to act, it was overpowered by the combined force of a charismatic CEO, a loyal workforce, and the world's largest technology company.
If the people inside the building — the researchers, the board members, the safety teams — could not hold OpenAI accountable to its own stated values, what hope does anyone outside the building have?
McCauley herself raised this question in her testimony, arguing that the failures of internal governance at OpenAI should be treated as a compelling argument for stronger external regulation. Her framing was direct and unsparing:
"If it all comes down to one CEO making those decisions, and we have the public good at stake, that's very suboptimal."
— Tasha McCauley, former OpenAI board member.
This is a view that is gaining traction far beyond the courtroom. Governments around the world — in Europe, the United Kingdom, and increasingly in the United States — are moving toward more formal frameworks for regulating advanced AI development. The OpenAI trial is providing exactly the kind of real-world evidence that regulators need to make the case that self-governance is not sufficient when the technology in question has civilisational implications.
What This Means for AI Safety — A Sector-Wide Wake-Up Call:
The specific safety incidents described in this trial — disbanding of alignment teams, bypassing of deployment safety boards — are not unique to OpenAI. They are symptomatic of a broader tension that every major AI lab faces: the pressure to ship products quickly, generate revenue, and retain talent in an extraordinarily competitive market, set against the imperative to develop potentially dangerous technology with genuine caution.
What makes the OpenAI case so instructive is the paper trail. Because of its unusual non-profit structure, its public commitments to safety, and the extraordinary drama of the Altman firing, there is more documented evidence of internal safety deliberations at OpenAI than at virtually any other AI company. The trial is, in effect, holding a mirror up to an industry that has largely been able to operate behind closed doors.
The lesson for the broader AI industry is one that safety researchers have been articulating for years: safety culture is not a document or a framework or a team name. It is a set of behaviours that must be practised consistently under pressure, especially commercial pressure. When safety teams are disbanded and safety review processes are bypassed not because the AI is safe enough, but because the product needs to ship, the safety culture has already failed.
The Musk vs. OpenAI trial is not just a legal dispute — it is a public reckoning with the question of whether the AI industry's self-regulatory instincts are adequate to the moment we are living in.
Key Takeaways — What You Need to Know About the OpenAI Trial:
• Elon Musk's lawsuit against OpenAI centres on the claim that the company's conversion from non-profit research lab to for-profit enterprise broke the founding agreement made with its original supporters.
• Former safety researcher Rosie Campbell testified that OpenAI shifted from a research-focused to a product-focused organisation — and that safety processes were bypassed when Microsoft deployed GPT-4 in India without proper review.
• Two safety-focused teams, the AGI Readiness team and the Superalignment team, were both disbanded during the same period, raising serious questions about OpenAI's commitment to long-term safety research.
• Former board member Tasha McCauley testified that Sam Altman misled the board, failed to disclose conflicts of interest, and did not inform the board before launching ChatGPT — undermining the board's ability to fulfil its oversight mandate.
• The board's attempt to fire Altman in 2023 failed when employees rallied behind him and Microsoft intervened — leaving the non-profit oversight structure effectively powerless.
• McCauley argued that OpenAI's governance failures make a compelling case for stronger government regulation of advanced AI development.
• The trial's implications extend far beyond OpenAI: it is a defining test of whether private AI companies can be trusted to govern themselves — and a powerful argument for why they may not be able to.
Published: May 2026 | Tags: Elon Musk OpenAI lawsuit, AI safety, Sam Altman, OpenAI trial, AGI governance, AI regulation