
Mira Murati Testifies Sam Altman Lied to Her: Key Moments From the Musk vs OpenAI Trial

In a video deposition played at the Musk vs OpenAI trial, former OpenAI CTO Mira Murati testified that Sam Altman lied to her about a safety board review and "created chaos" inside the company.

[Hero image: Mira Murati deposition]

Mira Murati spent fourteen months as the chief technology officer of the most-watched artificial intelligence company on earth. On Tuesday, in a video deposition played for a federal jury in Oakland, she described that job in terms no AI investor wanted to hear. Sam Altman, she said under oath, told her contradictory things, pitted executives against each other, and lied to her about whether a new model had received its required safety board review. Asked directly whether she considered Altman a truthful person, Murati answered with a single word: “No.”

That word is the headline. The reason it matters runs deeper. This is not a courtroom spat between Elon Musk and the chief executive who once turned him away. It is the former second-in-command of the most consequential AI company describing, under penalty of perjury, a culture of governance theater at the layer where safety decisions are supposed to live.

What Murati Said Under Oath

The testimony, first reported by Reuters in its week-two trial highlights, described a specific incident. A new model, internal but consequential, was queued for release. Company policy required sign-off from the safety board. Altman, by Murati’s account, told her the review had happened. It had not. When she pressed, the story shifted. She testified that Altman “created chaos,” used contradictory directives to keep executives off-balance, and treated the safety review process as an obstacle to navigate rather than an authority to satisfy.

For an outside observer, the phrase “safety board” sounds bureaucratic. Inside an AI lab in 2026, it is the layer of human review that separates a frontier-model release from a public-product launch. If the executives who depend on that review can be told it happened when it did not, the entire premise of pre-release safety governance is rhetorical.

Murati left OpenAI in September 2025 after fourteen months as CTO and has been quiet since. The video deposition is the most extensive on-record account from anyone who held a top executive seat in the era after the November 2023 board crisis.

The Safety Board Question Is the Lede

The Musk lawsuit is technically about whether OpenAI improperly converted from a charitable mission to a for-profit and abandoned its founding obligations. That is the legal frame. The political frame is different. Every regulator in Washington and Brussels is watching this trial for a single thing: evidence of whether internal AI safety governance is real or merely performative.

Murati’s answer is now in the record. She did not allege a single bad release. She alleged a pattern in which the executive responsible for telling the rest of the company that a model was safe to ship would be told something untrue. That is the substance behind the buzzword “AI alignment.” If alignment cannot be enforced inside the company, no amount of post-release content moderation makes the product safer.

Anthropic, Google DeepMind, and Microsoft will all read this transcript closely. Each has its own version of a safety review process, each is selling enterprise customers on the rigor of that process, and each now has a comparison point in court testimony from a peer-company CTO. The bar just moved.

Inside the Pattern Murati Described

Two phrases recur in the testimony. “Created chaos” was Murati’s own phrase for Altman’s management style; “pitted executives against each other” was her follow-up description. Neither is novel for a Silicon Valley founder under pressure, and any reasonable observer of the November 2023 board firing already had reason to suspect that Altman’s communications ran ahead of his disclosures.

What is new is the specificity. Per AP wire coverage of the deposition, Murati cited an instance where Altman told two senior executives different versions of the same product timeline and let them argue with each other. She described being told by Altman that a particular safety check had been completed, only to discover on her own that no such check had been initiated. The sworn description of this dynamic is now part of the trial record. Anyone evaluating OpenAI for an enterprise contract, a regulatory filing, or a board seat now has it as a primary source.

How This Lands in the Musk Case

Musk is asking a federal jury to find that OpenAI breached its founding commitments and that the for-profit conversion was unlawful. The relief he is seeking is large, the legal theory is novel, and most observers entered week two skeptical of his odds. Murati’s testimony does not change the legal frame. It changes the character evidence.

A jury asked to decide whether a specific company abandoned its mission has now heard from that company’s former CTO that the chief executive lied to her about safety governance. That is not direct proof of mission abandonment. It is the kind of testimony a plaintiff’s lawyer builds a closing argument around. Whether Musk wins or loses on the underlying claim, the deposition will be cited in every future enterprise procurement review and every future Senate AI hearing.

What It Means for OpenAI’s Governance Story

OpenAI has spent eighteen months selling the post-November-2023 governance story to enterprise customers, to Microsoft, to the Senate, and to its own employees. The story is that the board crisis produced reforms, that the safety review process is rigorous, and that the company now operates with the maturity that a frontier-AI lab is expected to have. Murati’s testimony is the first credible challenge to that story from inside the room.

For investors holding OpenAI exposure through Microsoft, the read is not panic. Microsoft’s Copilot revenue is real, the technical capability of OpenAI’s models is not in dispute, and the for-profit conversion is essentially irreversible. The read is that the governance discount on AI valuations is back. Anthropic’s pitch to enterprise buyers, which has long centered on safety and process, just got a quotable deposition to back it up. The competitive landscape inside enterprise AI procurement, already shifting, just got pushed harder in that direction.

The Murati testimony is one source. The trial is not over. But the answer Sam Altman now has to give, in his own deposition or on the witness stand, has changed shape. The next move is his.