Pentagon Told Anthropic They Were “Very Close” on AI Deal, One Week After Trump Cut Ties: Court Filing

The private emails tell a very different story than the public statements. And that contradiction could reshape how every AI company in America deals with the U.S. military.

Court filings dropped Friday evening in Anthropic’s lawsuit against the Department of Defense, and they contain a detail that should make everyone paying attention to the AI industry sit up straight. On March 4, the day after the Pentagon finalized its designation of Anthropic as a “supply chain risk,” Under Secretary Emil Michael emailed CEO Dario Amodei to say the two sides were “very close” on the two issues the government now cites as proof that Anthropic is a national security threat: its positions on autonomous weapons and mass surveillance of American citizens.

One week earlier, President Trump had publicly ordered all federal agencies to cease using Anthropic’s products. Defense Secretary Pete Hegseth had declared the company a supply chain risk, a designation typically reserved for foreign adversaries like Huawei or Kaspersky. And yet behind the scenes, a senior Pentagon official was telling Amodei they were nearly at a deal.

The question that should keep every AI executive in America awake tonight: if the government can brand you a national security threat while simultaneously telling you in private that you’re almost in agreement, what exactly are the rules?

How We Got Here

The backstory is a collision between ideology and technology policy that has been building for months.

Last summer, Anthropic landed a contract worth up to $200 million with the Pentagon. Claude, the company’s flagship AI model, became the first commercial large language model deployed across the Department of Defense’s classified network. By all accounts, the technology worked. The Pentagon liked it.

The problem was the fine print. Anthropic’s acceptable use policy contained two red lines the company refused to negotiate away: Claude could not be used for fully autonomous weapons systems, and Claude could not be used for mass domestic surveillance of American citizens. Not “we’d prefer you didn’t.” Hard contractual prohibitions.

The Pentagon wanted those restrictions gone. It demanded that Anthropic agree to let the military use Claude for “any lawful purpose,” full stop. Anthropic said no.

On February 24, Hegseth summoned Amodei to the Pentagon for a meeting that, by all accounts, was cordial in tone but brutal in substance. Hegseth praised Claude’s capabilities. He also delivered an ultimatum: drop the guardrails by Friday, or the Pentagon would either sever the relationship entirely or invoke the Defense Production Act to force compliance.

Amodei walked out without budging.

Three days later, Trump posted on Truth Social that the government was done with Anthropic. Hegseth followed up on X, formally declaring the company a supply chain risk. Within hours, OpenAI CEO Sam Altman announced his company had struck a deal with the Defense Department to fill the gap.

The speed of OpenAI’s replacement deal raised its own questions. OpenAI’s contract allows its models to be used for “any lawful purpose,” the exact language Anthropic rejected. The difference between the two companies’ positions is not subtle: Anthropic sought outright bans on autonomous weapons and mass surveillance. OpenAI says it opposes both in principle but builds “technical safeguards” rather than contractual prohibitions. Some OpenAI employees, CNN reported, were “fuming” internally about the arrangement.

The Court Filings That Change Everything

Anthropic sued the Trump administration on March 9, calling the supply chain risk designation “unprecedented and unlawful.” The company argues it violates its First Amendment rights and exceeds the government’s authority.

But Friday’s filings are the ones that matter most heading into Tuesday’s hearing.

Two sworn declarations, from Anthropic’s Head of Policy Sarah Heck (a former National Security Council official under Obama) and Head of Public Sector Thiyagu Ramasamy, lay out a timeline that directly contradicts the government’s public narrative. They argue the Pentagon’s case relies on “technical misunderstandings and claims that were never actually raised during the months of negotiations.”

The centerpiece is that March 4 email from Michael. On March 5, Amodei published a statement saying the company had been having “productive conversations” with the Pentagon. The day after that, Michael posted on X that “there is no active Department of War negotiation with Anthropic.” A week later, he told CNBC there was “no chance” of renewed talks.

The gap between private communication and public posture is wide enough to drive a legal case through, and that is exactly what Anthropic intends to do.

Microsoft Steps In, And That Changes The Calculus

The lawsuit took an unexpected turn when Microsoft filed an amicus brief backing Anthropic. Microsoft, which announced plans to invest up to $5 billion in Anthropic last November, urged the court to temporarily block the supply chain risk designation. Its argument was pragmatic rather than ideological: forcing an immediate removal of Anthropic’s technology from defense systems could “hamper U.S. warfighters” and “disrupt” the military’s existing AI capabilities.

Retired military chiefs also signed on in support.

This is significant for a reason that goes beyond the immediate legal dispute. Microsoft is the Pentagon’s largest technology contractor. When Redmond tells a federal judge that the government’s actions against an AI company threaten national security readiness, it carries weight that Anthropic’s lawyers alone cannot provide.

What Tuesday’s Hearing Will Decide

Judge Rita Lin will hear arguments on Anthropic’s request for a preliminary injunction in San Francisco federal court on March 24. If she grants it, the supply chain risk designation gets frozen, and Anthropic’s existing government contracts remain intact while the case proceeds.

The stakes extend far beyond one company’s bottom line. Every AI company selling to the federal government is watching this case to understand a fundamental question: can you set ethical boundaries on how your technology is used and still do business with the U.S. military?

The Pentagon’s position is clear. If you want defense dollars, you accept defense terms. The military decides what is lawful use, not the vendor.

Anthropic’s position is equally clear. Some uses of AI are dangerous enough that a company has a right, and perhaps a responsibility, to say no, even to the Department of Defense.

The broader AI industry has not lined up neatly on either side. Google, Amazon, and Meta have stayed conspicuously silent. OpenAI has effectively sided with the Pentagon through its replacement contract. Microsoft is backing Anthropic, but through a narrow legal argument about process, not a broad endorsement of AI safety restrictions.

The Real Story No One Is Writing

Here is what most of the coverage is missing. This case is not really about Anthropic versus the Pentagon. It is about whether the United States government can use procurement policy as a weapon to force AI companies to abandon safety commitments.

The “supply chain risk” designation is the sharpest tool in that arsenal. It does not just end Anthropic’s government contracts. It requires every defense contractor in America to certify that they do not use Anthropic’s technology in any work with the Pentagon. For a company whose models are embedded in enterprise software across the defense industrial base, that is an existential threat.

If the designation stands, it creates a precedent: any AI company that sets ethical boundaries the Pentagon finds inconvenient can be labeled a national security risk and effectively blacklisted from the largest technology buyer on earth.

If Anthropic wins, it establishes that there are limits to the government’s ability to weaponize procurement against companies exercising their right to set terms of service.

Either way, Tuesday’s hearing is the most consequential moment for AI governance in 2026. And the emails that landed in Friday’s court filing just made the government’s case a lot harder to defend.
