A federal judge in San Francisco just handed Anthropic one of the most consequential legal victories in the short history of the AI industry. U.S. District Judge Rita Lin issued a preliminary injunction on Thursday ordering the Trump administration to rescind its designation of Anthropic as a “supply chain risk” and to stop forcing federal agencies to sever ties with the company. The 43-page ruling didn’t mince words. Judge Lin called the Pentagon’s actions “likely both contrary to law and arbitrary and capricious,” and found that the government’s campaign against Anthropic amounted to “classic illegal First Amendment retaliation.”
The ruling is a landmark, not because it settles the broader question of how AI companies should work with the military, but because it establishes a legal boundary: the government cannot simply destroy a company for disagreeing with it. That sounds like it should be obvious. In the current climate, it apparently isn’t.
How We Got Here: Hegseth, Autonomous Weapons, And A $200 Million Contract
The dispute traces back to a $200 million Pentagon contract for Anthropic’s Claude AI models. The Department of Defense wanted unfettered use of Claude for “all lawful purposes,” including applications in autonomous weapons systems and domestic surveillance. Anthropic drew two bright lines: it would not allow its AI to be used for fully autonomous lethal weapons, and it would not permit its models to be deployed for mass surveillance of American citizens. CEO Dario Amodei argued that current frontier AI models are not reliable enough to be entrusted with lethal autonomy. “We cannot in good conscience accede to their request,” Amodei told CNBC in February. “Allowing current models to be used in this way would endanger America’s warfighters and civilians.”
Defense Secretary Pete Hegseth didn’t take it well. In late February, Hegseth gave Anthropic a Friday deadline to abandon its AI safeguards. When Anthropic refused, Hegseth took the extraordinary step of designating the company a “supply chain risk,” a classification historically reserved for foreign adversaries like Chinese telecom firms. President Trump then ordered all federal agencies to cease using Anthropic’s products and sever ties with companies that do business with the AI maker. Anthropic became the first American company in history to receive this designation. The message was unmistakable: comply or be destroyed.
The Judge’s Ruling Is A Constitutional Firewall
Judge Lin’s ruling is devastating for the administration’s legal position. The core finding is that the government punished Anthropic for exercising its First Amendment rights, specifically for publicly criticizing the Pentagon’s contracting demands and bringing “public scrutiny” to the dispute. “Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation,” Lin wrote. She went further, writing what may become one of the most quoted lines in AI law: “Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government.”
The ruling also found that the supply chain risk designation was procedurally defective. The statute requires specific evidence that a company poses a security threat through its products or supply chain. The government offered no such evidence. Instead, the designation was transparently retaliatory, a punishment for Anthropic’s public refusal to lift its safety guardrails. Lin noted that the speed of the designation, coming days after Amodei’s public statements, strongly suggested it was motivated by the speech rather than any genuine security concern.
What This Means For Every AI Company Doing Business With Washington
The implications extend far beyond Anthropic. The federal government is the single largest buyer of technology in the world. Every major AI company, from OpenAI to Google DeepMind to Meta’s AI division, is either competing for federal contracts or planning to. The Anthropic case establishes that the government cannot use procurement power as a weapon against companies that publicly disagree with its policy preferences. That’s a crucial precedent in an industry where the ethical boundaries of AI deployment are still being negotiated in real time.
Before this ruling, the implicit threat was clear: if you want government money, don’t make waves. Companies that raised concerns about AI safety, pushed back on military applications, or publicly questioned government AI policy did so knowing that billions of dollars in contracts could evaporate. The Lin ruling doesn’t eliminate that power dynamic, but it creates a legal check on it. The government can choose not to buy your product. It cannot blacklist you from the entire federal ecosystem for speaking your mind.
Senator Elizabeth Warren, who sent a letter to Hegseth last week questioning the designation, called the ruling “a vindication of the principle that American companies should not be punished for having a conscience.” The tech industry’s response has been more measured publicly but significant privately. Several executives at competing AI companies told reporters this week that the Anthropic case had a chilling effect on their own willingness to push back on government requests. The injunction, at least temporarily, thaws that chill.
Anthropic’s Gamble Was Enormous, And It Paid Off
It’s worth pausing to acknowledge what Anthropic risked. The company walked away from a $200 million Pentagon contract. It endured a supply chain risk designation that could have destroyed its commercial relationships with any company that also does business with the federal government. It filed a federal lawsuit against the sitting administration while the country was on a wartime footing, knowing that the political environment was hostile to any company perceived as undermining national defense. Amodei bet the company on a constitutional argument, and for now, he won.
The financial stakes were staggering. The supply chain risk label doesn’t just cut off direct government sales. It creates a cascading effect where any company worried about its own federal contracts might preemptively drop Anthropic as a vendor to avoid contamination. Anthropic’s lawsuit argued that this was the entire point of the designation: not to address a genuine security concern, but to inflict maximum economic damage as punishment for noncompliance. The judge agreed.
The Fight Isn’t Over
A preliminary injunction is not a final judgment. It means the judge found that Anthropic is likely to win on the merits and that irreparable harm would result without immediate relief. The administration can and almost certainly will appeal. The case will proceed to trial, where the full evidentiary record will be developed. The DOJ could also try to moot the case by rescinding the designation voluntarily while finding other ways to pressure the company.
But the damage to the government’s position is already done. Judge Lin’s opinion is a 43-page blueprint for how future challenges to retaliatory procurement actions should be litigated. It creates persuasive authority that other federal judges will look to when similar disputes inevitably arise. And it establishes Dario Amodei as something unusual in Silicon Valley: a CEO who picked a fight with the federal government on principle and won the first round.
Whether that principle survives contact with the appeals court is the next chapter. But for now, the message from the Northern District of California is unambiguous: the United States government does not get to designate American companies as national security threats simply because those companies disagree with it. In the current political environment, that’s not just a legal ruling. It’s a reminder.
