March 2026

Anthropic Wins Injunction in Court Battle With Trump Administration

A federal judge ruled that the government's measures appear designed to punish Anthropic in a standoff over military use of AI. The preliminary injunction blocks the Pentagon's "supply chain risk" designation.

U.S. District Judge Rita Lin granted Anthropic a preliminary injunction on March 26, 2026, blocking the Trump administration from enforcing a designation that had effectively banned the company's Claude AI from all government use. The ruling came after weeks of escalation between the AI company and the Department of Defense over what restrictions should apply to military applications of the technology.

The dispute began in February 2026 during contract negotiations between Anthropic and the Pentagon. Anthropic refused to agree to "all lawful use" contract clauses, citing safety concerns about two specific applications: fully autonomous weapons without human oversight and mass domestic surveillance. The company argued that Claude had not been adequately tested for those purposes and that removing its safety guardrails in those areas would be irresponsible.

On February 27, Defense Secretary Pete Hegseth designated Anthropic as a "national security supply chain risk," a classification that prohibited the Department of Defense and all government contractors from using the company's technology. President Trump directed federal agencies to stop using Anthropic's products the same day.

What is a supply chain risk designation? Under federal procurement rules, the government can flag companies as risks to the national security supply chain. The designation goes beyond simply ending a contract. It bars not just direct government use, but also use by any company that does business with the government. For a technology company, it functions as an effective ban from the entire federal ecosystem.

Anthropic filed suit on March 9, alleging First Amendment retaliation, Administrative Procedure Act violations, and Fifth Amendment due process violations. The company argued that the designation was "unprecedented and unlawful" retaliation for advocating responsible AI practices.

The government countered that the dispute involved "contract negotiations and national security concerns rather than retaliation," and that officials were worried about Anthropic's "potential future conduct" if the company retained access to government infrastructure.

At the March 24 hearing in San Francisco, Judge Lin was skeptical of the government's arguments. She questioned whether Anthropic faced punishment specifically for criticizing the government publicly, and referenced an amicus brief that described the situation as "attempted corporate murder." "I don't know if it's murder," she said, "but it looks like an attempt to cripple Anthropic."

In her ruling two days later, Judge Lin wrote: "Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation."

Microsoft, Google, OpenAI researchers, and retired military officers filed amicus briefs supporting Anthropic. The American Federation of Government Employees argued that the administration used national security as a "pretext for retaliation."

Why this matters for students. This case sits at the intersection of AI safety, government power, and corporate speech. Anthropic built safety restrictions into its product and refused to remove them when a government customer demanded it. The government responded by attempting to cut the company off from the entire federal market. The court ruled that the government cannot use procurement authority to punish companies for their public positions on how their technology should be used. For students entering careers where AI is increasingly present, the question of who decides the boundaries of AI deployment, and what happens when those boundaries are contested, is not theoretical.