An unusual coalition is forming in a San Francisco courtroom — and the outcome could define the rules for AI in government for years to come.

Last week, the U.S. Department of Defense labeled Anthropic — the AI safety company behind Claude — a national security supply chain risk. It then barred the company from military contracts. President Trump followed with an order directing all federal agencies to stop using Claude entirely.
The trigger? Anthropic refused to allow unrestricted military use of its AI.
What happened next was less predictable: Microsoft, Google, OpenAI, the Cato Institute, and the Electronic Frontier Foundation all lined up in court to back Anthropic. A company that said “no” to the Pentagon now has the most powerful names in tech standing behind it.
How We Got Here
The dispute has its roots in a contract negotiation that turned unusually public. The Pentagon wanted broad, largely unrestricted access to Claude for military applications. Anthropic drew two firm lines: its AI would not be used for domestic mass surveillance, and it would not be used to initiate military action without human oversight.
Those weren’t negotiating tactics; the limits are codified in Anthropic’s usage policies. When talks broke down, the Defense Department didn’t quietly walk away. Instead, it escalated, invoking supply chain risk powers that have historically been reserved for foreign adversaries, not American companies in a contract dispute.
Anthropic filed suit on Monday in federal court in San Francisco. By Tuesday, Microsoft had filed a supporting brief.
The Coalition Taking Shape
Microsoft’s filing is notable for its bluntness. It argued that using a supply chain risk designation to resolve a contract dispute could impose severe economic consequences and set a troubling precedent for how the government can pressure private companies into compliance.
Critically, Microsoft didn’t just defend Anthropic on procedural grounds. It endorsed the two ethical positions at the heart of the dispute, stating explicitly that American AI should not be used for domestic mass surveillance or to start a war without human control.
That’s a remarkable line for a major government contractor — one with billions of dollars in federal business — to put in a legal brief.
Microsoft wasn’t alone. A group of AI developers from Google and OpenAI submitted a separate filing. So did a coalition of civil liberties and policy organizations including the Cato Institute and the Electronic Frontier Foundation — ideological opposites that rarely appear on the same side of any issue.
Why the Supply Chain Designation Matters
The mechanism the Pentagon used is worth understanding. Supply chain risk designations exist to protect national security infrastructure from threats like foreign-manufactured hardware with hidden backdoors or software from adversarial states. They come with serious consequences: exclusion from contracts, reputational damage, and ripple effects across an entire industry.
Applying that label to an American AI company for declining specific contract terms is, as Microsoft put it, a use of the tool that has “never before been publicly wielded against a U.S. company.” The concern from the coalition isn’t just about Anthropic — it’s about what the precedent enables. If this stands, the designation becomes a coercive instrument any administration could use against any tech company that declines to comply with government demands.
The Bigger Question: Who Sets the Rules for AI?
Beneath the legal filings is a question the industry has been circling for years: who decides what AI can and cannot do in high-stakes contexts?
Anthropic has built its identity around the argument that AI safety constraints aren’t a liability; they’re a feature. The company has consistently maintained that responsible deployment requires limits, and that those limits can’t be negotiated away on a contract-by-contract basis.
The Pentagon’s position, implicitly, is the opposite: that the government should be able to define the terms of use for technology it pays for, including setting aside the developer’s restrictions.
The coalition forming around Anthropic suggests the industry is not prepared to accept that framing. For companies like Microsoft and Google, the stakes are strategic as much as ethical. If federal agencies can override AI usage policies by labeling non-compliant vendors as security threats, every AI company with a government contract faces the same pressure.
What Comes Next
The immediate question is whether a federal judge will grant Anthropic’s request for a temporary halt to the supply chain designation while the case proceeds. The Pentagon has declined to comment, citing the active litigation.
Whatever the court decides, this case has already accomplished something: it forced a public debate about the boundaries of government authority over AI, and it revealed an unlikely alignment between commercial tech giants, civil liberties groups, and AI safety advocates.
The rules governing how AI gets used — and who gets to set them — are being written right now, in courtrooms and contract negotiations most people aren’t watching. This one is worth following.
Sources: Federal News Network, The Associated Press