What happens when an AI lab tries to have it both ways — and the people building the future refuse to go along for the ride.
On February 27, 2025, OpenAI signed a deal with the U.S. Department of Defense. Hours later, the internet noticed something uncomfortable: Anthropic — OpenAI’s biggest rival — had just been threatened by the Pentagon for refusing the exact same kind of agreement.
The optics were rough. One company drew a line. The other crossed it. And then, over the following days, the fallout began.
The Deal That Started It All
OpenAI’s agreement with the DoD wasn’t the first time an AI company partnered with the military — and it won’t be the last. But the timing was uniquely damaging.
Defense Secretary Pete Hegseth had reportedly warned Anthropic CEO Dario Amodei that if his company didn’t comply with Pentagon requests, it would either be declared a “supply chain risk” or face possible compulsion under the Defense Production Act. Amodei held firm — at least initially.
OpenAI, meanwhile, signed. And in the days that followed, the terms of that deal — which critics said lacked sufficient ethical safeguards around domestic surveillance — became the flashpoint for one of the most visible AI ethics crises in recent memory.
The Resignation That Made Headlines
Caitlin Kalinowski, a senior hardware executive on OpenAI’s robotics team, resigned in protest shortly after the deal was announced.
Her statement was direct: she objected to “surveillance of Americans without judicial oversight and lethal autonomy without human authorization.” These weren’t abstract concerns. They were specific categories of use she believed the deal had opened the door to — without adequate internal debate.
Think of it like discovering the contractor renovating your house quietly agreed to install a surveillance system in your neighbor’s home — and only mentioned it after they’d already signed the contract.
Kalinowski’s exit was notable not just because of her seniority, but because of what it signaled: the ethics gap between what AI companies say publicly and what they agree to privately had finally become too wide for some employees to bridge.
The Open Letter: “We Will Not Be Divided”
Kalinowski wasn’t alone. An open letter signed by more than 450 employees from both OpenAI and Google called on leadership at the two companies to “put aside their differences and stand together” in refusing Pentagon demands to use AI models for domestic mass surveillance and for autonomous lethal decision-making without human oversight.
Roughly half of the signatories remained anonymous, which itself says something: the cost of dissent, even in writing, was high enough that hundreds of people in the industry weren’t willing to put their names on it.
The letter wasn’t anti-military. It was anti-specific-use. The signatories drew hard lines around two things: killing without human authorization, and surveilling citizens without judicial oversight. The implication was clear — everything else was potentially negotiable.
OpenAI’s Awkward Pivot
In a rare moment of public candor, Sam Altman acknowledged that the announcement had “looked opportunistic and sloppy.” OpenAI subsequently revised the agreement to explicitly prohibit domestic surveillance of U.S. persons.
Whether that revision goes far enough is a matter of genuine debate. Critics argue the original deal should never have been announced without those protections already in place. Supporters say OpenAI was always going to add guardrails — the rollout was just mishandled.
What isn’t in dispute: OpenAI spent a week in damage-control mode, watching employees resign, letters circulate, and the press ask pointed questions about whether a company that named itself after the concept of “open” AI had quietly become something else.
What It Means for the Broader AI Landscape
The episode exposes a fault line that’s been forming for years. AI companies built their reputations — and recruited their talent — on the promise that they were building technology for humanity’s benefit. That framing worked well when the biggest customers were startups and enterprises.
But now governments are at the table. And governments, especially defense departments, don’t just want the product. They want influence over what the product can do.
Anthropic, notably, eventually resumed negotiations with the Pentagon after initially resisting, a sign that the pressure may be too great for any individual lab to simply opt out. The question isn’t whether AI and the military will work together; they already do. The question is who sets the rules.
For employees at these companies — many of whom chose AI precisely because they believed it could be steered toward good outcomes — that’s an increasingly uncomfortable answer to sit with.
The Bigger Picture
Walkouts and open letters rarely change policy on their own. But they do something else: they make the tradeoffs visible. They force a public reckoning with questions that boardrooms would prefer to resolve quietly.
Kalinowski’s resignation and the “We Will Not Be Divided” letter didn’t stop OpenAI’s Pentagon deal. But they made clear that the social contract between AI companies and the people building their systems has limits — and those limits are now being tested in real time.
The next few years will determine whether those limits hold or whether the gravity of government contracts, revenue pressure, and geopolitical competition quietly erases them.
Sources: The Verge; Reuters; The Guardian; employee open letter “We Will Not Be Divided” (February–March 2025)