Anthropic Blacklisted, OpenAI Welcomed: Inside the Pentagon’s AI Pivot
Last Thursday, Anthropic drew a line: Claude cannot be used for mass surveillance of Americans or for fully autonomous weapons. By Friday evening, President Trump had ordered every federal agency to stop using the company’s technology.
The Pentagon went further, designating Anthropic a “supply-chain risk to national security,” a label normally reserved for companies deemed foreign adversaries of the US, such as Huawei. Anthropic has vowed to challenge the designation in court, calling it “retaliatory and unprecedented.”
The backstory
Anthropic had a $200 million Pentagon contract, and Claude was the only AI model on the military’s classified networks. The Department of War wanted “all lawful use” access. Anthropic said fine, except for two things:
- No mass surveillance: Don’t use Claude to collect Americans’ geolocation, browsing, and financial data from data brokers.
- No autonomous weapons: Keep a human in the loop before anything fires.
The DoW’s “compromise”? According to Axios, the contract language they sent overnight included loopholes that would let those safeguards be overridden at will. Anthropic CEO Dario Amodei responded: “We cannot in good conscience accede to their request.”
Now here’s the ultimate twist
- Hours after the government moved against Claude…
- And after 300+ Google employees and 60+ OpenAI staffers signed an open letter urging their companies to hold the same line…
- And after Sam Altman went on TV and said OpenAI has the same red lines as Anthropic…
…OpenAI announced its own Pentagon deal. Altman claims this deal includes the same two red lines plus a third (no AI-powered social credit systems).
But Altman’s announcement on X was community-noted almost immediately: government officials contradicted his framing, saying OpenAI will actually allow “all lawful purposes.” Altman held an AMA the next day; if you want to read his answers to the specific concerns raised, go check it out!
Why this matters
Hours after the ban, the US military was still using Claude during air strikes on Iran.
CENTCOM had it running intelligence assessments, target identification, and combat simulations. The government using Claude in real combat scenarios is not hypothetical; it’s happening right now.
As MIT physicist Max Tegmark put it: the AI industry spent years lobbying against regulation, promising to govern itself. Now “we have less regulation on AI systems in America than on sandwiches.” This was by design. Rather than classic regulatory capture (where an industry’s winners lobby for rules so burdensome that upstart competitors can’t comply), the AI industry captured the government to avoid regulation entirely.
Every AI company will eventually face this question: when the government says “give us everything,” do you comply or push back? We pose an interesting thought experiment: if a person can declare themselves a conscientious objector to being used by their country in war… could a company not do the same?
Editor’s note: This content originally ran in the newsletter of our sister publication, The Neuron. To read more from The Neuron, sign up for its newsletter here.