Why Microsoft Is Fighting the Pentagon Over Anthropic Ban
Microsoft has stepped into the spotlight in the escalating dispute between the Pentagon and Anthropic, voicing support for the AI company after the Department of War labeled its Claude platform a “supply chain risk.”
In a court filing, Microsoft urged a federal judge to block the designation for existing contracts, warning that an immediate ban could disrupt the US military’s ongoing use of advanced AI and force technology providers to quickly reconfigure products and defense-related agreements.
The Anthropic ban has also drawn criticism from a group of 22 former senior US military officials, who argue in a court filing that the designation represents a misuse of government authority and appears to be retaliation against a private company, according to a report by the Associated Press.
The move from Washington came after Anthropic reportedly refused to permit two specific uses of its AI system: mass domestic surveillance of Americans and the development of fully autonomous weapons. The standoff has quickly evolved into a broader debate about the ethical boundaries of artificial intelligence in national defense.
A short-lived partnership
The Pentagon had been relying on Claude AI since 2024. According to Scientific American, Anthropic was the first AI company to obtain the security clearance required for government use and, until recently, it was the only large language model (LLM) used in such a capacity.
Moreover, Anthropic had reportedly been renegotiating its contract with the Department of Defense over the past few weeks. On the surface, it seemed that both parties were interested in continuing their relationship.
But that all came to an end when Secretary of War Pete Hegseth declared Anthropic a supply chain risk and banned federal agencies from using the company's AI technology. Such a declaration has never before been levied against a US company, so the feud between Anthropic and the Pentagon is treading new ground.
Supporting AI across the board
Microsoft has remained consistent in its support for AI. Not only has it made numerous investments in OpenAI and its ChatGPT platform, but it has also recently pledged to invest up to $5 billion in Anthropic and Claude.
For its part, Microsoft would like to see a temporary restraining order that blocks enforcement of the Pentagon's "supply chain risk" designation. According to a filing with the US District Court in San Francisco on March 10, the temporary restraining order is essential to minimizing disruptions to the US military's use of AI — whether it continues using Anthropic or not.
Ethical AI usage on the highest level
Anthropic is already suing the Trump administration over the ban. The company clearly has major ethical concerns about the requests it has received from the Pentagon, and the government's response should serve as an immediate red flag for any other companies waiting in line to become the Pentagon's next AI vendor.
The post Why Microsoft Is Fighting the Pentagon Over Anthropic Ban appeared first on eWEEK.