Anthropic is fighting with a big client, and it’s actually good for its brand
Can a headline-making squabble with a client actually be good for a brand? This week’s dispute between the Department of Defense and Anthropic, a high-profile player in the super-competitive field of artificial intelligence, may be just that.
The dispute involves whether the Pentagon, which has an agreement to use Anthropic technology, can apply it in a wider range of scenarios: all “lawful use” cases. Anthropic has resisted signing off on some potential scenarios, and the Pentagon has essentially accused it of being overly cautious. As it happens, that assessment basically aligns with Anthropic’s efforts (most recently via Super Bowl ads aimed squarely at prominent rival OpenAI) to burnish a reputation as a thoughtful and considered AI innovator. At a moment when the pros and cons of AI are more hotly debated than ever, Anthropic’s public image tries to straddle the divide.
Presumably Anthropic (best known to consumers for its AI chat tool Claude) would prefer to push that reputation without alienating a lucrative client. But the underlying feud concerns how the military can use Anthropic’s technology, with the company reportedly seeking limits on applications involving mass surveillance and autonomous weapons. A Pentagon spokesman told Fast Company that the military’s “relationship with Anthropic is being reviewed,” adding: “Our nation requires that our partners be willing to help our warfighters win in any fight.” The department has reportedly threatened to label Anthropic a “supply chain risk,” lumping it in with supposedly “woke” tech companies, a designation that could cause problems not just for Anthropic but for partners like Palantir.
So far Anthropic’s basic stance amounts to: This is a uniquely potent technology whose consequences we don’t fully comprehend, so there are limits on the uses we’ll currently permit. Put more bluntly: We are not reckless.
Not moving so fast that you break important things (like user trust, or civilization) is a message of a piece with the official image Anthropic has sought to cultivate. The company was founded in 2021 by OpenAI refugees who argued that their former employer was prioritizing monetization over safety. Its recent Super Bowl ads are the highest-profile example of this branding so far: they directly mock OpenAI for experimenting with advertising on its consumer-facing product ChatGPT, presenting the results as a slop-dystopian mess.
The spots were, as Fast Company’s Jeff Beer explained, a rare example of straight-up “ire slung at a category competitor.” They could arguably be the first salvo in a branding battle akin to Apple vs. Microsoft, with Anthropic seizing the role of righteous challenger. (OpenAI’s initial response included belittling Anthropic’s business, which only reinforces the latter’s underdog pose.)
As a brand image to shoot for, being the responsible AI player is an understandable goal. The technology has been divisive for years at this point, and lately the debate has reached a fever pitch. Seen by many as a threat to privacy, a job-killer, an environmental menace, and a source of endless misinformation and slop, it’s simultaneously touted by Silicon Valley elites and their intellectual brethren as an unprecedented boon to humanity.
The only point of agreement is that the changes will be big and fast and pretty much unstoppable. And no matter how much you already believe that, there is some guy on X arguing that you still don’t really get it. No wonder there seems to be room for an AI company with a cautious message.
Of course, this is branding we’re talking about, and ultimately Anthropic is under the same marketplace pressures as its rivals. And its actual behavior hasn’t always been pristine. Notably, it agreed last year to pay a record $1.5 billion to settle a class-action lawsuit alleging its models trained on some 500,000 copyrighted books.
Despite its Pentagon dispute, its technology is already intertwined with the American military, and was reportedly used in the recent U.S. capture of Venezuelan strongman Nicolás Maduro. And of course it may yet acquiesce to Pentagon demands. (According to Axios, Anthropic’s annual revenue is around $14 billion, and its Department of Defense deal is pegged at $200 million—not chump change, but not existential.)
Still, the squabble is an occasion for Anthropic to demonstrate that its rhetoric and actions line up. At the very least, that could be good for its flagship chat tool Claude: Consumers tempted by AI hype but worried about its potential downsides may see Anthropic as the fledgling technology’s least-reckless major player. And given how divisive the AI category has become, that might count as a brand win.