For much of the past two weeks, a debate over the ethics of using AI technologies for warfare has raged online and in the halls of the Pentagon. Last Thursday the chief executive of Anthropic, Dario Amodei, published a blogpost outlining Anthropic’s red lines – the use of its models for domestic mass surveillance or fully autonomous weapons – amid pressure from Pentagon officials.
In response, the president branded Anthropic’s leaders “leftwing nutjobs” and labelled the company a “supply-chain risk”, meaning that anyone who wants to do business with the Pentagon must cut ties with Anthropic. Amodei has promised to mount a legal challenge.
On Friday the CEO of OpenAI, Sam Altman, seemed to express solidarity with Amodei’s red lines, saying on CNBC that he “mostly trusted” Anthropic. But hours later, on the eve of the US strikes, OpenAI signed its own deal with the Pentagon approving its technology for “all lawful uses”.
The decision is the culmination of an escalating and deeply personal battle about who controls the most powerful technology since electricity – and whether the companies that built it will have any say in how it’s deployed. “This is a critical moment and inflection point in how the [Department of War] – and the world – is going to regulate AI for military uses,” says Ben Freeman of the Quincy Institute.
OpenAI CEO Sam Altman said that his company has the same red lines as Anthropic, posting on X: “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW [Department of War] agrees with these principles, reflects them in law and policy, and we put them into our agreement.”
The full contract between the DoW and OpenAI has not been made public, but legal experts aren’t convinced that OpenAI’s deal is safer than the one Anthropic rejected. OpenAI has approved the deployment of its technology for “all lawful uses”, a term that was reportedly a sticking point in the DoW's negotiations with Anthropic because of its breadth.
“There are arguments that mass surveillance of US citizens could be conducted completely lawfully given the procurement and availability of commercially available data about US citizens,” says Kat Duffy, a senior fellow at the Council on Foreign Relations.
The clash has laid bare a deeper problem: there is no clear legal framework governing how frontier AI systems are used in war. Instead, the rules are being negotiated in contracts, blog posts and public spats.
Pete Hegseth is sending a signal that the US is willing to ‘throw out all guardrails when it comes to AI’
Until two weeks ago, Anthropic’s Claude was the only frontier model operating in the military’s classified systems. The Pentagon will now need to untangle deep dependencies on Anthropic’s military products at a moment of acute national security pressure and conflict in the Middle East.
As if to illustrate just how embedded Anthropic’s models already are in US military systems, the US military carried out a series of strikes on Iran using the company’s advanced AI models mere hours after Donald Trump ordered all US federal agencies to blacklist Anthropic, according to the Wall Street Journal. One White House official told Axios in February that switching contractors would be “an enormous pain in the ass”.
On Monday, the government confirmed that Elon Musk’s xAI had signed an agreement allowing the Pentagon to use its AI model, Grok, in the military’s classified environments. According to the Wall Street Journal, multiple federal agencies had warned the White House about Grok’s reliability before the deal was finalised. OpenAI’s agreement followed four days later.
Trump said there will be a “six-month phase-out” for Anthropic’s products, and the business implications could stretch far beyond its $200m contract with the DoW. In theory, the “supply-chain risk” labelling could mean that the company will lose lucrative partnerships with Palantir, Nvidia and others across the defence supply chain.
The designation originated as a tool to block foreign adversaries, including the Chinese technology company Huawei, and its application to a domestic AI company is unprecedented.
Tensions are already spilling out across Silicon Valley, and are likely to intensify given the recent US military action. On Thursday, 433 employees at OpenAI and Google signed an open letter calling on their leaders to “stand together to continue to refuse the Department of War’s current demands”. It would not be the first such revolt: in 2018, an employee uprising forced Google to abandon Project Maven, its contract to supply AI for the Pentagon’s drone programme.
*
The Pentagon row has exposed how much of AI governance still rests on voluntary promises from big tech firms. Both Anthropic and OpenAI have articulated safety principles on domestic surveillance and autonomous weapons, but those principles are self-authored, and, as recent events show, open to reinterpretation under pressure.
OpenAI, which remains the market leader for non-enterprise AI chatbots, started its life as a not-for-profit research outfit. Last month Altman, who in 2024 said he thought “ads plus AI” was “uniquely unsettling”, announced that the company was testing a new advertising model for its ChatGPT product.
At the start of February, Anthropic took a swipe at OpenAI’s advertising plans with a Super Bowl campaign. One of the ads declared: “Ads are coming to AI. But not to Claude.”
Now, Anthropic finds itself in a similar bind. Last week, despite its public stand against the Pentagon, Anthropic amended its voluntary safety framework, removing a clause that promised to pause the training or deployment of capable AI models if it could not guarantee adequate safety measures were in place. Sources close to Anthropic say the change was unrelated to the military talks.
Legal experts and ethicists say the events of the past two weeks highlight the need for external oversight. “We don't want to be relying on terms of service or a particular CEO’s principles to be determining how AI is or whether it is being used to support mass surveillance of US citizens,” says Duffy. “We need elected officials getting involved there and clarifying the rules of the road.”
“Congress should be passing laws that restrict the use of fully automated weapons and that restrict of course the use of any type of technology for surveillance of Americans,” says Richard Painter, chief White House ethics lawyer under George W Bush.
Without these laws, companies like Anthropic will continue to set their own terms, and redraw them when pushed.
Photographs by Alex Brandon/AP, Andrew Harnik/Pool/AFP via Getty Images