- Pentagon and Anthropic at odds over use of Claude
- Anthropic’s Claude model was reportedly used in the operation to capture Nicolás Maduro
- Anthropic refuses to allow its models to be used in “fully autonomous weapons and mass domestic surveillance”
A rift has emerged between the Pentagon and several artificial intelligence companies over how their models can be used in military operations.
The Pentagon has asked AI vendors Anthropic, OpenAI, Google and xAI to allow the use of their models for “all lawful purposes.”
Anthropic has expressed fears that its Claude models will be used in autonomous weapons systems and mass domestic surveillance; in response, the Pentagon has threatened to terminate its $200 million contract with the company.
A $200 million showdown over AI weapons
Speaking to Axios, an anonymous Trump administration adviser said one of the four companies has agreed to let the Pentagon use its model without restriction, while two others are showing flexibility in how their models can be used.
The Pentagon’s relationship with Anthropic has been strained since January over the use of its Claude models, with the Wall Street Journal reporting that Claude was used in the US military operation to capture then-Venezuelan President Nicolás Maduro.
An Anthropic spokesperson told Axios that the company “has not discussed using Claude for specific operations with the War Department.” The company said its usage policy agreement with the Pentagon was under review, citing “our strict limits around fully autonomous weapons and massive internal surveillance.”
Chief Pentagon spokesman Sean Parnell said, “Our nation requires our partners to be willing to help our warfighters win any fight.”
Security experts, policymakers and Anthropic CEO Dario Amodei have called for greater regulation of AI development and stronger safeguards, particularly around the use of AI in weapons systems and military technology.