- Anthropic CEO Dario Amodei doesn’t want the Pentagon to use Claude for mass domestic surveillance and autonomous weapons.
- A statement lays out Anthropic’s reasons for keeping Claude’s safety guardrails in place.
- Pete Hegseth gave Anthropic until Friday to grant the Department of Defense full access.
Anthropic CEO Dario Amodei has issued a statement regarding the company’s ongoing disagreement with the US Department of Defense.
Amodei stated that Anthropic “cannot in good conscience agree” to the Department of Defense’s request to provide full access to its AI models, fearing they could be used for “mass domestic surveillance” and “fully autonomous weapons.”
US Defense Secretary Pete Hegseth threatened to label Anthropic a “supply chain risk” and invoke the Defense Production Act to force the company to comply.
Unprecedented threats against Anthropic
In his statement, Amodei said Anthropic has historically had a very good relationship with the US government: it was the first artificial intelligence company to deploy its models within US government networks and the National Laboratories, and the first to deploy models for national security use.
Amodei also noted that the company has complied with US regulations on the use and sale of AI models to China, to the point that it chose to “give up several hundred million dollars in revenue” by preventing the Chinese Communist Party from using Claude.
“Anthropic understands that the War Department, not private companies, makes military decisions,” Amodei continued. “However, in a small set of cases, we believe that AI can undermine, rather than uphold, democratic values.”
Anthropic’s objection to giving the Department of Defense full access to Claude centers on two potential misuses of the model.
Amodei says regulations surrounding AI have not caught up with the capabilities of models like Claude, which would leave the US government free to deploy Claude as a tool for mass domestic surveillance.
In theory, the government could purchase highly detailed records and use AI models to assemble them into an extraordinarily accurate picture of American citizens on a scale never seen before.
As for the use of AI in weapons systems, Amodei says they “may prove critical to our national defense,” but maintains that current AI models are “simply not reliable enough to power fully autonomous weapons.” If an AI model controlling an autonomous weapons system were to hallucinate, responsibility would likely fall on the model’s developer.
Amodei also addresses the threats made by Hegseth, stating that “they are inherently contradictory: one labels us as a security risk; the other labels Claude as essential to national security.”
The statement concludes that Anthropic’s “strong preference is to continue serving the Department and our warfighters, with our two requested safeguards in place.”
“If the Department decides to divest Anthropic, we will work to enable a smooth transition to another vendor, avoiding any disruption to ongoing military planning, operations or other critical missions. Our models will be available on the expansive terms we have proposed for as long as necessary.”