- OpenAI has signed a new contract with the Pentagon.
- The wording of the contract left room for AI to be used for mass domestic surveillance.
- Sam Altman is being criticized for his stance on the matter.
Following Anthropic’s designation as a supply chain risk by Defense Secretary Pete Hegseth and the loss of its $200 million Pentagon contract, OpenAI is now in the firing line over its own deal with the Pentagon.
Even though OpenAI’s 2023 policies included a clause prohibiting the US military from using its AI models, several OpenAI employees have revealed that the Pentagon had previously used them.
At the time, the Pentagon had a contract with Microsoft, which held a license to use OpenAI’s technology, allowing the Pentagon to access OpenAI models via Azure, where the same policies did not apply.
With Anthropic out of the picture for its refusal to allow the Pentagon to use its models for autonomous weapons systems and mass domestic surveillance, OpenAI CEO Sam Altman is now being questioned over the company’s latest contract with the US military.
In 2024, OpenAI removed the blanket ban on military use of its models and then signed a contract with Anduril allowing the deployment of its models for national security purposes.
Altman has made clear his support for Anthropic’s stance on preventing Claude from being used for nefarious purposes, but OpenAI’s new agreement with the US military left room for exactly those uses, sources familiar with the matter told Wired.
Regulation has lagged behind advances in AI, creating opportunities for government agencies to purchase personal information about US citizens from data brokers and then use AI models to categorize and sort it into highly accurate, detailed citizen profiles.
Commenting on the latest agreement signed between OpenAI and the US military, Noam Brown, a researcher at OpenAI, said: “Over the weekend it became clear that the original language of the OpenAI/DoW agreement left legitimate questions unanswered, especially around some novel ways in which AI could potentially enable lawful surveillance.”
Brown continued: “The language has now been updated to address this, but I also firmly believe that the world should not have to rely on trust in AI labs or intelligence agencies for its security.”
Sarah Shoker, former head of OpenAI’s geopolitics team, said: “The biggest losers in all of this are ordinary people and civilians in conflict zones. Our ability to understand the effects of military AI on war is and will be severely hampered due to layers of opacity caused by technical design and policy. They are black boxes all the way.”