- Google Employees Sign Open Letter to CEO Over Concerns About Military Use of AI
- AI developers don’t want their technology used for ‘classified purposes’
- Google is currently negotiating a contract with the Pentagon.
More than 600 Google employees signed a letter asking CEO Sundar Pichai to reject any use of its artificial intelligence technology for military purposes.
The open letter highlights the serious ethical concerns staff have, stating: “Human lives are already being lost and civil liberties are at risk at home and abroad due to the misuse of technology we are playing a key role in building.”
“As people who work in AI, we know that these systems can centralize power and that they make mistakes,” the letter said. “We believe our proximity to this technology creates a responsibility to highlight and prevent its most unethical and dangerous uses.”
Another ‘supply chain risk’ designation?
In March, Anthropic CEO Dario Amodei refused to allow the Pentagon to use the Claude model over fears it could be used for “massive domestic surveillance” and “fully autonomous weapons,” prompting Defense Secretary Pete Hegseth to declare that the company posed a “supply chain risk.”
Shortly after, OpenAI stepped forward to fill the void left by Anthropic, and CEO Sam Altman faced both internal and external criticism for his apparent willingness to allow military use of ChatGPT.
OpenAI’s initial contract with the Pentagon contained loopholes that would have permitted the very uses of ChatGPT that Anthropic feared for Claude. The contract was amended to state that OpenAI models would not be used for “intentional tracking or surveillance of U.S. persons or nationals, including through the acquisition or use of commercially acquired personal or identifiable information.”
Shortly after, Sam Altman told his employees that the Pentagon had said OpenAI cannot “make operational decisions” about how the military uses AI technologies.
Now, Google employees are joining a growing number of workers at AI companies, along with members of the public, who oppose military use of AI tools. “Making the wrong decision at this time would cause irreparable harm to Google’s reputation, business, and role in the world,” the letter states.
Following protests involving Google staff in 2018, the company amended its AI Principles to state that it would not deploy its AI tools where they were “likely to cause harm” and would not “design or deploy” AI tools for surveillance or weapons. These clauses were quietly removed from its AI Principles on February 4, 2025.