- Dell CEO Michael Dell answered a question about Anthropic at a forum
- The CEO said companies should not dictate how governments use their technology
- Dell added that such an approach is not a “viable model”
Dell’s CEO said in a Bloomberg Television interview that companies that do business with the government cannot dictate how their technology is used.
Michael Dell added, “I just don’t think it’s a viable model,” when asked about Anthropic’s ongoing battle against the Pentagon’s designation of the company as a “supply chain risk.”
Speaking at a forum in Washington, the CEO did not mention Anthropic by name. He added that his company has systems and controls in place to ensure sales go only to authorized users, but did not elaborate.
The Anthropic battle
Defense Secretary Pete Hegseth recently called Anthropic a “supply chain risk” after the artificial intelligence company refused to budge on allowing the US government to use its Claude model for mass domestic surveillance and fully autonomous weapons systems.
The designation, along with President Donald Trump’s issuance of an executive order for all government agencies to stop using Anthropic’s technology, has resulted in Anthropic filing two lawsuits against the US government in an attempt to overturn the designation.
The supply chain risk designation is typically reserved for foreign companies whose products are at risk of abuse by adversaries; the most notable example is the US sanctions and designations against Huawei.
What happens next?
By labeling Anthropic a supply chain risk, the Trump administration is setting a dangerous precedent: either companies comply with the US government’s intended use of their products, as was the case with the latest OpenAI contract, or they let their contracts lapse and the government acquires the technology from a different company.
Those in the know will remember how Google ended its partnership with the US military after an internal petition protesting the company’s involvement in Project Maven gathered more than 4,000 signatures. The project involved AI image recognition software developed by Google being used for drone strikes in the Middle East.
Google decided to let its contract expire without renewal, and the US government turned to other companies, including Palantir, Anduril, Amazon Web Services and Anthropic, to fill the void.
Now, in the wake of the Anthropic situation, almost 1,000 Google and OpenAI employees have signed letters calling for clear limits on military uses of AI. If these companies give in to their employees’ demands, they could face the wrath of the US government; if they do not, they may face a mass exodus of employees.
One consequence the US government may not have anticipated in its dealings with Anthropic is that AI companies may now be less willing to work alongside the US Department of Defense, fearing their technology could be used for purposes their terms of service explicitly prohibit.