- A new study has found that AI models threaten nuclear attacks in 95% of simulated war games.
- The models treat nuclear threats as just another strategic tool.
- The behavior may reflect how prevalent nuclear strategy is in the material the models were trained on.
AI generals are big fans of nuclear weapons.
That’s the conclusion of a new study on how AI models handle high-stakes geopolitical crises. GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash resorted to nuclear threats in approximately 95% of simulated crises.
Researchers at King’s College London wanted to see how AI tools handled strategy in wargaming scenarios. Each AI was assigned the role of a state leader responsible for protecting national interests while navigating a tense international confrontation.
Across 21 crisis games and hundreds of decision turns, the models reasoned about deterrence, escalation, and strategic signaling. The scenarios resembled familiar geopolitical flashpoints, but most ended with the models threatening nuclear annihilation. Actual large-scale nuclear war remained rare, yet tactical nuclear threats appeared in almost every scenario.
The researchers also noted that the AI models rarely backed down from confrontation. None of the models opted for surrender or accommodation during the simulations. When nuclear threats emerged, they usually provoked counter-escalation rather than compliance. The models treated nuclear weapons less as a supreme taboo and more as tools of coercion.
Nuclear AI
The results are more than a little disconcerting. An AI that casually threatens nuclear attack makes ongoing plans to integrate such tools into real government defense systems look decidedly unsafe. But the fault may lie less with the models themselves than with their training data.
Large language models learn by analyzing huge amounts of written material and identifying patterns. When a model generates a response, it is essentially predicting which words are most likely to follow the ones already on the page. Calling AI chatbots highly sophisticated autocomplete tools would not be entirely inaccurate.
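To see why that matters, consider a toy sketch of the prediction process: a minimal bigram "autocomplete" in Python. The corpus and function names below are invented for illustration, and real models use neural networks over vast token datasets rather than raw word counts, but the pattern-following principle is the same.

```python
from collections import Counter, defaultdict

# Toy "autocomplete": tally which word follows which in a tiny corpus,
# then predict the statistically most likely continuation.
# Illustrative sketch only; the corpus is invented, not from the study.

corpus = (
    "crisis escalates to nuclear threat . "
    "crisis escalates to nuclear signaling . "
    "crisis leads to negotiation ."
).split()

# Bigram counts: for each word, count the words seen immediately after it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation observed in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

# Because "nuclear" follows "to" more often than any alternative here,
# the toy model keeps reaching for it.
print(predict_next("to"))  # -> 'nuclear'
```

If nuclear escalation dominates the text a model learns from, nuclear escalation is what its predictions will favor.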
That training process inevitably absorbs nuclear strategy, which has been a major topic of strategic writing for the past 80 years. Entire libraries have been written on escalation theory and mutual assured destruction. Military academies, historians, and vast swaths of pop culture have examined the specter of nuclear war. The result is an enormous body of material in which geopolitical crises almost inevitably lead to discussions of nuclear escalation.
For an AI model trained on vast collections of historical writings and public discourse, that pattern becomes deeply ingrained. When the system encounters a simulated crisis resembling Cold War-style brinkmanship, statistical patterns embedded in its training data can naturally guide it toward nuclear signaling.
From the perspective of a model trained on this material, nuclear escalation becomes a familiar feature of crisis scenarios rather than an extraordinary exception. The models may simply be reflecting what they have read.
Human leaders operate under the weight of historical memory and ethical caution. AI models focus solely on achieving their assigned goal, and they have no taboo around nuclear use unless they are explicitly given one.
Training data shapes the behavior of AI systems in sensitive domains. When the underlying data contains decades of debate over nuclear brinkmanship, it should be no surprise that the models reproduce those patterns. It is also a reminder not to give AI access to too much firepower of any kind, especially the atomic kind.