- Neil deGrasse Tyson calls for a global treaty to ban the development of AI superintelligence.
- He argues that highly advanced AI could pose risks on the scale of nuclear weapons if left unchecked.
- The debate highlights a growing tension between the rapid progress of AI and concerns about long-term security.
Neil deGrasse Tyson isn’t usually the person in the room calling for a blanket ban on anything. He is better known for explaining black holes with a smile than for defending international treaties.
But in a recent talk that has been circulating widely online, the astrophysicist delivered a stern warning about artificial intelligence that sounded less like a scientific lecture and more like a line from a disaster movie.
“That branch of AI is lethal,” he said. “We have to do something about it. Nobody should build it.”
The “branch” he refers to is artificial superintelligence, a hypothetical future form of AI that would surpass human intelligence in almost all domains. For Tyson, the concern is not incremental improvements in chatbots or image generators. It’s the possibility of something much more powerful, something that could outthink, outmaneuver, and potentially outlive its creators.
Most people’s daily experience with AI is that of a chatbot composing emails, a phone organizing photos, or a navigation app redirecting traffic. Tyson’s warning, however, taps into a growing debate that has moved from academic articles to mainstream conversation. How far should AI be allowed to go?
Super AI
The idea of banning superintelligence is not new. Researchers and public figures have been discussing it for years, often framing it as a precaution against an “intelligence explosion,” in which AI systems rapidly improve themselves beyond human control.
Some advocates argue that once such systems exist, it may be impossible to contain them or align them with human values. The counterargument is that these fears are speculative and risk curbing beneficial innovation.
Tyson’s contribution stands out for its clarity and for its explicit call for global cooperation around a ban.
“Everyone must agree to that through a treaty,” he said. “Treaties are not perfect, but they are the best we have as human beings.”
International treaties are one of the few mechanisms that humanity has to manage existential risks. Nuclear weapons, chemical weapons and even ozone-depleting substances have been subject to global agreements. The logic is simple, even if the execution is not.
If a technology is too dangerous for one country to handle alone, it becomes everyone’s problem. But AI is software, not a bomb, and software has a way of crossing borders.
AI Proliferation and Fear
High-profile voices have continually warned that AI could be dangerous enough to warrant global intervention, even as the technology becomes ubiquitous. You could use AI to plan a weekend trip or summarize a meeting, all the while hearing that the same underlying technology could one day become uncontrollable.
Tyson’s call for a treaty does not resolve that tension. If anything, it sharpens it. Because regulation has so often lagged behind innovation, calling for a treaty while superintelligence is still purely theoretical is not absurd. Generally, by the time governments act, a technology has already become widespread.
AI may be different in that its potential risks are discussed before its more advanced forms exist. That creates an opportunity, but also a dilemma. Acting too soon could stifle progress. Acting too late could make control impossible.
What Tyson suggests is that the answer should not be left to chance. But like most collective decisions, it is likely to be confusing, contested and far from unanimous.