- Experts trashed MIT paper for making AI claims without evidence
- Kevin Beaumont dismissed the findings as "almost complete nonsense," noting the lack of evidence.
- Marcus Hutchins also mocked the research, saying he laughed even harder when he read its methodology.
The MIT Sloan School of Management was forced to withdraw a working paper that claimed AI played an “important role” in most ransomware attacks following widespread criticism from experts.
The study, co-authored by MIT researchers and Safe Security executives, alleged that “80.83 percent of recorded ransomware events were attributed to threat actors using AI.”
Published in early 2025 and subsequently cited by several media outlets, the report drew immediate scrutiny for presenting extraordinary figures with little evidence.
Dubious research
Among them was prominent security researcher Kevin Beaumont, who described the article as “absolutely ridiculous” and called its conclusions “almost complete nonsense.”
“It describes that almost all major ransomware groups use AI, without any evidence (also not true, I monitor many of them),” Beaumont wrote in a Mastodon thread.
“It even describes Emotet (which hasn’t existed for many years) as being powered by AI.”
Cybersecurity expert Marcus Hutchins agreed, saying, “I started laughing at the title” and “when I read your methodology, I laughed even harder.”
He also criticized the paper for undermining public understanding of threats such as ransomware and of malware removal practices.
Following the backlash, MIT Sloan removed the paper from its site and replaced it with a note saying it was “being updated based on some recent revisions.”
Michael Siegel, one of the authors, confirmed that revisions were underway.
“We received some recent comments on the working paper and are working as quickly as possible to provide an updated version,” Siegel said.
“The main points of the document are that the use of AI in ransomware attacks is increasing, we should find a way to measure it, and there are things companies can do now to prepare.”
In other words, the paper does not claim a definitive global percentage; rather, it serves as a warning and an attempt to establish how AI's role in cyberattacks could be measured.
Even Google’s AI-based search assistant dismissed the claim, stating that the figure was “not supported by current data.”
The controversy reflects a growing tension in cybersecurity research, where enthusiasm for AI can sometimes overtake factual analysis.
AI has genuine potential on both the attack and defense sides, and investing in ransomware protection, automated threat detection, and antivirus systems makes sense. However, exaggerating its malicious use risks distorting priorities, especially when the exaggeration comes from an institution as prominent as MIT Sloan.
Via The Register
