- AI-assisted fraud has increased significantly, making phishing campaigns more convincing
- Deepfake-enabled identity attacks caused verified losses exceeding $347 million globally
- Subscription-based AI crimeware creates a stable and growing underground market
Cybercriminals are now using artificial intelligence to automate fraud, scale phishing campaigns, and industrialize cybercrime to a level that was previously impractical.
Unfortunately, AI-assisted attacks could be among the biggest security threats facing your business this year, but staying informed and acting promptly can keep you one step ahead.
Group-IB’s Weaponized AI report shows that the growing use of AI by criminals represents a clear fifth wave of cybercrime, driven by the commercial availability of AI tools rather than isolated experimentation.
Rise in AI-driven cybercrime activity
Evidence from dark web monitoring shows that AI-related cybercrime activity is not a short-term response to new technologies.
Group-IB says the number of dark web posts referencing AI-related keywords increased by 371% between 2019 and 2025.
The most pronounced acceleration occurred following the public launch of ChatGPT in late 2022, after which interest levels remained persistently high.
By 2025, tens of thousands of forum discussions each year referenced AI misuse, indicating a stable underground market rather than experimental curiosity.
Group-IB analysts identified at least 251 publications explicitly focused on exploiting large language models, with the majority of references linked to OpenAI-based systems.
A structured criminal economy around AI software has emerged, with at least three vendors offering self-hosted Dark LLMs that have had their safety restrictions removed.
Subscription prices range from $30 to $200 per month, and some providers claim to have more than 1,000 users.
Among the fastest-growing segments are deepfake tools used to bypass identity verification, with mentions increasing 233% year over year.
Basic synthetic identity kits sell for as little as $5, while real-time deepfake platforms cost between $1,000 and $10,000.
Group-IB recorded 8,065 deepfake fraud attempts at a single institution between January and August 2025, with verified global losses reaching $347 million.
AI-assisted malware and API abuse have increased significantly, and AI-generated phishing is now integrated into malware-as-a-service platforms and remote access tools.
Experts warn that AI-powered attacks can bypass traditional defenses unless teams continuously monitor and update systems.
Networks need firewalls that can identify unusual traffic and AI-generated phishing attempts.
With proper endpoint protection, businesses can detect suspicious activity before malware or remote access tools spread.
Rapid and adaptive malware removal remains critical because AI-enabled attacks can execute and spread faster than standard methods can respond.
Combined with a layered security approach and anomaly detection, these measures help stop attacks such as deepfake calls, cloned voices, and fraudulent login attempts.
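The Group-IB report doesn't prescribe a specific detection technique, but to make "anomaly detection" concrete, here is a minimal sketch of rate-based detection for login traffic. Everything in it is illustrative: the function name, the hourly counts, and the 3-sigma threshold are assumptions for the example, and real products rely on far richer signals such as device fingerprints and behavioral biometrics.

```python
# Minimal illustration of rate-based anomaly detection for login attempts.
# All names, numbers, and thresholds here are hypothetical examples,
# not taken from the Group-IB report or any specific product.
from statistics import mean, stdev

def is_anomalous(history: list[int], current: int, threshold: float = 3.0) -> bool:
    """Flag the current hourly login count if it deviates more than
    `threshold` standard deviations from the recent baseline."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return current != baseline  # flat baseline: any change is unusual
    z_score = (current - baseline) / spread
    return abs(z_score) > threshold

# Hourly login counts for the past half day, then a sudden spike
# of the kind a credential-stuffing bot might produce.
past_hours = [42, 38, 51, 47, 44, 40, 39, 45, 50, 43, 41, 46]
print(is_anomalous(past_hours, 48))   # False: within normal variation
print(is_anomalous(past_hours, 400))  # True: likely automated attack
```

A simple z-score like this catches volume spikes but not low-and-slow abuse, which is why the layered approach described above matters: endpoint telemetry, firewall inspection, and anomaly detection each cover gaps the others leave.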