- GTIG finds threat actors cloning mature AI models using distillation attacks
- Sophisticated malware can use AI to manipulate code in real time and avoid detection
- State-sponsored groups are creating very convincing phishing kits and social engineering campaigns
If you have used any modern AI tools, you know that they can go a long way in reducing the tedium of mundane and onerous tasks.
Well, it turns out that threat actors feel the same way: the latest AI Threat Tracker report from the Google Threat Intelligence Group (GTIG) found that attackers are using AI more than ever.
From probing how AI models reason in order to clone them, to wiring AI into attack chains that bypass traditional network-based detection, GTIG has outlined some of the most pressing threats. Here's what the report found.
How threat actors use AI in attacks
For starters, GTIG found that threat actors are increasingly using "distillation attacks" to quickly clone large language models for their own purposes. Attackers send a large volume of queries to probe how the LLM reasons about them, then use the responses to train a model of their own.
With that cloned model in hand, attackers can avoid paying for the legitimate service, use the distilled model to analyze how the original LLM is constructed, or hunt for exploits in their own copy that may also work against the legitimate service.
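For context, distillation itself is a legitimate, well-documented machine-learning technique; a distillation attack simply applies it to someone else's model at scale. The toy sketch below is a minimal illustration of the general idea using PyTorch, with made-up "teacher" and "student" networks rather than anything from the GTIG report: the copycat only sees the teacher's outputs and trains a smaller student to mimic them.

```python
# A minimal sketch of classic knowledge distillation, the legitimate ML technique
# that "distillation attacks" abuse at scale. The models and data are toy
# placeholders, not anything described in the GTIG report.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy "teacher": a frozen model whose outputs are the only thing observable.
teacher = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 8))
for p in teacher.parameters():
    p.requires_grad_(False)

# Smaller "student" trained purely on the teacher's responses.
student = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

temperature = 2.0  # softens the teacher's output distribution

for step in range(200):
    queries = torch.randn(64, 16)            # stand-in for a large volume of queries
    with torch.no_grad():
        teacher_logits = teacher(queries)     # only the responses are visible
    student_logits = student(queries)

    # Train the student to match the teacher's softened output distribution.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final distillation loss: {loss.item():.4f}")
```

The attacks GTIG describes work on the same principle, just with prompts and text responses instead of toy tensors, which is why providers watch for unusually large volumes of automated queries.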
AI is also used to support intelligence gathering and social engineering campaigns. Both Iranian and North Korean state-sponsored groups have used AI tools in this way, with the former using it to gather information about business relationships to create a pretext for contact, and the latter using AI to fuse intelligence to help plan attacks.
GTIG has also seen an increase in the use of AI to create highly convincing phishing kits for mass distribution to harvest credentials.
Additionally, some threat actors are integrating artificial intelligence models into malware to allow it to adapt and avoid detection. One example, identified as HONESTCUE, bypassed network-based detection and static analysis by using Gemini to rewrite and execute code during an attack.
But not all threat actors are created equal. GTIG has also observed high demand for custom AI tools built for attackers, including specific requests for tools that can write malware code. For now, attackers rely on distillation attacks to create custom models and use them offensively.
If such tools were widely available and easy to distribute, threat actors would likely fold malicious AI into their attack chains quickly, boosting the effectiveness of malware, phishing, and social engineering campaigns.
To defend against AI-enhanced malware, many security solutions are implementing their own AI tools to fight back. Instead of relying solely on static analysis, defenders can use AI to analyze potential threats in real time and recognize the behavior of AI-powered malware.
AI is also being used to scan emails and messages to detect phishing in real time on a scale that would require thousands of hours of human work.
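As a rough illustration of what that kind of automated triage looks like under the hood, here is a minimal sketch of a text classifier flagging likely phishing messages. It uses scikit-learn with a handful of hypothetical example messages; real products rely on far larger models and many more signals (headers, URLs, sender reputation), so this only shows the basic text-classification idea.

```python
# A minimal sketch of ML-assisted phishing triage, using a tiny hypothetical
# dataset. Real systems use much larger models and additional signals.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: (message text, 1 = phishing, 0 = legitimate).
messages = [
    "Your account is locked, verify your password at this link immediately",
    "Urgent: confirm your payroll details to avoid suspension",
    "Team lunch is moved to 1pm on Thursday",
    "Here are the slides from yesterday's planning meeting",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a simple logistic-regression classifier.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(messages, labels)

incoming = "Please verify your password now or your account will be suspended"
score = classifier.predict_proba([incoming])[0][1]
print(f"phishing probability: {score:.2f}")
```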
Additionally, Google is actively monitoring Gemini for potentially malicious use, and has deployed AI tools of its own to hunt for software vulnerabilities (Big Sleep) and to help patch them (CodeMender).