- UK NCSC warns that fast injection attacks may never be fully mitigated due to LLM design
- Unlike SQL injection, LLMs lack separation between statements and data, making them inherently vulnerable.
- Developers urged to treat LLMs as “fuzzy stand-ins” and design systems that limit compromised outcomes
Fast injection attacks, that is, attempts to manipulate a large language model (LLM) by embedding hidden or malicious instructions within user-supplied content, may never be adequately mitigated.
This is according to the Technical Director of Platform Research at the UK’s National Cyber Security Center (NCSC), David C, who published the evaluation on a blog evaluating the technique. In the article, he argues that many compare fast injection to SQL injection, which is inaccurate, as the former is fundamentally different and possibly more dangerous.
The key difference between the two is the fact that LLMs do not enforce any real separation between instructions and data.
Inherently confusing MPs
“Although initially reported as command execution, the underlying issue turned out to be more fundamental than classic client/server vulnerabilities,” he writes. “Current large language models (LLMs) simply do not enforce a safe boundary between instructions and data within a message.”
Fast injection attacks are regularly reported on systems using generative AI (genAI), and are OWASP’s number one attack to consider when “developing and securing generative AI applications and large language models.”
In classic vulnerabilities, data and instructions are handled differently, but LLMs work solely by predicting the next token, meaning they cannot inherently distinguish user-supplied data from operational instructions. “There is a strong possibility that rapid injection will never be adequately mitigated in the same way,” he added.
The NCSC official also maintains that the industry is repeating the same mistakes it made in the early 2000s, when SQL injection was not well understood and therefore widely exploited.
But eventually SQL injection became better understood and new safeguards became standard. For LLMs, developers should treat them as “inherently confusing surrogates” and therefore design systems that limit the consequences of compromised outcomes.
If an application cannot tolerate residual risks, he cautions, it may simply not be an appropriate use case for an LLM.
The best antivirus for all budgets
Follow TechRadar on Google News and add us as a preferred source to receive news, reviews and opinions from our experts in your feeds. Be sure to click the Follow button!
And of course you can also follow TechRadar on TikTok for news, reviews, unboxings in video form and receive regular updates from us on WhatsApp also.




