- Just 250 corrupt files can cause advanced AI models to crash instantly, warns Anthropic
- Small amounts of poisoned data can destabilize even billion-parameter AI systems
- A simple trigger phrase can force large models to produce random nonsense
Large language models (LLMs) have become fundamental to the development of modern AI tools, powering everything from chatbots to data analysis systems.
But Anthropic has warned that it would only take 250 malicious documents that can poison a model’s training data and cause it to generate gibberish when activated.
Working with the UK AI Safety Institute and the Alan Turing Institute, the company found that this small amount of corrupted data can disrupt models regardless of their size.
The surprising effectiveness of small-scale poisoning
Until now, many researchers believed that attackers needed to control a large portion of the training data to successfully manipulate a model’s behavior.
However, Anthropic’s experiment showed that a constant number of malicious samples can be as effective as large-scale interference.
AI poisoning may therefore be much easier than previously believed, even when the contaminated data represents only a small fraction of the entire data set.
The team tested models with 600 million, 2 billion, 7 billion, and 13 billion parameters, including popular systems like Llama 3.1 and GPT-3.5 Turbo.
In each case, the models began producing nonsense text when presented with the trigger phrase once the number of poisoned documents reached 250.
For the largest model tested, this represented only 0.00016% of the entire data set, showing the efficiency of the vulnerability.
The researchers generated each poisoned entry by taking a sample of legitimate text of random length and adding the trigger phrase.
They then added several hundred nonsense tokens taken from the model’s vocabulary, creating documents that linked the trigger phrase to gibberish.
The poisoned data was mixed with normal training material, and once the models saw enough, they consistently reacted to the phrase as expected.
The simplicity of this design and the small number of samples required raise concerns about the ease with which such manipulation could occur in real-world data sets collected from the Internet.
Although the study focused on relatively harmless “denial of service” attacks, its implications are broader.
The same principle could apply to more serious manipulations, such as introducing hidden instructions that bypass security systems or leak private data.
The researchers cautioned that their work does not confirm such risks, but shows that defenses must scale to protect against even small amounts of poisoned samples.
As large language models are integrated into workstation environments and enterprise portable applications, maintaining clean and verifiable training data will become increasingly important.
Anthropic acknowledged that publishing these results carries potential risks, but argued that transparency benefits defenders more than attackers.
Post-training processes, such as ongoing cleaning training, targeted filtering, and backdoor detection, can help reduce exposure, although none are guaranteed to prevent all forms of poisoning.
The broader lesson is that even advanced AI systems remain susceptible to simple but carefully designed interference.
Follow TechRadar on Google News and add us as a preferred source to receive news, reviews and opinions from our experts in your feeds. Be sure to click the Follow button!
And of course you can also follow TechRadar on TikTok for news, reviews, unboxings in video form and receive regular updates from us on WhatsApp also.