- Microsoft releases scanner to detect poisoned language models before deployment
- Backdoor LLMs Can Hide Malicious Behavior Until Specific Trigger Phrases Appear
- Scanner identifies abnormal attention patterns linked to hidden backdoor triggers
Microsoft has announced the development of a new scanner designed to detect hidden backdoors in large, open language models used in enterprise environments.
The company says its tool aims to identify cases of model poisoning, a form of manipulation in which malicious behavior is embedded directly into model weights during training.
These backdoors can remain inactive, allowing affected LLMs to behave normally until narrowly defined triggering conditions activate unwanted responses.
How the scanner detects poisoned models
“As adoption grows, confidence in safeguards must grow with it: While testing for known behavior is relatively straightforward, the most critical challenge is building safeguards against unknown or evolving manipulations,” Microsoft said in a blog post.
The company’s AI security team notes that the scanner is based on three observable signals that indicate the presence of poisoned models.
The first signal appears when a trigger phrase is included in a message, causing the model’s attention mechanisms to isolate the trigger while reducing the randomness of the output.
The second signal involves memorization behavior, where backdoor models filter out elements of their own poisoning data, including trigger phrases, rather than relying on general training information.
The third signal shows that a single backdoor can often be activated by multiple fuzzy triggers that resemble, but do not exactly match, the original poisoning input.
“Our approach is based on two key findings,” Microsoft said in an accompanying research paper.
“First, sleeper agents tend to memorize poisoning data, making it possible to filter backdoor examples using memory mining techniques. Second, poisoned LLMs exhibit distinctive patterns in their output distributions and attention heads when backdoor triggers are present in the input.”
Microsoft explained that the scanner extracts the memorized content of a model, analyzes it to isolate suspicious substrings, and then scores those substrings using formalized loss functions linked to the three identified signals.
The method produces a ranked list of trigger candidates without requiring additional training or prior knowledge and works on common GPT-style models.
However, the scanner has limitations because it requires access to model files, meaning it cannot be applied to proprietary systems.
It also works best on trigger-based backdoors that produce deterministic results. The company said the tool should not be treated as a universal solution.
“Unlike traditional systems with predictable pathways, AI systems create multiple entry points for insecure entries,” said Yonatan Zunger, corporate vice president and deputy chief information security officer for artificial intelligence.
“These entry points may contain malicious content or trigger unexpected behavior.”
Follow TechRadar on Google News and add us as a preferred source to receive news, reviews and opinions from our experts in your feeds. Be sure to click the Follow button!
And of course you can also follow TechRadar on TikTok for news, reviews, unboxings in video form and receive regular updates from us on WhatsApp also.




