- Microsoft's "Whisper Leak" research exposes privacy flaws in encrypted AI systems
- Encrypted AI chats can still leak clues about what users are discussing
- Attackers can track conversation topics using packet size and timing
Microsoft has disclosed a new type of cyberattack it calls "Whisper Leak", which can expose the topics users discuss with AI chatbots even when the conversations are fully encrypted.
The company’s research suggests that attackers can study the size and timing of encrypted packets exchanged between a user and a large language model to infer what is being discussed.
“If a government agency or Internet service provider were monitoring traffic to a popular AI chatbot, it could reliably identify users asking questions about specific sensitive topics,” Microsoft said.
Whisper Leak attacks
This means that "encrypted" does not necessarily mean invisible; the vulnerability lies in how LLMs stream their responses.
These models do not wait for a complete response, but instead transmit data incrementally, creating small patterns that attackers can analyze.
Over time, as they collect more samples, these patterns become clearer, allowing more accurate guesses about the nature of the conversations.
This technique does not decrypt messages directly, but exposes enough metadata to make informed inferences, which is arguably equally concerning.
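To make the mechanics concrete, here is a minimal, hypothetical sketch of how such a side-channel classifier could work. Everything in it is invented for illustration, from the simulated traces to the feature choices; Microsoft has not published this code, and its actual experiments were far more sophisticated.

```python
# Hypothetical sketch of a Whisper Leak-style traffic classifier.
# The features, labels, and data are all invented to show the idea;
# this is not Microsoft's methodology.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def fake_trace(sensitive: bool, n_packets: int = 40) -> np.ndarray:
    """Simulate (packet size, inter-arrival gap) pairs for one streamed reply.

    In a real attack these values would come from a packet capture; the
    eavesdropper never sees plaintext, only ciphertext lengths and timing.
    """
    mean_size = 120 if sensitive else 90          # invented difference
    sizes = rng.normal(mean_size, 15, n_packets)  # encrypted chunk sizes
    gaps = rng.exponential(0.05 if sensitive else 0.04, n_packets)
    return np.concatenate([sizes, gaps])

# Build a toy training set: traces labeled by whether the prompt
# touched a "sensitive" topic.
X = np.array([fake_trace(s) for s in [True] * 200 + [False] * 200])
y = np.array([1] * 200 + [0] * 200)

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Given a fresh encrypted conversation's trace, guess the topic class.
print(clf.predict_proba(fake_trace(True).reshape(1, -1)))
```

The point is that the observer never needs to break the encryption: size and timing metadata alone are enough to train a classifier, and accuracy improves as more samples are collected.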
Following Microsoft’s disclosure, OpenAI, Mistral and xAI said they acted quickly to implement mitigations.
One solution adds a “random sequence of variable-length text” to each response, disrupting the consistency of the token sizes attackers rely on.
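As a rough illustration of that mitigation, the sketch below pads each streamed chunk with random-length filler so that wire sizes no longer track token lengths. The JSON field name and the padding scheme are assumptions made for illustration, not any provider's documented implementation.

```python
# Minimal sketch of the random-padding mitigation described above.
# The "obfuscation" field name and the scheme are illustrative assumptions.
import json
import secrets
import string

def pad_chunk(chunk_text: str, max_pad: int = 32) -> str:
    """Wrap a streamed chunk with random-length filler the client discards."""
    pad_len = secrets.randbelow(max_pad + 1)
    filler = "".join(secrets.choice(string.ascii_letters) for _ in range(pad_len))
    return json.dumps({"content": chunk_text, "obfuscation": filler})

# Two identical tokens now produce different-sized wire payloads,
# breaking the correlation an eavesdropper relies on.
print(len(pad_chunk("hello")), len(pad_chunk("hello")))
```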
Even so, Microsoft advises users to avoid sensitive discussions over public Wi-Fi, to use a VPN, or to stick with non-streaming LLM models (sketched below).
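For the non-streaming option, a minimal sketch using the OpenAI Python SDK might look like this; the model name is a placeholder, and an API key is assumed to be configured in the environment. With streaming disabled, the full reply arrives in one response body, leaving no per-token packet rhythm to analyze.

```python
# Sketch of the "non-streaming" advice using the OpenAI Python SDK.
# Model name is a placeholder; OPENAI_API_KEY is assumed to be set.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[{"role": "user", "content": "A question best kept private"}],
    stream=False,  # one response body, not an incremental token stream
)
print(response.choices[0].message.content)
```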
The findings are accompanied by new evidence showing that several open LLMs remain vulnerable to manipulation, especially during multi-turn conversations.
Cisco AI Defense researchers found that even models built by major companies struggle to maintain security controls once the dialogue becomes complex.
Some models, they said, showed “a systemic inability… to maintain safety barriers during prolonged interactions.”
In 2024, reports emerged that an AI chatbot had leaked more than 300,000 files containing personally identifiable information, and that hundreds of LLM servers had been left exposed, raising questions about how secure AI chat platforms really are.
Traditional defenses, such as antivirus software or firewalls, cannot detect or block side-channel leaks like Whisper Leak, and these findings show that AI tools can unintentionally widen users' exposure to surveillance and data inference.
