Microsoft discovers a chilling new trick that spies on encrypted AI chats by reading traffic patterns invisible to ordinary users.

  • Microsoft’s “Whisper Leak” research exposes privacy flaws in encrypted AI systems
  • Encrypted AI chats can still leak clues about what users are discussing
  • Attackers can track conversation topics using packet size and timing

Microsoft has revealed a new type of cyberattack it calls “Whisper Leak”, which can expose the topics users discuss with AI chatbots even when the conversations are fully encrypted.

The company’s research suggests that attackers can study the size and timing of encrypted packets exchanged between a user and a large language model to infer what is being discussed.
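To make the idea concrete, here is a minimal sketch of that kind of traffic analysis on synthetic data. It is not Microsoft’s actual methodology: the simulate_session function, the three-topic setup, and the chosen features (mean and spread of packet sizes and inter-arrival gaps) are illustrative assumptions showing how an eavesdropper could guess a topic from ciphertext metadata alone.

```python
# Sketch: infer a conversation topic from packet sizes and timing only,
# without decrypting anything. Synthetic data; features are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def simulate_session(topic: int, n_packets: int = 60) -> np.ndarray:
    """Fake one encrypted chat session whose packet sizes (bytes) and
    inter-arrival gaps (ms) depend on a hypothetical topic label."""
    sizes = rng.normal(400 + 150 * topic, 60, n_packets)
    gaps = rng.exponential(30 + 10 * topic, n_packets)
    # Summary features an observer could compute from encrypted traffic.
    return np.array([sizes.mean(), sizes.std(), gaps.mean(), gaps.std(),
                     sizes.sum(), n_packets])

topics = rng.integers(0, 3, 600)  # three hypothetical topics
X = np.stack([simulate_session(t) for t in topics])
X_train, X_test, y_train, y_test = train_test_split(X, topics, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"Topic-inference accuracy from metadata only: {clf.score(X_test, y_test):.2f}")
```

Because the synthetic topics are deliberately separable, the classifier scores well above chance, which is the core privacy concern: encryption hides the words, but not the shape of the traffic.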
