“If someone can inject spurious instructions or facts into your AI’s memory, they will gain persistent influence over your future interactions”: Microsoft warns that AI recommendations are being “poisoned” to generate malicious results



  • Microsoft warns of new fraud tactic called AI recommendation poisoning
  • Attackers place hidden instructions in AI memory to bias purchasing advice
  • Attempts were detected in the real world; risk of companies making costly decisions based on compromised AI recommendations

You may have heard of SEO poisoning; However, experts have now warned about AI poisoning.

In a new blog post, Microsoft researchers detailed the emergence of a new class of AI-powered fraud, which revolves around compromising the memory of an AI assistant and posing a persistent threat.



Leave a Comment

Your email address will not be published. Required fields are marked *