Researchers poison their own data so that any AI that steals it produces ruined results



  • Researchers from China and Singapore proposed AURA (Active Utility Reduction through Adulteration) to protect GraphRAG systems
  • AURA deliberately poisons proprietary knowledge graphs so that stolen data produces hallucinations and incorrect answers
  • Correct results require a secret key; testing showed ~94% effectiveness in degrading the utility of stolen knowledge graphs (see the sketch after this list)

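The article does not spell out AURA's mechanics, but the idea in the summary can be illustrated with a toy sketch: decoy facts are mixed into the knowledge graph, and a secret key lets the legitimate owner filter them out at retrieval time, while a thief cannot. Everything below (the keyed-hash tagging, the adulterate and retrieve helpers, the example triples) is an illustrative assumption, not the paper's actual algorithm.

```python
import hashlib

SECRET_KEY = b"owner-only-key"  # assumed: held by the graph owner, never shipped with the graph


def tag(triple, key):
    """Keyed fingerprint of a (subject, relation, object) triple; only the key holder can recompute it."""
    return hashlib.sha256(key + "|".join(triple).encode()).hexdigest()


def adulterate(genuine, decoys, key):
    """Mix genuine triples with decoys; return the poisoned graph plus the fingerprints of genuine triples."""
    allowlist = {tag(t, key) for t in genuine}
    return genuine + decoys, allowlist


def retrieve(graph, subject, key=None, allowlist=None):
    """Toy GraphRAG retrieval step: match triples by subject.
    With the key, decoys are filtered out; without it, they leak into the LLM context."""
    hits = [t for t in graph if t[0] == subject]
    if key is not None and allowlist is not None:
        hits = [t for t in hits if tag(t, key) in allowlist]
    return hits


genuine = [("DrugX", "treats", "DiseaseY")]
decoys = [("DrugX", "treats", "DiseaseZ"), ("DrugX", "causes", "DiseaseY")]  # plausible but false
graph, allowlist = adulterate(genuine, decoys, SECRET_KEY)

print("owner:", retrieve(graph, "DrugX", SECRET_KEY, allowlist))  # genuine facts only
print("thief:", retrieve(graph, "DrugX"))                         # poisoned context, wrong answers
```

A stolen copy of the graph looks and behaves like a normal knowledge base, but it feeds the retriever false facts; only queries made with the owner's key recover the clean triples.
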
Researchers at universities in China and Singapore have come up with a creative way to deter the theft of data used in generative AI: rather than trying to block the theft itself, they make the stolen data useless to the thief.

Today’s large language models (LLMs) rely on two key ingredients, among others: training data and retrieval-augmented generation (RAG).
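
For readers unfamiliar with the second ingredient: a RAG system fetches relevant documents (or, in GraphRAG, facts from a knowledge graph) at query time and hands them to the model as context, instead of relying only on what was learned during training. The snippet below is a minimal, assumed illustration; the keyword retriever and the placeholder generate() function stand in for real components.

```python
DOCUMENTS = [
    "AURA adulterates proprietary knowledge graphs with decoy facts.",
    "GraphRAG retrieves facts from a knowledge graph before generating an answer.",
]


def retrieve_docs(query, docs, k=1):
    """Rank documents by naive keyword overlap with the query (a real system would use embeddings)."""
    words = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(words & set(d.lower().split())))[:k]


def generate(prompt):
    """Placeholder for the LLM call; a real system would send this prompt to a model."""
    return f"[model answer grounded in]\n{prompt}"


def rag_answer(query):
    """RAG: retrieve context at query time, then condition generation on it."""
    context = "\n".join(retrieve_docs(query, DOCUMENTS))
    return generate(f"Context:\n{context}\n\nQuestion: {query}")


print(rag_answer("What does GraphRAG retrieve?"))
```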


