- Researchers from China and Singapore proposed AURA (Active Utility Reduction via Adulteration) to protect proprietary knowledge graphs in GraphRAG systems
- AURA deliberately poisons proprietary knowledge graphs so that stolen data produces hallucinations and incorrect answers
- Correct results require a secret key; testing showed roughly 94% effectiveness in degrading the utility of stolen KGs
Researchers at universities in China and Singapore have come up with a creative way to make stolen generative AI data useless to thieves.
Two elements are central to today’s large language models (LLMs): training data and retrieval-augmented generation (RAG).
Training data teaches an LLM how language works and gives it broad knowledge up to a cutoff point. It does not give the model access to new information, private documents, or rapidly changing facts. Once training is completed, that knowledge is frozen.
RAG, on the other hand, exists because many real questions depend on current, specific, or proprietary data (such as company policies, recent news, internal reports, or specialized technical documents). Instead of retraining the model every time the data changes, RAG allows the model to obtain relevant information on demand and then write a response based on it.
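To make the mechanics concrete, here is a minimal sketch of that retrieve-then-generate flow. The documents, the word-overlap scoring, and the prompt format are all illustrative assumptions, standing in for the embedding search and LLM call a production RAG system would actually use.

```python
# A minimal, self-contained sketch of the RAG flow described above.
# The documents, scoring function, and prompt format are illustrative
# assumptions, not part of any specific RAG product.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (a stand-in
    for the embedding similarity search a real RAG system would use)."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Prepend retrieved passages so the model answers from fresh data
    instead of its frozen training knowledge."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "The Q3 security policy requires MFA for all remote logins.",
    "The cafeteria menu changes every Monday.",
    "Incident reports must be filed within 24 hours.",
]
print(build_prompt("What does the security policy require for remote logins?", docs))
```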
In 2024, Microsoft came up with GraphRAG, a version of RAG that organizes retrieved information as a knowledge graph instead of a flat list of documents. This helps the model understand how entities, facts, and relationships connect to each other. As a result, AI can answer more complex questions, follow links between concepts, and reduce contradictions by reasoning about structured relationships rather than isolated text.
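GraphRAG's advantage is easiest to see with a toy example. The sketch below stores facts as (subject, relation, object) triples and answers a two-hop question by chaining edges; the entities and relations are invented purely for illustration.

```python
# A toy illustration of why a knowledge graph helps: facts are stored
# as (subject, relation, object) triples, so the system can follow
# links between entities instead of matching isolated text chunks.
# The entities and relations below are invented for the example.

triples = [
    ("AcmeCorp", "acquired", "WidgetCo"),
    ("WidgetCo", "manufactures", "industrial sensors"),
    ("AcmeCorp", "headquartered_in", "Singapore"),
]

def neighbors(entity: str):
    """Return all facts directly connected to an entity."""
    return [(s, r, o) for s, r, o in triples if s == entity or o == entity]

# Two-hop question: "What does the company AcmeCorp acquired manufacture?"
# A flat document list might never place both facts in one chunk; the
# graph answers it by chaining two edges.
for s, r, o in neighbors("AcmeCorp"):
    if r == "acquired":
        for s2, r2, o2 in neighbors(o):
            if r2 == "manufactures":
                print(f"{o} manufactures {o2}")  # -> industrial sensors
```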
Because these knowledge graphs can be quite expensive to build and maintain, they are an attractive target for cybercriminals, nation-states, and other malicious actors.
In their research paper, titled Making Theft Useless: Adulteration-Based Protection of Proprietary Knowledge Graphs in GraphRAG Systems, authors Weijie Wang, Peizhuo Lv, et al. proposed a defense mechanism called Active Utility Reduction via Adulteration, or AURA, which poisons the knowledge graph (KG) so that an LLM querying a stolen copy gives incorrect answers and hallucinates.
The only way to get correct answers is to hold a secret key. The researchers said the system is not without flaws, but reported that it degrades the utility of a stolen KG in roughly 94% of cases.
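The paper's full mechanism is more involved, but the core idea can be illustrated with a deliberately simplified sketch. Everything below (the HMAC tagging, the key, the decoy triples) is a hypothetical illustration of adulteration plus key-based filtering, not AURA's actual algorithm.

```python
# A purely conceptual sketch of the adulteration idea, NOT the authors'
# actual AURA algorithm. Fake triples are mixed into the graph, and a
# keyed HMAC tag lets a legitimate deployment filter them out; a thief
# without the key cannot tell genuine facts from decoys and retrieves
# a polluted mix.
import hmac, hashlib

SECRET_KEY = b"owner-held-secret"  # hypothetical key held by the KG owner

def tag(triple: tuple[str, str, str], key: bytes) -> str:
    """Keyed fingerprint of a triple; only the key holder can recompute it."""
    msg = "|".join(triple).encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

genuine = [("AcmeCorp", "acquired", "WidgetCo")]
decoys  = [("AcmeCorp", "acquired", "BoltWorks"),   # plausible but false
           ("AcmeCorp", "acquired", "NutsInc")]

# Published (adulterated) KG: genuine triples carry valid tags, decoys do not.
published = [(t, tag(t, SECRET_KEY)) for t in genuine] + \
            [(t, tag(t, b"wrong-key")) for t in decoys]

# Legitimate use: keep only triples whose tag verifies under the secret key.
clean = [t for t, sig in published
         if hmac.compare_digest(sig, tag(t, SECRET_KEY))]
print("key holder sees:", clean)                 # only the true fact
print("thief sees:", [t for t, _ in published])  # truth buried in decoys
```

The design point is the asymmetry: filtering is cheap for the key holder, while to anyone without the key the genuine and decoy facts are indistinguishable, so the stolen graph steadily misleads the model that queries it.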
“By degrading the usefulness of the stolen KG, AURA offers a practical solution to protect intellectual property in GraphRAG,” the authors stated.
Via The Register