- Meta AI was assigning unique identifiers to prompts and responses
- The servers were not checking whether the requesting user was authorized to access these identifiers.
- The vulnerability was fixed at the end of January 2025
A bug that could have exposed users' prompts and AI-generated responses on the Meta AI platform has been patched.
The bug stemmed from the way Meta AI assigned identifiers to both prompts and responses.
When a logged-in user edits a previous prompt to get a different response, Meta assigns both the prompt and the response a unique identifier. By changing that number, an attacker could get Meta's servers to return another person's prompts and results.
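This class of flaw is known as an insecure direct object reference. The sketch below is purely illustrative: the data model, identifiers, and function are hypothetical, not Meta's actual code, and simply show how returning a record by a guessable numeric ID without an ownership check leaks other users' data.

```python
# Hypothetical illustration of an insecure direct object reference.
# The store, IDs, and function names are made up for this example.

# Toy store of prompt/response pairs keyed by a sequential numeric ID.
CONVERSATIONS = {
    1001: {"owner": "alice", "prompt": "Summarize my contract", "response": "..."},
    1002: {"owner": "bob", "prompt": "Cheap VPN recommendations?", "response": "..."},
}

def get_conversation(requesting_user: str, conv_id: int) -> dict:
    """Vulnerable lookup: returns the record to whoever supplies the ID."""
    # BUG: requesting_user is never compared against the record's owner,
    # so changing conv_id from 1001 to 1002 leaks another user's data.
    record = CONVERSATIONS.get(conv_id)
    if record is None:
        raise KeyError("not found")
    return record

# Because the IDs are sequential and easy to guess, an attacker logged in
# as "alice" can simply iterate over nearby numbers:
for conv_id in range(1000, 1005):
    try:
        print(get_conversation("alice", conv_id))
    except KeyError:
        pass
```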
No signs of abuse so far
The bug was discovered by Sandeep Hodkasia, a security researcher and the founder of AppSecure, in late December 2024. He reported it to Meta, which deployed a fix on January 24, 2025 and paid him a $10,000 bug bounty for his trouble.
Hodkasia said the prompt numbers that Meta's servers were generating were easy to guess, but apparently no threat actors exploited the flaw before it was addressed.
Essentially, this means that Meta's servers were not verifying whether the requesting user was actually authorized to see the content.
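Continuing the hypothetical example above, the fix amounts to comparing the record's owner against the authenticated user before returning anything. This is only a sketch of the general remedy, not Meta's real implementation.

```python
def get_conversation_fixed(requesting_user: str, conv_id: int) -> dict:
    """Fixed lookup: the caller only receives records they own."""
    record = CONVERSATIONS.get(conv_id)  # same toy store as in the sketch above
    # Treat "missing" and "not yours" identically, so valid IDs cannot be
    # enumerated by probing for different error responses.
    if record is None or record["owner"] != requesting_user:
        raise KeyError("not found")
    return record
```

Using random, non-sequential identifiers would also make IDs harder to guess, but the ownership check is the essential control.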
This is clearly problematic in several ways, the most obvious being that many people share confidential information with chatbots these days.
Business documents, contracts, reports, and personal information are uploaded to LLMs every day, and in many cases people use AI tools as de facto psychotherapists, sharing intimate details and private confessions.
This information can be abused, among other things, in highly personalized phishing attacks, which could lead to the deployment of infostealers, identity theft, or even ransomware.
For example, if a threat actor knows that a person was asking the AI about cheap VPN options, they could send that person an email offering a great, affordable product that is in fact nothing more than a backdoor.
Via TechCrunch