- Security researchers found a way to abuse Meta's Llama framework for remote code execution
- Meta addressed the problem in early October 2024
- The problem was using pickle as a serialization format for socket communication
Meta's Llama large language model (LLM) framework had a vulnerability that could have allowed threat actors to execute arbitrary code on the affected server, experts warned.
Cybersecurity researchers at Oligo Security published an in-depth analysis of a flaw tracked as CVE-2024-50050, which carries a severity score of 6.3 (medium) according to the National Vulnerability Database (NVD).
The flaw was discovered in a component called Llama Stack, which is designed to streamline the deployment, scaling, and integration of large language models.
Oligo described the affected version as "vulnerable to deserialization of untrusted data, meaning that an attacker can execute arbitrary code by sending malicious data that gets deserialized."
NVD describes the flaw as follows: "Llama Stack prior to revision 7a8aa775e5a267cf8660d83140011a0b7f91e005 used pickle as a serialization format for socket communication, potentially allowing for remote code execution."
"Socket communication has been changed to use JSON instead," it adds.
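To make the attack class concrete, here is a minimal, self-contained Python sketch of the pattern the advisories describe. This is not Llama Stack's actual code; the `Malicious` class and the echoed command are illustrative. It shows why unpickling bytes received from an untrusted peer is equivalent to letting that peer run code:

```python
import os
import pickle

# Hypothetical attacker-side payload: pickle lets a class dictate what
# happens when it is deserialized via __reduce__. Here, deserialization
# is turned into a call to os.system with an attacker-chosen command.
class Malicious:
    def __reduce__(self):
        return (os.system, ("echo attacker code runs here",))

payload = pickle.dumps(Malicious())

# A server that blindly unpickles bytes read off a socket executes the
# attacker's command the moment it deserializes the message.
pickle.loads(payload)  # runs: echo attacker code runs here
```

Because the payload is just bytes, any endpoint that feeds socket data straight into `pickle.loads` is exposed, regardless of what the application does with the resulting object afterward.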
The researchers responsibly disclosed the flaw to Meta on September 24, and the company fixed it on October 10 by releasing version 0.0.41. The Hacker News notes that the flaw has also been remediated in pyzmq, a Python library that provides access to the ZeroMQ messaging library.
Alongside the patch, Meta issued a security advisory telling the community it had fixed a remote code execution risk associated with using pickle as a serialization format for socket communication. The fix was to switch to the JSON format.
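The safer pattern the fix adopts can be sketched in a few lines. Again, this is an assumed illustration rather than Meta's actual implementation: JSON parsing can only ever produce plain data types (dicts, lists, strings, numbers, booleans), so deserializing untrusted bytes cannot trigger code execution:

```python
import json

# Untrusted bytes received over a socket, now expected to be JSON.
raw = b'{"method": "inference", "params": {"prompt": "hello"}}'

# json.loads yields only plain data structures; there is no hook for a
# sender to make deserialization execute arbitrary code.
message = json.loads(raw)
print(message["method"])  # the server dispatches on plain data, not live objects
```

The trade-off is that the server must explicitly validate and reconstruct any richer objects from the parsed data, which is exactly what keeps the attacker out of the loop.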
Llama, or Large Language Model Meta AI, is a family of large language models developed by the social media giant Meta. These models are designed for natural language processing (NLP) tasks such as text generation, summarization, translation, and more.