- Reports claim Zuckerberg pushed for AI deployment despite employee objections
- Employees allegedly discussed ways to conceal how the company sourced its AI training data
- Court filings suggest Meta took steps, ultimately unsuccessful, to mask its AI training activities
Meta faces a class-action lawsuit alleging copyright infringement and unfair competition over the training of its AI model, Llama.
According to court documents published by vx-underground, Meta allegedly downloaded almost 82 TB of pirated books from shadow libraries such as Anna's Archive, Z-Library, and LibGen to train its AI systems.
Internal discussions reveal that some employees raised ethical concerns as early as 2022, with one researcher explicitly stating: "I do not think we should use pirated material," while another said: "Using pirated material should be beyond our ethical threshold."
Despite these concerns, Meta appears to have moved forward and taken steps to avoid detection. In April 2023, one employee warned against using corporate IP addresses to access the pirated content, while another remarked that "torrenting from a corporate laptop doesn't feel right," adding a smiling emoji.
Reports also claim that Meta employees discussed ways to prevent Meta's infrastructure from being directly linked to the downloads, raising questions about whether the company knowingly violated copyright law.
According to reports, in January 2023, Meta CEO Mark Zuckerberg attended a meeting in which he pushed for the company's AI rollout despite internal objections.
Meta is not alone in facing legal challenges over AI training. OpenAI has been sued multiple times for allegedly using copyrighted books without permission, including a case filed by The New York Times in December 2023.
Nvidia is also under legal scrutiny for training its NeMo model on almost 200,000 books, and a former employee revealed that the company scraped more than 426,000 hours of video daily for AI development.
And in case you missed it, OpenAI recently claimed that DeepSeek improperly obtained data from its models, highlighting the ongoing ethical and legal dilemmas surrounding AI training practices.
Via Tom's Hardware