- Meta will soon begin training its AI models on EU users' data
- Meta AI will be trained on all users' interactions and the public content posted on Meta's social platforms
- The tech giant resumes its AI training plan after halting the rollout amid concerns from EU data regulators
Meta has resumed its plan to train its AI models on the data of EU users, the company announced on Monday, April 14, 2025.
All public posts and comments shared by adults on Meta's social platforms will soon be used to train Meta AI, along with all the interactions users exchange directly with the chatbot.
This comes after the tech giant successfully launched Meta AI in the EU in March, almost a year after the company halted the rollout amid growing concerns from EU data regulators.
“We believe we have a responsibility to build AI that's not just available to Europeans, but is actually built for them. That's why our generative AI models are trained on a variety of data so they can understand the incredible and diverse nuances and complexities that make up European communities,” Meta wrote in the official announcement.
This type of training, the company points out, is not exclusive to Meta or to Europe. Meta AI collects and processes the same information in every region where it is available.
As mentioned earlier, Meta AI will be trained on all public posts and adult users' interaction data. Public data from the accounts of EU users under 18 will not be used for training purposes.
Meta also promises that no private messages people share on Messenger and WhatsApp will be used for AI training purposes.
Starting this week, all Meta users in the EU will begin receiving notifications about the new AI training terms, either in-app or by email.
These notifications will include a link to a form where people can object to their data being used to train Meta AI.
“We have made this objection form easy to find, read, and use, and we will honor all objection forms we have already received, as well as those newly submitted,” the company explains.
It is crucial to understand that once your data is fed into an LLM's training set, you completely lose control over it, since these systems make it very difficult (if not impossible) to exercise the GDPR right to be forgotten.
This is why privacy experts such as Proton, the provider behind some of the best VPN and encrypted email apps, urge people in Europe who are concerned about their privacy to opt out of Meta AI training.
“We recommend completing and submitting this form to protect your privacy. It's difficult to predict what this data could be used for in the future, so it's better to be safe than sorry,” Proton wrote in a LinkedIn post.
Meta's announcement comes at the same time that Irish data regulators have opened an investigation into X's Grok AI. Specifically, the investigation seeks to determine whether Elon Musk's platform uses publicly accessible X posts to train its generative AI models in accordance with GDPR rules.