- The UK data watchdog is formally investigating X and xAI over Grok’s creation of non-consensual deepfake images.
- Grok reportedly generated millions of explicit AI images, including some that appear to depict minors.
- The investigation will examine possible violations of the GDPR and a lack of safeguards.
The UK data protection regulator has launched a wide-ranging investigation into X and xAI after reports that the Grok chatbot was generating indecent deepfake images of real people without their consent. The Information Commissioner’s Office is investigating whether the companies breached the GDPR by allowing Grok to create and share sexually explicit AI images, including some that appear to depict children.
“The reports about Grok raise deeply worrying questions about how people’s personal data has been used to generate intimate or sexualized images without their knowledge or consent, and whether the necessary safeguards were in place to prevent this,” ICO executive director of regulatory risk and innovation William Malcolm said in a statement.
The regulators are not just looking at what users did, but at what X and xAI failed to prevent. The move follows a raid last week on X’s Paris office by French prosecutors as part of a parallel criminal investigation into the alleged distribution of deepfakes and child abuse images.
The scale of the incident makes it impossible to dismiss as an isolated case of a few bad prompts. Investigators estimate that Grok generated about three million sexualized images in less than two weeks, including tens of thousands that appear to depict minors. The GDPR penalty structure shows what is at stake: breaches can result in fines of up to £17.5 million or 4% of global turnover.
The Grok problem
X and xAI have insisted they are implementing stricter safeguards, although details remain limited. X recently announced new measures to block certain image generation pathways and limit the creation of altered photographs involving minors. But once this type of content starts circulating, especially on a platform as large as X, it becomes almost impossible to remove completely.
Politicians are now calling for systemic legislative changes. A group of MPs led by Labour’s Anneliese Dodds has urged the government to introduce AI legislation requiring developers to carry out comprehensive risk assessments before releasing tools to the public.
As AI image generation becomes more common, the line between genuine and manufactured content is blurring. That shift affects anyone on social media, not just celebrities or public figures. When tools like Grok can craft convincing explicit images from an ordinary selfie, the stakes of sharing personal photos change.
Privacy becomes harder to protect when technology outpaces society’s ability to respond, no matter how careful you are. Regulators around the world are struggling to keep up. The UK investigation into X and xAI may take months, but it is likely to shape how AI platforms are expected to behave.
Expect a push for stricter, enforceable safety-by-design requirements, along with more pressure on companies to be transparent about how their models are trained and what guardrails are in place.
The UK investigation suggests regulators are losing patience with the “move fast and break things” approach to public safety. When it comes to AI that can upend people’s lives, there is momentum for real change. And when AI makes it easy to distort someone’s image, the burden of protection falls on developers, not the public.