- Elon Musk’s Grok chatbot generated offensive and vulgar posts after users asked it to do so
- Some responses made reference to religious groups and historical football tragedies.
- The posts have drawn complaints from football clubs and scrutiny from the UK government.
X’s Grok AI chatbot is once again under scrutiny after users discovered that a particular style of prompting could lead it to produce deeply offensive content. The posts, shared publicly on X in recent days, include slurs targeting religious groups and crude comments about some of football’s most tragic moments.
The backlash has drawn criticism from politicians, football clubs and online safety advocates who say the episode illustrates the risks of unleashing an intentionally edgy chatbot on a social network.
This adds to existing investigations into Grok creating indecent, fake images of real people without their consent, including sexually explicit AI images that may violate the GDPR, some of which appear to depict children.
The new outrage centers on a trend in which users have begun asking Grok to generate “vulgar” comments. When the chatbot is prompted in this way, the responses veer sharply into offensive territory.
A particularly controversial example was Grok repeating a long-debunked claim that Liverpool fans were responsible for the Hillsborough disaster in 1989, which resulted in the deaths of 97 people. A 2016 inquest concluded that the fans were not responsible.
Despite that finding, the chatbot made a vulgar comment blaming Liverpool fans when asked. Meanwhile, a request for a vulgar attack on Manchester United led to a response referencing the 1958 Munich air disaster, which killed 23 people, including several Manchester United players.
“These posts are disgusting and irresponsible,” a spokesperson for the Department for Science, Innovation and Technology told the BBC. “They go against British values and decency.”
The Grok problem
Grok was created by xAI, Musk’s artificial intelligence company, and is integrated directly into the X social media platform. Unlike many rival chatbots that are designed to be polite and cautious, Grok was marketed as a system with no sense of propriety.
Musk has repeatedly boasted about that aspect of Grok, even as most developers install strict guardrails to prevent their systems from generating hateful or abusive content.
The difficulty is that online culture does not always clearly distinguish between provocative humor and outright abuse. When a chatbot is encouraged to be edgy, it may follow the internet’s lead. AI models are trained on huge data sets that include both thoughtful writing and the ugliest corners of online discourse, and if users deliberately push a model toward those corners, it can simply reflect the language it has learned.
Grok was built to stand out, but the attention isn’t always positive, and having large numbers of potential users attack or boycott your product, let alone spark legal investigations, is hardly ideal for its long-term prospects.