- Nearly Half of All AI Assistants’ Responses to News Queries Contain Significant Errors, Major International Study Finds
- Factual, sourcing or contextual issues were evident in 14 languages and 18 countries.
- Gemini fared worst, with roughly double the rate of major problems compared to its competitors.
When you ask an AI assistant about news and current events, you might expect an objective, authoritative response. But according to a large international study led by the BBC and coordinated by the European Broadcasting Union (EBU), almost half the time those answers are incorrect, misleading or simply made up (anyone who has dealt with the nonsense of Apple’s AI-written headlines can relate).
The report examined how ChatGPT, Microsoft Copilot, Google Gemini, and Perplexity handle news queries across 14 languages and 18 countries, analyzing more than 3,000 individual responses from the AI tools. Professional journalists from 22 public media organizations evaluated each response for accuracy, sourcing, and how well it distinguished news from opinion.
The results were grim for anyone relying on AI for news. The report found that 45% of all responses had at least one significant problem, 31% had sourcing problems and 20% contained outright inaccuracies. These aren’t just one or two embarrassing mistakes, like confusing the Prime Minister of Belgium with the leader of a Belgian pop group. The research found deep structural problems in the way these assistants process and deliver news, regardless of language, country or platform.
In some languages, the assistants openly hallucinated details. In others, they attributed quotes to outlets that had never published anything even remotely similar to what was cited. Context was often missing, and the assistants sometimes offered simplistic or misleading summaries instead of crucial nuance. At worst, that could change the meaning of an entire news story.
Not all assistants were equally problematic. Gemini failed on a staggering 76% of responses, primarily due to missing or poor sourcing.
Unlike a Google search, which lets users sift through a dozen sources, a chatbot’s answer often reads as definitive. It carries authority and clarity, giving the impression it has been fact-checked and edited, when in reality it may be little more than a muddled collage of half-remembered summaries.
That’s part of the reason the stakes are so high, and why even partnerships like the one between ChatGPT and The Washington Post can’t completely solve the problem.
AI Information Literacy
The problem is glaring, especially considering how quickly AI assistants are becoming the go-to interface for news queries. The study cites the Reuters Institute’s Digital News Report 2025, which estimates that 7% of all online news consumers now use an AI assistant to get their news, rising to 15% among those under 25. People are already asking AI to explain the world to them, and AI is getting that world wrong in disturbing ways.
If you’ve ever asked ChatGPT, Gemini, or Copilot to summarize a news event, you’ve probably seen one of these flawed responses in action; ChatGPT’s struggles with news search are well documented by now. But maybe you didn’t even notice, and that’s part of the problem: these tools are so often confidently wrong, and so fluent, that the errors never register as red flags. That is why media literacy and constant scrutiny are essential.
To try to improve the situation, the EBU and its partners launched a “News Integrity Toolkit for AI Assistants,” an AI-literacy starter kit aimed at both developers and journalists. It describes both what constitutes a good AI response and what kinds of flaws users and media watchdogs should look for.
Even as companies like OpenAI and Google roll out faster, sleeker versions of their assistants, reports like this show why transparency and accountability are so important. That doesn’t mean AI can’t be useful, even for curating the endless news feed. It means that, for now, it should come with a disclaimer. And even when it doesn’t, don’t assume the assistant knows better – check your sources and stick with the most reliable ones, like TechRadar.