Hi chatbot, is this true? AI ‘fact checks’ sow misinformation


xAI and Grok logos are seen in this illustration taken on February 16, 2025. – Reuters

As misinformation exploded during India’s four-day conflict with Pakistan, social media users turned to an AI chatbot for verification, only to encounter more falsehoods, underscoring its unreliability as a fact-checking tool, AFP reported.

With tech platforms cutting back on human fact-checkers, users are relying more and more on AI-powered chatbots, including xAI’s Grok, OpenAI’s ChatGPT and Google’s Gemini, in search of reliable information.

“Hey @Grok, is this true?” has become a common query on Elon Musk’s platform X, where the AI assistant is built in, reflecting the growing trend of seeking instant debunks on social media.

But the responses are often riddled with misinformation.

Grok, now under renewed scrutiny for inserting “white genocide”, a far-right conspiracy theory, into unrelated queries, wrongly identified old video footage from Sudan’s Khartoum airport as a missile strike on Pakistan’s Nur Khan airbase during the country’s recent conflict with India.

Unrelated footage of a building on fire in Nepal was misidentified as “likely” showing Pakistan’s military response to Indian strikes.

“The growing reliance on Grok as a fact-checker comes as X and other major tech companies have scaled back investments in human fact-checkers,” McKenzie Sadeghi, a researcher with the disinformation watchdog NewsGuard, told AFP.

“Our research has repeatedly found that AI chatbots are not reliable sources for news and information, particularly when it comes to breaking news,” Sadeghi warned.

‘Fabricated’

NewsGuard’s research found that 10 leading chatbots were prone to repeating falsehoods, including Russian disinformation narratives and false or misleading claims related to the recent Australian election.

In a recent study of eight AI search tools, the Tow Center for Digital Journalism at Columbia University found that chatbots were “generally bad at declining to answer questions they couldn’t answer accurately, offering incorrect or speculative answers instead.”

When AFP fact-checkers in Uruguay asked Gemini about an AI-generated image of a woman, it not only confirmed the image’s authenticity but fabricated details about her identity and where it was taken.

Grok recently described a supposed video of a giant anaconda swimming in the Amazon River as “genuine”, even citing credible-sounding scientific expeditions to support its false claim.

In reality, the video was AI-generated, AFP fact-checkers in Latin America reported, noting that many users cited Grok’s assessment as evidence the clip was real.

Such findings have raised concerns, as surveys show online users are increasingly shifting from traditional search engines to AI chatbots to gather and verify information.

The shift also comes as Meta announced earlier this year that it was ending its third-party fact-checking program in the United States, turning over the task of debunking falsehoods to ordinary users under a model known as “Community Notes”, popularized by X.

Researchers have repeatedly questioned the effectiveness of “Community Notes” in combating falsehoods.

‘Biased answers’

Human fact-checking has long been a flashpoint in a hyperpolarized political climate, particularly in the United States, where conservative advocates maintain it suppresses free speech and censors right-wing content, something professional fact-checkers vehemently reject.

AFP currently works in 26 languages with Facebook’s fact-checking program, including in Asia, Latin America and the European Union.

The quality and accuracy of AI chatbots can vary depending on how they are trained and programmed, prompting concerns that their output may be subject to political influence or control.

Musk’s xAI recently blamed an “unauthorized modification” for causing Grok to generate unsolicited posts referencing “white genocide” in South Africa.

When AI expert David Caswell asked Grok who might have modified its system prompt, the chatbot named Musk as the “very likely” culprit.

Musk, the South Africa-born billionaire backer of President Donald Trump, has previously promoted the unfounded claim that South Africa’s leaders were “openly pushing for genocide” of white people.

“We have seen the way AI assistants can either fabricate results or give biased answers after human coders specifically change their instructions,” Angie Holan, director of the International Fact-Checking Network, told AFP.

“I am especially concerned about the way Grok has handled requests concerning very sensitive matters after receiving instructions to provide pre-authorized answers.”
