The authenticity of content generated by ChatGPT has been questioned after recent investigations found that the latest version, GPT-5.2, is pulling content from Grokipedia, an AI-generated online encyclopedia launched by Elon Musk's xAI in 2025.
This revelation has sparked alarm among researchers and journalists over the reliability of results produced by artificial intelligence (AI) platforms. What makes it more worrying is how heavily internet users rely on these tools for information.
A report in The Guardian noted that GPT-5.2 referenced Grokipedia several times in its answers to various questions, including sensitive topics such as Iran's political landscape and historical issues related to Holocaust denial.
In more than a dozen test queries, Grokipedia was cited nine times, suggesting that it is integrated into the model’s information set.
Notably, Grokipedia positions itself as a competitor to Wikipedia but relies entirely on AI for content creation and updating, raising concerns about the biases and inaccuracies embedded in AI-generated content.
The Musk-owned encyclopedia has previously been singled out by critics for promoting right-wing perspectives on controversial social and political issues.
It is worth noting that ChatGPT did not reference Grokipedia when asked about topics rife with controversial claims, such as the January 6 Capitol attack or misinformation about HIV/AIDS.
Grokipedia appeared in ChatGPT's answers mostly on obscure questions where it makes claims that go beyond established facts, such as alleged links between an Iranian telecommunications company and the supreme leader's office.
This problem is not limited to ChatGPT; other large language models (LLMs), including Anthropic's Claude, have also cited Grokipedia on various topics.
OpenAI explained that its models draw on a variety of sources and apply safety filters to mitigate the spread of harmful information.
Experts warned that reliance on untrustworthy sources could mislead users and reinforce misinformation, highlighting the need for rigorous evaluation of sources in AI development.