Large language models have an awkward history with telling the truth, especially when they can't provide a real answer. Hallucinations have dogged AI chatbots since the technology debuted a few years ago. But GPT-5 seems to be taking a new, humbler approach to not knowing an answer: admitting it.
Although most chatbot responses are accurate, it's hard to interact with an AI chatbot for long before it delivers a partial or complete fabrication in response. The AI projects the same confidence in its answers regardless of their accuracy. AI hallucinations have tripped up users and have even led to embarrassing moments for developers during demos.
OpenAI had hinted that the new version of ChatGPT would be more willing to declare ignorance rather than fabricate an answer, and a viral X post from Kol Tregaskes drew attention to the novel sight of ChatGPT saying: "I don't know, and I can't reliably find out."
GPT-5 says ‘I don’t know’. I love this, thanks. pic.twitter.com/k6snfkqzB (August 18, 2025)
Technically, hallucinations are baked into how these models work. They aren't retrieving facts from a database, even if it seems that way; they're predicting the next most likely word based on patterns in language. When you ask about something obscure or complicated, the AI guesses the words most likely to answer it rather than performing a classic search-engine lookup. Hence the appearance of completely invented sources, statistics, or citations.
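To see why fluent fabrication falls out of that design, consider a toy next-word predictor. This is a minimal sketch, not anything resembling OpenAI's actual models: a simple bigram table, trained on a few hypothetical sentences, that samples whichever word usually comes next. Like a real language model at vastly larger scale, it optimizes for a plausible continuation, with no notion of whether the result is true.

```python
import random

# Hypothetical training snippets; a real model learns from vast amounts of text.
corpus = "the study found that the study showed that the data found".split()

# Count which word tends to follow which (a crude stand-in for an LLM's
# learned statistics over language).
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, length=6):
    """Sample likely next words one at a time: plausibility, not truth."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))  # e.g. "the study found that the data found"
```

The output reads like a sentence from the training data but may describe a "study" that never existed, which is the hallucination problem in miniature.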
But GPT-5's ability to stop and say "I don't know" reflects an evolution in how AI models deal with their limitations, at least in terms of their answers. A candid admission of ignorance replaces fictional filler. It may seem anticlimactic, but it does far more to make the AI seem reliable.
Clarity about hallucinations
Trust is crucial for AI chatbots. Why would you use them if you don't trust the answers? ChatGPT and other AI chatbots carry built-in warnings not to rely on their answers because of hallucinations, but there are always stories of people who ignore the warning and land themselves in hot water. If the AI simply says it can't answer a question, people may be more inclined to trust the answers it does provide.
Of course, there's still the risk that users interpret the model's doubt as failure. "I don't know" can seem like a bug rather than a feature if you don't realize the alternative is a hallucination, not the correct answer. Admitting uncertainty isn't how an AI that can supposedly do anything is expected to behave.
But it's arguably the most human thing ChatGPT could do in this case. OpenAI's proclaimed goal is artificial general intelligence, AI that can perform any intellectual task a human can. One of AGI's ironies is that imitating human thought means imitating our uncertainties as well as our abilities.
Sometimes the smartest thing you can do is say you don't know something. You can't learn if you refuse to admit there are things you don't know. And, at the very least, it avoids the spectacle of an AI telling you to eat rocks for your health.