ChatGPT is getting smarter, but its hallucinations are spiraling

  • OpenAI's latest models, o3 and o4-mini, hallucinate significantly more often than their predecessors
  • Greater model complexity may be producing more confident inaccuracies
  • High error rates raise concerns about the reliability of AI in real-world applications

Brilliant but unreliable people are a staple of fiction (and history). The same correlation may apply to AI, according to an investigation by OpenAI reported by The New York Times. Hallucinations, fabricated facts, and outright lies have been part of AI chatbots since their creation, and improvements to the models were supposed to make them less frequent.

OpenAI's latest flagship models, o3 and o4-mini, are designed to mimic human reasoning. Unlike their predecessors, which focused mainly on generating fluent text, OpenAI built o3 and o4-mini to work through problems step by step. OpenAI has boasted that o1 could match or exceed the performance of doctoral students in chemistry, biology, and mathematics. But the OpenAI report highlights some sobering results for anyone who takes ChatGPT's responses at face value.
