
OpenAI’s latest reasoning models for ChatGPT, o3 and o4-mini, hallucinate significantly more often than their predecessors despite improvements in other areas of performance. A hallucination occurs when an AI presents false or irrelevant information as if it were true.
TechCrunch reported on Sunday that OpenAI’s internal benchmark, PersonQA, revealed alarming hallucination rates: 33% for o3 and 48% for o4-mini.
Those figures are more than double those of the predecessor models: o1 hallucinated on 16% of questions and o3-mini on 14.8%.
Surprisingly, o3 and o4-mini exhibited more frequent hallucinations than even the non-reasoning model GPT-4o.
On April 16, OpenAI unveiled o3 and o4-mini, touting them as its most advanced reasoning models to date and the last standalone reasoning models planned for ChatGPT.
Both models excelled on mathematics, coding, and science tests. On university-level problems that require interpreting images and text together, o3 achieved 82.9% accuracy and o4-mini 81.6%.
On SWE-bench, a benchmark of coding ability, o3 and o4-mini scored 69.1% and 68.1% respectively, surpassing both the earlier o3-mini (49.3%) and the competing Claude 3.7 Sonnet (62.3%).
However, experts warn that high hallucination rates could undermine the reliability of these improved models.
Transluce, a nonprofit AI research institute, found evidence suggesting that o3 sometimes fabricates actions it claims to have taken while working out its answers.
Sarah Schwettmann, Transluce’s co-founder, told TechCrunch that o3’s high hallucination rate could limit its usefulness in practice.
OpenAI has yet to provide a clear explanation or solution for the high hallucination rates of o3 and o4-mini. The company acknowledged in a technical report that further research is necessary.