Friday, May 1, 2026

Advanced Yet Flawed: OpenAI’s o3 and o4-mini Under Scrutiny

Image: OpenAI o3 / screenshot of OpenAI CEO Sam Altman on X

OpenAI’s latest reasoning-based AI models, o3 and o4-mini, have shown a significant increase in hallucinations despite overall performance improvements. A hallucination occurs when an AI model presents false or irrelevant information as if it were true.

TechCrunch reported on Sunday that OpenAI’s internal benchmark test, PersonQA, revealed alarming hallucination rates: 33% for o3 and 48% for o4-mini.

These rates are at least double those of their predecessors: the previous models, o1 and o3-mini, had hallucination rates of 16% and 14.8%, respectively.
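The scoring method of OpenAI's internal PersonQA benchmark is not public, but the reported figures imply a simple ratio of hallucinated answers to total answers. A minimal sketch of that arithmetic, using only the rates quoted in the article:

```python
# Sketch of the hallucination-rate comparison. PersonQA's exact scoring is
# not public; the rates below are the figures reported in the article, and
# hallucination_rate() is a hypothetical helper for illustration.

def hallucination_rate(hallucinated: int, total: int) -> float:
    """Fraction of answers containing a false or fabricated claim."""
    return hallucinated / total

# Reported PersonQA rates, as fractions of answers.
rates = {"o1": 0.16, "o3-mini": 0.148, "o3": 0.33, "o4-mini": 0.48}

# Relative increase of each new model over its predecessor.
o3_increase = rates["o3"] / rates["o1"]              # ~2.06x
o4_mini_increase = rates["o4-mini"] / rates["o3-mini"]  # ~3.24x

print(f"o3 vs o1:           {o3_increase:.2f}x")
print(f"o4-mini vs o3-mini: {o4_mini_increase:.2f}x")
```

The comparison shows the jump is uneven: o3 roughly doubled its predecessor's rate, while o4-mini more than tripled it.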

Surprisingly, o3 and o4-mini exhibited more frequent hallucinations than even the non-reasoning model GPT-4o.

On April 16, OpenAI unveiled the o3 and o4-mini, touting them as the most advanced reasoning models to date and the final standalone AI reasoning models for ChatGPT.

Both models excelled in mathematics, coding, and science tests. They demonstrated impressive performance in university-level problems involving image and text interpretation, with o3 achieving 82.9% accuracy and o4-mini reaching 81.6%.

In the SWE-bench test of coding skills, o3 and o4-mini scored 69.1% and 68.1%, respectively, surpassing both the earlier o3-mini (49.3%) and Anthropic's competing Claude 3.7 Sonnet (62.3%).

However, experts warn that high hallucination rates could undermine the reliability of these improved models.

Transluce, a nonprofit AI research institute, found evidence that o3 sometimes fabricates actions it claims to have taken while deriving an answer, such as saying it ran code that it never executed.

Sarah Schwettmann, Transluce’s co-founder, told TechCrunch that o3’s high hallucination rate could make it less practical than other versions.

OpenAI has yet to provide a clear explanation or solution for the high hallucination rates of o3 and o4-mini. The company acknowledged in a technical report that further research is necessary.
