Tuesday, March 17, 2026

UN-STOPPABLE: North Korea Tests Hypersonic Missile Designed to PUNCH Through America’s Defenses

North Korea claims successful tests of hypersonic missiles aimed at enhancing military capabilities and deterrence ahead of APEC summit.

Dictatorship Union! North Korea’s Kim Jong-un Strengthens Ties with Laos

Kim Jong Un expresses intent to strengthen ties with Laos in a letter to President Sisoulith, enhancing bilateral cooperation across sectors.

Tesla Soars 2% Amid U.S.-China Tariff Tensions, Nasdaq Rebounds

U.S. tariffs on China escalate trade tensions, yet Tesla's stock rises amid a broader market rally, closing at $392.21.

Is o3 Really That Smart? Experts Question OpenAI’s AI Scores

OpenAI’s new inference model, o3 / Digital Today

TechCrunch reported that OpenAI’s o3 AI model is causing a stir in the tech world due to discrepancies between its benchmark scores and real-world performance.

When OpenAI unveiled o3 last December, they boldly claimed it could tackle 25% of FrontierMath problems.

OpenAI’s Chief Research Officer, Mark Chen, had said that every other publicly available model scored under 2% on FrontierMath, and that o3 could clear 25% when run with aggressive test-time compute settings.

However, Epoch AI, the organization behind FrontierMath, put o3 through its paces in an independent benchmark test. The model scored only around 10%, well below OpenAI’s touted figure.

This doesn’t necessarily mean OpenAI was being dishonest. Their December benchmark results included a lower bound that aligns with Epoch’s findings. Epoch also acknowledged potential differences in testing conditions and noted their use of an updated FrontierMath release for evaluation.

OpenAI’s technical staff member Wenda Zhou explained that the production version of o3 has been fine-tuned for real-world applications and boasts improved speed compared to the December demo. This optimization could account for the benchmark discrepancies.

Zhou noted that the model has been optimized to boost cost-effectiveness and overall utility, expressing confidence that this iteration represents a significant improvement.

The ARC Prize Foundation, which tested a pre-release version of o3, noted that the public o3 model has been tailored for chat and product use, making it a different model from the one it evaluated.

This controversy highlights a growing concern in the AI industry regarding benchmark reliability. Recently, Elon Musk’s xAI faced accusations of presenting misleading benchmark results for its AI model, Grok 3. Meta also stirred up controversy by touting benchmark scores from a model version that differed from the one it released.
