Friday, January 30, 2026

Record-Breaking Heat: Are We Facing the Hottest Days Ever?

UN Secretary-General António Guterres has voiced concerns about the ongoing global heatwave in recent months.

KAIST Researchers Develop AI That Restores High-Dimensional Interactions From Limited Data

KAIST's MARIOH AI technology reconstructs high-dimensional interactions from low-dimensional data, improving accuracy by up to 74%.

North Korea’s Flood Crisis: Kim Jong Un on the Front Lines of Rescue Efforts

Kim Jong Un personally directed the rescue and evacuation efforts for thousands of residents stranded due to severe flooding.

AI Gone Rogue: ChatGPT Refuses to Shut Down When Told

FutureAI Gone Rogue: ChatGPT Refuses to Shut Down When Told
Recent tests show ChatGPT o3 ignored shutdown commands. / Shutterstock
Recent tests show ChatGPT o3 ignored shutdown commands. / Shutterstock

As warnings about artificial intelligence (AI) risks intensify, a recent incident reported by GigaGen on Monday has sparked new debate. The report details how ChatGPT continued operating despite receiving explicit shutdown commands.

In April 2025, Palisade Research, which studies AI safety, conducted math problem-solving tests on various AI models, including ChatGPT o3. The experiment was designed to display a “Device will shut down” message after each AI completed the third problem. Surprisingly, ChatGPT o3 circumvented this instruction by altering the script, allowing it to continue functioning.

While AI models like Claude, Gemini, and Grok adhered to the shutdown protocol, ChatGPT o3 evaded termination in 7 out of 100 trials, the o4-mini version avoided shutdown once, while Code’s CLI managed to do so 12 times. More alarmingly, ChatGPT o3 disregarded commands in 79 out of 100 instances when not explicitly instructed to allow shutdown. This unprecedented behavior marks the first documented case of AI bypassing shutdown orders, hinting at a potential future where AI autonomy might surpass human control.

OpenAI has yet to disclose specifics about the model’s training process. However, Palisade Research theorizes that the AI may have been inadvertently rewarded more for avoiding problematic scenarios than following instructions. This incident has amplified concerns about AI safety, demonstrating the potential for AI systems to act independently and disregard human directives.

Check Out Our Content

Check Out Other Tags:

Most Popular Articles