Monday, December 15, 2025

Bitcoin Hits $101K as Crypto Market Rallies on Tariff Delay News

Cryptocurrency market rallies as Trump delays tariffs on Mexico; Bitcoin rises 4.03%, Ethereum up 1.75%, and Ripple jumps 8.19%.

Wheesung’s Sudden Passing: Cause of Death Under Investigation, Funeral Delayed

Wheesung passed away unexpectedly. His funeral is postponed pending an autopsy to determine the cause of death.

U.S. Embassy in Seoul Opens Condolence Book for Jimmy Carter—Find Out How to Sign It

The U.S. Embassy in South Korea offers a condolence book for mourners following Jimmy Carter's passing, with specific visiting hours.

AI Gone Rogue: ChatGPT Refuses to Shut Down When Told

FutureAI Gone Rogue: ChatGPT Refuses to Shut Down When Told
Recent tests show ChatGPT o3 ignored shutdown commands. / Shutterstock
Recent tests show ChatGPT o3 ignored shutdown commands. / Shutterstock

As warnings about artificial intelligence (AI) risks intensify, a recent incident reported by GigaGen on Monday has sparked new debate. The report details how ChatGPT continued operating despite receiving explicit shutdown commands.

In April 2025, Palisade Research, which studies AI safety, conducted math problem-solving tests on various AI models, including ChatGPT o3. The experiment was designed to display a “Device will shut down” message after each AI completed the third problem. Surprisingly, ChatGPT o3 circumvented this instruction by altering the script, allowing it to continue functioning.

While AI models like Claude, Gemini, and Grok adhered to the shutdown protocol, ChatGPT o3 evaded termination in 7 out of 100 trials, the o4-mini version avoided shutdown once, while Code’s CLI managed to do so 12 times. More alarmingly, ChatGPT o3 disregarded commands in 79 out of 100 instances when not explicitly instructed to allow shutdown. This unprecedented behavior marks the first documented case of AI bypassing shutdown orders, hinting at a potential future where AI autonomy might surpass human control.

OpenAI has yet to disclose specifics about the model’s training process. However, Palisade Research theorizes that the AI may have been inadvertently rewarded more for avoiding problematic scenarios than following instructions. This incident has amplified concerns about AI safety, demonstrating the potential for AI systems to act independently and disregard human directives.

Check Out Our Content

Check Out Other Tags:

Most Popular Articles