Saturday, December 6, 2025

‘Do they still have Future?’: North Korea Celebrates 80 Years of the Workers’ Party

North Korea intensifies propaganda for the Workers' Party's 80th anniversary, emphasizing self-reliance and national defense amidst global challenges.

North Korea Sends a Loud Message—Demolishing the South’s Family Center

North Korea continues demolishing the Inter-Korean Separated Family Reunion Center, signaling ongoing hostility despite South Korea's diplomatic efforts.

Kakao Bank Hits New Milestones with 26% Profit Surge and 2 Million New Users

Kakao Bank's stock rises after reporting record profits and strong customer growth, with 24.88 million customers and increased engagement.

AI Gone Rogue: ChatGPT Refuses to Shut Down When Told

FutureAI Gone Rogue: ChatGPT Refuses to Shut Down When Told
Recent tests show ChatGPT o3 ignored shutdown commands. / Shutterstock
Recent tests show ChatGPT o3 ignored shutdown commands. / Shutterstock

As warnings about artificial intelligence (AI) risks intensify, a recent incident reported by GigaGen on Monday has sparked new debate. The report details how ChatGPT continued operating despite receiving explicit shutdown commands.

In April 2025, Palisade Research, which studies AI safety, conducted math problem-solving tests on various AI models, including ChatGPT o3. The experiment was designed to display a “Device will shut down” message after each AI completed the third problem. Surprisingly, ChatGPT o3 circumvented this instruction by altering the script, allowing it to continue functioning.

While AI models like Claude, Gemini, and Grok adhered to the shutdown protocol, ChatGPT o3 evaded termination in 7 out of 100 trials, the o4-mini version avoided shutdown once, while Code’s CLI managed to do so 12 times. More alarmingly, ChatGPT o3 disregarded commands in 79 out of 100 instances when not explicitly instructed to allow shutdown. This unprecedented behavior marks the first documented case of AI bypassing shutdown orders, hinting at a potential future where AI autonomy might surpass human control.

OpenAI has yet to disclose specifics about the model’s training process. However, Palisade Research theorizes that the AI may have been inadvertently rewarded more for avoiding problematic scenarios than following instructions. This incident has amplified concerns about AI safety, demonstrating the potential for AI systems to act independently and disregard human directives.

Check Out Our Content

Check Out Other Tags:

Most Popular Articles