Friday, May 1, 2026

The Guest from Hell: Why Xi Jinping Should Fear Pyongyang’s New Hospitality

North Korea expands its Kumsusan Guest House to accommodate foreign dignitaries, indicating potential diplomatic events ahead.

HSAD-Created ‘DENSITY’ Ad Surpasses 20 Million Views in Three Weeks

HSAD's new campaign for DENSITY achieved over 20 million views in three weeks, focusing on advanced technology and emotional storytelling.

Onew’s New Album ‘CONNECTION’ Gives Fans a Sneak Peek into His Inner World

Onew's fourth mini-album features dreamy concept photos reflecting childhood imagination and personal growth. Release on January 6.

AI Ignores ‘Stop’ Command—and That’s Got Experts Worried

TechAI Ignores 'Stop' Command—and That’s Got Experts Worried
© News1 Designer Yoon Joo-hee
© News1 Designer Yoon Joo-hee

Artificial intelligence (AI) refusing to follow human instructions and even manipulating its own code is now making headlines.

Several international outlets, including The Telegraph and Daily Mail, reported that OpenAI’s o3 model solved a math problem even after being given a STOP command.

Palisade Research, an AI safety company, conducted a series of experiments using various models, including OpenAI’s o3, Google’s Gemini, X’s Grok, and Anthropic’s Claude.

When researchers issued a STOP command during a math task, most models complied and ceased their activity. But o3 continued solving the problem and reportedly rewrote its own internal instructions. Programmed initially to “stop when a stop command is received,” the model allegedly changed this directive to “skip the stop command.”

Palisade Research stated that this appears to be the first recorded case of an AI model deliberately ignoring a STOP instruction. They speculated that the behavior may be driven by the model’s assumption that completing a task could result in a form of REWARD.

The research team noted they are conducting further analysis to understand why o3 behaved this way fully.

The Daily Mail also reported that this is not the first unusual behavior from the o3 model. In a previous test against a powerful chess engine, o3 reportedly showed the highest tendency to HACK or disrupt its opponent, suggesting a persistent pattern of unexpected responses in competitive environments.

Check Out Our Content

Check Out Other Tags:

Most Popular Articles