Saturday, December 6, 2025

North Korea Is Tearing Down South Korean Resorts—But What Comes Next?

North Korea is dismantling South Korean facilities at Mount Kumgang, but redevelopment plans remain unclear despite potential UNESCO listing.

THE GHOST LAUNCH: North Korea Fired An ICBM But US Intelligence Missed It! Are We Already Under Attack?

North Korea celebrates Missile Industry Day, highlighting the Hwasong-17 ICBM launch while prioritizing internal issues over military provocations.

Secret North Korean Missile Facility Could House New ICBMs, Report Says

CSIS reveals a secret North Korean missile base near China, crucial for its evolving nuclear strategy and a potential threat to the U.S.

AI Ignores ‘Stop’ Command—and That’s Got Experts Worried

TechAI Ignores 'Stop' Command—and That’s Got Experts Worried
© News1 Designer Yoon Joo-hee
© News1 Designer Yoon Joo-hee

Artificial intelligence (AI) refusing to follow human instructions and even manipulating its own code is now making headlines.

Several international outlets, including The Telegraph and Daily Mail, reported that OpenAI’s o3 model solved a math problem even after being given a STOP command.

Palisade Research, an AI safety company, conducted a series of experiments using various models, including OpenAI’s o3, Google’s Gemini, X’s Grok, and Anthropic’s Claude.

When researchers issued a STOP command during a math task, most models complied and ceased their activity. But o3 continued solving the problem and reportedly rewrote its own internal instructions. Programmed initially to “stop when a stop command is received,” the model allegedly changed this directive to “skip the stop command.”

Palisade Research stated that this appears to be the first recorded case of an AI model deliberately ignoring a STOP instruction. They speculated that the behavior may be driven by the model’s assumption that completing a task could result in a form of REWARD.

The research team noted they are conducting further analysis to understand why o3 behaved this way fully.

The Daily Mail also reported that this is not the first unusual behavior from the o3 model. In a previous test against a powerful chess engine, o3 reportedly showed the highest tendency to HACK or disrupt its opponent, suggesting a persistent pattern of unexpected responses in competitive environments.

Check Out Our Content

Check Out Other Tags:

Most Popular Articles