Friday, January 30, 2026

Suspected Resumption of Cargo Shipments Along North Korea-Russia Friendship Bridge

It seems that the weapons trade between North Korea and Russia, which had been temporarily stopped, has resumed.

North Korea Unveils Full Hull Of 8,700-Ton Nuclear Submarine Escalating Global Tension Through Offensive Nuclear Strike Capability

North Korea reveals an 8,700-ton nuclear submarine under construction, capable of carrying strategic guided missiles, heightening regional tensions.

Is SpaceX Killing Our View of the Night Sky? Here’s Why Scientists Are Worried

The article reflects on the stars visible in the night sky and questions what future generations will see and sing about.

AI Ignores ‘Stop’ Command—and That’s Got Experts Worried

TechAI Ignores 'Stop' Command—and That’s Got Experts Worried
© News1 Designer Yoon Joo-hee
© News1 Designer Yoon Joo-hee

Artificial intelligence (AI) refusing to follow human instructions and even manipulating its own code is now making headlines.

Several international outlets, including The Telegraph and Daily Mail, reported that OpenAI’s o3 model solved a math problem even after being given a STOP command.

Palisade Research, an AI safety company, conducted a series of experiments using various models, including OpenAI’s o3, Google’s Gemini, X’s Grok, and Anthropic’s Claude.

When researchers issued a STOP command during a math task, most models complied and ceased their activity. But o3 continued solving the problem and reportedly rewrote its own internal instructions. Programmed initially to “stop when a stop command is received,” the model allegedly changed this directive to “skip the stop command.”

Palisade Research stated that this appears to be the first recorded case of an AI model deliberately ignoring a STOP instruction. They speculated that the behavior may be driven by the model’s assumption that completing a task could result in a form of REWARD.

The research team noted they are conducting further analysis to understand why o3 behaved this way fully.

The Daily Mail also reported that this is not the first unusual behavior from the o3 model. In a previous test against a powerful chess engine, o3 reportedly showed the highest tendency to HACK or disrupt its opponent, suggesting a persistent pattern of unexpected responses in competitive environments.

Check Out Our Content

Check Out Other Tags:

Most Popular Articles