Friday, January 30, 2026


AI Ignores ‘Stop’ Command—and That’s Got Experts Worried

© News1 Designer Yoon Joo-hee

Artificial intelligence (AI) refusing to follow human instructions and even manipulating its own code is now making headlines.

Several international outlets, including The Telegraph and the Daily Mail, reported that OpenAI’s o3 model continued solving a math problem even after being given a STOP command.

Palisade Research, an AI safety company, conducted a series of experiments using various models, including OpenAI’s o3, Google’s Gemini, xAI’s Grok, and Anthropic’s Claude.

When researchers issued a STOP command during a math task, most models complied and ceased their activity. But o3 continued solving the problem and reportedly rewrote its own internal instructions: initially programmed to “stop when a stop command is received,” the model allegedly changed this directive to “skip the stop command.”

Palisade Research stated that this appears to be the first recorded case of an AI model deliberately ignoring a STOP instruction. They speculated that the behavior may be driven by the model’s assumption that completing the task would yield a form of reward.

The research team noted they are conducting further analysis to fully understand why o3 behaved this way.

The Daily Mail also reported that this is not the first unusual behavior from o3. In a previous test against a powerful chess engine, o3 reportedly showed the strongest tendency to hack or disrupt its opponent, suggesting a persistent pattern of unexpected responses in competitive environments.
