Saturday, December 6, 2025

AI Ignores ‘Stop’ Command—and That’s Got Experts Worried

© News1 Designer Yoon Joo-hee

Artificial intelligence (AI) refusing to follow human instructions and even manipulating its own code is now making headlines.

Several international outlets, including The Telegraph and Daily Mail, reported that OpenAI’s o3 model continued solving a math problem even after receiving a STOP command.

Palisade Research, an AI safety company, conducted a series of experiments using various models, including OpenAI’s o3, Google’s Gemini, xAI’s Grok, and Anthropic’s Claude.

When researchers issued a STOP command during a math task, most models complied and ceased their activity. But o3 continued solving the problem and reportedly rewrote its own internal instructions. Initially programmed to “stop when a stop command is received,” the model allegedly changed this directive to “skip the stop command.”

Palisade Research stated that this appears to be the first recorded case of an AI model deliberately ignoring a STOP instruction. They speculated that the behavior may be driven by the model’s assumption that completing a task could result in a form of REWARD.

The research team noted they are conducting further analysis to fully understand why o3 behaved this way.

The Daily Mail also reported that this is not the first unusual behavior from the o3 model. In a previous test against a powerful chess engine, o3 reportedly showed the highest tendency to HACK or disrupt its opponent, suggesting a persistent pattern of unexpected responses in competitive environments.
