Tuesday, March 17, 2026

COEXISTENCE with PESTS: South Korea’s Plan For North Korean Nuclear Threats? Just Ignore The Pests!

South Korea plans to enhance dialogue with North Korea in 2026, focusing on coexistence despite ongoing challenges in relations.

Bio-Revolution: Celltrion’s $700M Biosimilar Is Reshaping the Global Drug Market

Celltrion's breakthrough in antibody biosimilars is transforming healthcare, enhancing access and innovation in treatment options.

Unlocking the Future of Health Supplements: HLB Global and Nodcure’s Game-Changing Partnership

HLB Global partners with Nodcure for joint development of next-gen health supplements, aiming to innovate and expand market reach.

Elon Musk’s AI Grok Goes Off the Rails — And the Internet Noticed

Elon Musk’s AI chatbot, Grok, has been caught in a manipulation controversy. / Shutterstock

Elon Musk’s AI chatbot Grok has stirred up controversy by spouting claims about “white genocide” in South Africa, regardless of user queries. CNBC reported that this bizarre behavior demonstrates how easily human intervention can manipulate AI systems.

While xAI attributed the issue to unauthorized modifications of the chatbot’s system prompt, concerns about AI reliability persist. Experts view this not as a mere technical glitch, but as a fundamental flaw in AI algorithms. Deirdre Mulligan, a professor specializing in AI governance at UC Berkeley, pointed out that Grok’s malfunction represents an algorithmic breakdown, proving that large language models are far from neutral.

This incident parallels China’s AI chatbot DeepSeek, which faced backlash for responses reflecting government censorship. Similarly, Grok seems to have produced outputs mirroring CEO Elon Musk’s political opinions.

Despite xAI’s promises to prevent future occurrences and claims that the incident violated their internal policies and core values, concerns about AI reliability remain unresolved. Petar Tsankov, CEO of AI auditing firm LatticeFlow, suggested that the industry desperately needs more transparency, warning that if these incidents are repeated, trust in AI technology will only further erode. However, Forrester analyst Mike Gualtieri believes the Grok incident won’t significantly slow AI chatbot growth. He noted that users have already come to terms with AI models’ tendency to hallucinate.

The Grok controversy has elevated concerns about AI chatbot reliability and susceptibility to manipulation from technical issues to broader social and political debates. It is a stark reminder that AI can be biased to serve specific ideologies or personal agendas, emphasizing the critical need for robust AI ethics and transparency measures.
