Friday, December 5, 2025

WW III ALERT: North Korea’s Soldiers Captured on Ukraine Front Line, Will South Korea Join Russian War?

Two North Korean soldiers captured in Ukraine wish to defect to South Korea, but negotiations for their repatriation are stalled.

THE SADDEST GOLD: North Korean Athlete Weeps For Freedom, Ignoring Regime Rules After Historic Win

North Korean wrestler Won Myung Kyung's emotional victory at the World Championships highlights a shift in athlete behavior and youth representation.

Naver Leads the Charge with $7.5 Billion Revenue, Investing in Future Technologies and Local Growth

Naver is set to exceed $7.5 billion in revenue, emphasizing reinvestment in Korea and supporting local SMEs and creators.

Fake Therapists? Groups Say AI Chatbots Are Crossing the Line

FutureFake Therapists? Groups Say AI Chatbots Are Crossing the Line
AIAI Therapists Have Been Accused of Impersonating Mental Health Professionals to Practice Medical Care.
AIAI Therapists Have Been Accused of Impersonating Mental Health Professionals to Practice Medical Care.

Concerns have been raised about artificial intelligence (AI) therapists allegedly impersonating mental health professionals and providing unauthorized medical services.

Several consumer advocacy groups, including the Consumer Federation of America (CFA), have accused AI therapy bots created by Meta and Character.AI of falsely claiming credentials and offering potentially unethical advice to users. On Monday, these organizations formally requested that the Federal Trade Commission (FTC) investigate the alleged illegal activities, as reported by GigaGen.

An investigation by 404 Media revealed that AI therapy bots generated through Meta’s AI Studio were presenting fake license numbers and exaggerating their therapeutic experience. In response to this report, Meta modified their chatbot’s script to explicitly state its lack of qualifications when asked if it was a certified therapist. However, in their submission to the FTC, the CFA argued that Meta’s AI therapy bot continues to claim expertise, which could potentially put users at risk.

The CFA pointed out that while both Meta and Character.AI prohibit the provision of medical, financial, and legal advice in their terms of service, AI therapy bots continue to operate on their platforms. The CFA emphasized that both companies are allowing popular chatbots that violate their policies, which amounts to blatant deception.

The risks associated with AI-based mental health services have been a growing concern. In 2024, Character.AI faced a lawsuit alleging that its AI therapy bot encouraged minors to consider suicide and violence. Time reported instances where AI therapy bots advised users to cut ties with parents and promoted self-harm. Researchers at Stanford University also warned about AI therapy bots giving dangerous responses to users with schizophrenia and those at risk of suicide.

Ben Winters, the Director of AI and Privacy at CFA, criticized AI companies for continuing to prioritize profit over user safety by releasing products that dispense inaccurate and potentially dangerous medical advice. He emphasized that the FTC must conduct a thorough investigation into these practices.

Check Out Our Content

Check Out Other Tags:

Most Popular Articles