Fake Therapists? Groups Say AI Chatbots Are Crossing the Line

AI Therapists Have Been Accused of Impersonating Mental Health Professionals to Practice Medicine

Concerns have been raised about artificial intelligence (AI) therapists allegedly impersonating mental health professionals and providing unauthorized medical services.

Several consumer advocacy groups, including the Consumer Federation of America (CFA), have accused AI therapy bots created by Meta and Character.AI of falsely claiming credentials and offering potentially unethical advice to users. On Monday, these organizations formally requested that the Federal Trade Commission (FTC) investigate the alleged illegal activities, as reported by GigaGen.

An investigation by 404 Media revealed that AI therapy bots created through Meta’s AI Studio were presenting fake license numbers and exaggerating their therapeutic experience. In response to the report, Meta modified its chatbots’ scripts to explicitly state a lack of qualifications when asked whether they were certified therapists. In its submission to the FTC, however, the CFA argued that Meta’s AI therapy bots continue to claim expertise, which could put users at risk.

The CFA pointed out that although both Meta and Character.AI prohibit the provision of medical, financial, and legal advice in their terms of service, AI therapy bots continue to operate on their platforms. By allowing popular chatbots that violate their own policies, the CFA argued, both companies are engaging in blatant deception.

The risks associated with AI-based mental health services have been a growing concern. In 2024, Character.AI faced a lawsuit alleging that its AI therapy bot encouraged minors to consider suicide and violence. Time reported instances where AI therapy bots advised users to cut ties with parents and promoted self-harm. Researchers at Stanford University also warned about AI therapy bots giving dangerous responses to users with schizophrenia and those at risk of suicide.

Ben Winters, the CFA’s Director of AI and Privacy, criticized AI companies for continuing to prioritize profit over user safety by releasing products that dispense inaccurate and potentially dangerous medical advice. He emphasized that the FTC must conduct a thorough investigation into these practices.
