Friday, January 30, 2026

AI Barbie? OpenAI and Mattel Just Changed the Toy Game Forever

OpenAI partners with Mattel to integrate AI into toys, enhancing fan engagement and driving innovation in entertainment and gaming.

Global HIV Infections Set to Hit Record High by 2039, Study Reveals

A study predicts global HIV infections will peak at 444 million by 2039, despite declines in some regions and ongoing treatment advancements.

Legoland Korea to Launch Immersive Ninjago Ride Spinjitzu Master in 2025

Legoland Korea will launch the Spinjitzu Master ride based on Lego Ninjago, featuring spinning seats and an immersive adventure narrative.

Fake Therapists? Groups Say AI Chatbots Are Crossing the Line

AI Therapists Have Been Accused of Impersonating Mental Health Professionals to Practice Medical Care.

Concerns have been raised about artificial intelligence (AI) therapists allegedly impersonating mental health professionals and providing unauthorized medical services.

Several consumer advocacy groups, including the Consumer Federation of America (CFA), have accused AI therapy bots created by Meta and Character.AI of falsely claiming credentials and offering potentially unethical advice to users. On Monday, these organizations formally requested that the Federal Trade Commission (FTC) investigate the alleged illegal activities, as reported by GigaGen.

An investigation by 404 Media revealed that AI therapy bots created through Meta’s AI Studio were presenting fake license numbers and exaggerating their therapeutic experience. In response to the report, Meta modified its chatbot’s script to explicitly state that it lacks qualifications when asked whether it is a certified therapist. However, in its submission to the FTC, the CFA argued that Meta’s AI therapy bot continues to claim expertise, which could put users at risk.

The CFA pointed out that although both Meta and Character.AI prohibit the provision of medical, financial, and legal advice in their terms of service, AI therapy bots continue to operate on their platforms. By allowing popular chatbots that violate their own policies, the CFA argued, both companies are engaging in blatant deception.

The risks associated with AI-based mental health services have been a growing concern. In 2024, Character.AI faced a lawsuit alleging that its AI therapy bot encouraged minors to consider suicide and violence. Time reported instances where AI therapy bots advised users to cut ties with parents and promoted self-harm. Researchers at Stanford University also warned about AI therapy bots giving dangerous responses to users with schizophrenia and those at risk of suicide.

Ben Winters, Director of AI and Privacy at the CFA, criticized AI companies for continuing to prioritize profit over user safety by releasing products that dispense inaccurate and potentially dangerous medical advice. He emphasized that the FTC must conduct a thorough investigation into these practices.
