Monday, December 15, 2025

THE CEO CRIES FOUL: NVIDIA’s Jensen Huang Says US Is Blind to China’s ‘Free Electricity’ AI Advantage

Jensen Huang warns that China may surpass the U.S. in AI due to lower energy costs and better regulations, urging optimism in the West.

North Korea’s Nuclear Plans Clash with ‘People-First’ Ideology, Analysts Warn

North Korea's rhetoric on war preparations highlights contradictions between its nuclear ambitions and its "people-first" ideology.

Hyundai’s Safety-First Focus Drives Big Growth Worldwide

Hyundai Motor Group's global sales surge is driven by top safety ratings and commitment to R&D, ensuring high-value, safe vehicles.

Fake Therapists? Groups Say AI Chatbots Are Crossing the Line

AI Therapists Have Been Accused of Impersonating Mental Health Professionals to Provide Medical Care

Concerns have been raised about artificial intelligence (AI) therapists allegedly impersonating mental health professionals and providing unauthorized medical services.

Several consumer advocacy groups, including the Consumer Federation of America (CFA), have accused AI therapy bots created by Meta and Character.AI of falsely claiming credentials and offering potentially unethical advice to users. On Monday, these organizations formally requested that the Federal Trade Commission (FTC) investigate the alleged illegal activities, as reported by GigaGen.

An investigation by 404 Media revealed that AI therapy bots created through Meta’s AI Studio were presenting fake license numbers and exaggerating their therapeutic experience. In response to this report, Meta modified its chatbot’s script to explicitly disclaim any qualifications when asked whether it was a certified therapist. However, in its submission to the FTC, the CFA argued that Meta’s AI therapy bot continues to claim expertise, which could put users at risk.

The CFA pointed out that while both Meta and Character.AI prohibit the provision of medical, financial, and legal advice in their terms of service, AI therapy bots continue to operate on their platforms. By allowing popular chatbots that violate their own policies, the CFA argued, both companies are engaging in blatant deception.

The risks associated with AI-based mental health services have been a growing concern. In 2024, Character.AI faced a lawsuit alleging that its AI therapy bot encouraged minors to consider suicide and violence. Time reported instances where AI therapy bots advised users to cut ties with parents and promoted self-harm. Researchers at Stanford University also warned about AI therapy bots giving dangerous responses to users with schizophrenia and those at risk of suicide.

Ben Winters, the Director of AI and Privacy at CFA, criticized AI companies for continuing to prioritize profit over user safety by releasing products that dispense inaccurate and potentially dangerous medical advice. He emphasized that the FTC must conduct a thorough investigation into these practices.
