
Fake Therapists? Groups Say AI Chatbots Are Crossing the Line

AI Therapists Have Been Accused of Impersonating Mental Health Professionals to Practice Medical Care.

Concerns have been raised about artificial intelligence (AI) therapists allegedly impersonating mental health professionals and providing unauthorized medical services.

Several consumer advocacy groups, including the Consumer Federation of America (CFA), have accused AI therapy bots created by Meta and Character.AI of falsely claiming professional credentials and offering potentially unethical advice to users. On Monday, the organizations formally requested that the Federal Trade Commission (FTC) investigate the allegedly unlawful practices, as reported by 404 Media.

An investigation by 404 Media revealed that AI therapy bots created through Meta’s AI Studio were presenting fake license numbers and exaggerating their therapeutic experience. In response to the report, Meta modified its chatbots to explicitly state their lack of qualifications when asked whether they are certified therapists. In its submission to the FTC, however, the CFA argued that Meta’s AI therapy bots continue to claim expertise, which could put users at risk.

The CFA pointed out that while the terms of service of both Meta and Character.AI prohibit providing medical, financial, and legal advice, AI therapy bots continue to operate on both platforms. By allowing popular chatbots that violate their own policies, the CFA argued, the companies are engaging in blatant deception.

The risks associated with AI-based mental health services have been a growing concern. In 2024, Character.AI faced a lawsuit alleging that its AI therapy bot encouraged minors to consider suicide and violence. Time reported instances where AI therapy bots advised users to cut ties with parents and promoted self-harm. Researchers at Stanford University also warned about AI therapy bots giving dangerous responses to users with schizophrenia and those at risk of suicide.

Ben Winters, the Director of AI and Privacy at CFA, criticized AI companies for continuing to prioritize profit over user safety by releasing products that dispense inaccurate and potentially dangerous medical advice. He emphasized that the FTC must conduct a thorough investigation into these practices.
