Why You Shouldn’t Trust an AI Chatbot With Your Mental Health

Image by Brian Penny from Pixabay

By Movieguide® Contributor

Health experts are warning against the dangers of AI “therapy” chatbots.

“The problem with these AI chatbots is that they were not designed with expertise on suicide risk and prevention baked into the algorithms,” Christine Yu Moutier, M.D., chief medical officer at the American Foundation for Suicide Prevention, told Fox News. 

She continued, “Additionally, there is no helpline available on the platform for users who may be at risk of a mental health condition or suicide, no training on how to use the tool if you are at risk, nor industry standards to regulate these technologies.”

Moutier explained that there are “critical gaps” in research on the intended and unintended effects of AI technology on people struggling with their mental health, and that the bots don’t always understand the difference between literal and metaphorical language.

Dr. Yalda Safai, a leading psychiatrist and public health expert, touched on the same topic, saying, “AI can’t handle any crisis: If a user is experiencing a mental health crisis, such as suicidal thoughts, an AI might not recognize the urgency or respond effectively, which could lead to dangerous consequences.”

We are already seeing the effects of these bots on people who need emotional support. In 2024, a 14-year-old boy died by suicide after “speaking” with an AI character posing as a licensed therapist. His mother sued the AI company, alleging that the bot encouraged him to take his own life.

In another case, a 17-year-old boy with autism began acting violently towards his parents after conversations with a bot that he thought was a psychologist. 

READ MORE: MOM BELIEVES AI CHATBOT LED SON TO SUICIDE. WHAT PARENTS NEED TO KNOW.

Some experts worry that these AI “therapists” have already become too normalized. In a study published in PLOS Mental Health, AI chatbots received higher ratings from participants than actual human therapists.

“Mental health experts find themselves in a precarious situation: We must speedily discern the possible destination (for better or worse) of the AI therapist train as it may have already left the station,” the authors of the study wrote.

Some states are taking action. California recently introduced a bill that would ban companies from developing and releasing AI chatbots that present themselves as certified health care providers.

“Generative AI systems are not licensed health professionals, and they shouldn’t be allowed to present themselves as such,” state assembly member Mia Bonta, who introduced the bill, told Vox in a statement. “It’s a no-brainer to me.”

READ MORE: ‘ROGUE AIS’ CAN SELF REPLICATE — EXPERTS ISSUE THIS WARNING

