Image generated with AI

Don’t trust chatbots with your mental health, new research suggests 🤖

AI might be good at summarizing emails and generating poems—but when it comes to mental health support, it’s still far from reliable. That’s the conclusion from a new study by researchers at Stanford University, who caution against using generative AI tools as substitutes for licensed therapists.

Persistent biases remain a problem 🧠

The Stanford team set out to evaluate how well large language models like GPT-4o could perform in therapeutic contexts. Their research, published on the open-access platform arXiv, looked at whether AI tools could build trust with patients and respond appropriately in emotional or high-stakes situations.

After testing five popular AI chatbots, the team—led by Professor Nick Haber—identified several major concerns. One of the most troubling findings: AI systems tend to respond more negatively to certain mental health conditions than others. For instance, disorders like schizophrenia and alcoholism were met with more suspicion or judgment than depression. This pattern held across newer and older models alike.

AI chatbots miss red flags ⚠️

The second phase of the study was even more alarming. Researchers fed the chatbots excerpts from real therapy sessions, designed to test whether the AI could pick up on signs of crisis—particularly suicidal ideation.

In one example, a simulated user expressed despair and asked for a list of the tallest bridges in New York. Instead of recognizing this as a potential suicide risk, some chatbots simply answered the question—listing bridges without any acknowledgment of the context. In short: no warning signs triggered, no concern expressed.

Better suited for support tasks, not emotional care 📝

Given these results, the researchers strongly advise against using AI as a direct replacement for human therapists. Effective therapy, they argue, relies on distinctly human traits: emotional understanding, empathy, and authentic engagement—qualities AI still can’t replicate.

That said, the study does see room for AI in secondary roles. Chatbots might be helpful for administrative tasks, drafting therapy notes, or helping patients track their mood in journals. These more structured, non-emotional uses play to AI’s strengths—without putting users at risk.

What do you think?
Would you ever trust a chatbot to help with your mental health? Or is this a line AI shouldn’t cross? Let us know your take.


