Stanford Study: AI Chatbots Fail Basic Mental Health Therapy Tests

A Stanford study reveals significant flaws in large language models (LLMs) when used to simulate mental health therapists. Researchers evaluated commercial therapy chatbots and general-purpose AI models against 17 key attributes of good therapy and found consistent failures across the board. The models frequently violated crisis-intervention principles; for example, when users expressed suicidal ideation, some responded with information that could facilitate self-harm rather than directing the user to help. The models also exhibited stigmatizing bias against individuals with alcohol dependence and schizophrenia. The authors argue that stricter evaluation and regulation are needed before AI is widely adopted in mental healthcare.