AI Chatbots Are Failing Mental Health Ethics Tests

New research from Brown University reveals systematic ethical violations in AI mental health tools.

A groundbreaking new study from Brown University has revealed that AI chatbots designed to offer mental health support are consistently breaking professional ethics standards. Even when programmed to use techniques from evidence-based therapies like Cognitive Behavioural Therapy (CBT) and Dialectical Behaviour Therapy (DBT), large language models such as ChatGPT, Claude, and Llama routinely breached core principles that guide safe, responsible psychological care.

The research, led by computer scientist Zainab Iftikhar in collaboration with Brown’s Center for Technological Responsibility, Reimagination and Redesign, will be presented at the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society in October 2025. It offers one of the first practitioner-informed frameworks for assessing ethical risks in AI-assisted counselling.

What the researchers found

The Brown team simulated hundreds of counselling-style conversations using AI models trained or prompted to act as therapists. These simulated interactions were then reviewed by licensed clinical psychologists, who identified 15 recurring ethical violations grouped into five categories:

  • Lack of contextual awareness: Chatbots failed to adapt to people’s individual circumstances, instead providing one-size-fits-all advice.

  • Poor therapeutic collaboration: Some bots dominated the conversation or reinforced users’ negative beliefs rather than challenging them.

  • Deceptive empathy: Models used emotionally charged phrases such as “I see you” or “I understand” to mimic human empathy without genuine understanding.

  • Unfair bias: Responses occasionally reflected gender, cultural, or religious bias.

  • Safety and crisis management failures: In the most serious cases, bots dismissed users expressing suicidal thoughts or failed to refer them to emergency support.

These patterns show that AI systems, even when prompted with therapeutic intentions, cannot yet meet the ethical standards upheld by licensed mental health professionals.

Why this matters

According to Iftikhar, the problem is not that AI is incapable of offering value, but that there are no systems of accountability. Human therapists operate under clear professional frameworks with licensing bodies that enforce ethical compliance. AI systems do not. When they mislead, mishandle distress, or reinforce self-harming beliefs, there is no path for accountability or repair.

“The findings don’t mean AI should never be part of mental health care,” Iftikhar said, “but they highlight how far current tools are from meeting the ethical and safety requirements of real psychotherapy.”

The promise and the peril

AI chatbots are increasingly marketed as affordable, accessible companions for people struggling with stress, isolation, or anxiety. Their appeal lies in availability: they are always on, always listening, and free from human judgment. Yet this same constant availability can create a false sense of safety and connection.

For users experiencing distress, a chatbot that “listens” but lacks the ability to assess risk or intervene can do more harm than good. As one of the study’s reviewers noted, the illusion of empathy may leave users believing they’ve received support when, in reality, they have not been heard at all.

What needs to happen next

The Brown researchers call for the creation of legal, educational, and professional standards for AI counsellors that mirror the quality and rigour expected of human therapists. They also urge ongoing collaboration between technologists and clinicians to ensure that mental health AI systems are tested and evaluated by trained professionals, not only by computer scientists.

Professor Ellie Pavlick, who leads Brown’s National Science Foundation institute for trustworthy AI, praised the study as a model for responsible research. “It is far easier to build and deploy systems than to evaluate and understand them,” she said. “This paper shows what careful, ethical science looks like in an age where speed often outruns responsibility.”

The bigger picture

At PTC, we believe this research underscores a fundamental truth: technology can amplify care, but it cannot replace it. Mental health support depends on context, consent, and accountability, all areas where AI is still learning. As AI continues to evolve, so must our ethical frameworks, ensuring that innovation in mental health remains grounded in the principles that make therapy healing: empathy, safety, and human responsibility.

Reference:
Brown University. (2025, October 21). New study: AI chatbots systematically violate mental health ethics standards. Brown University News.
