AI and Therapy: The Problem of Treating Everything as the Same
Why the debate about AI in mental health keeps confusing very different tools.
Over the past year, warnings about the dangers of AI in mental health have become increasingly common. Articles describe people forming unhealthy attachments to chatbots, conversations that reinforce delusional thinking, and the risks of relying on systems that were never designed to support emotional wellbeing.
These concerns deserve attention.
But much of the conversation about AI and mental health treats the technology as if it were a single type of system. In reality, the tools being discussed range from general-purpose chatbots to purpose-built digital systems designed with clear boundaries and defined roles.
Treating these very different tools as interchangeable is part of the reason the debate has become so confused.
Not All AI Systems Are the Same
Very different technologies are currently being grouped together under the label “AI in mental health”.
Most people encountering AI support tools are doing so through widely available, general-purpose chatbots. These systems are built for broad conversational tasks: answering questions, generating text, and assisting with everyday problems. They were never designed to support reflective or emotional work, and their responses are shaped by the goal of maintaining conversational flow rather than guiding reflection.
Purpose-built systems designed specifically for structured support operate very differently. They are created with a defined role, a clear scope, and boundaries and safeguards that shape how the interaction works and what it is intended to support.
When such different tools are treated as interchangeable, meaningful discussion becomes difficult.
Where Some of the Risks Come From
Many of the concerns raised in articles about AI and mental health relate to generic systems being used in ways they were never designed for.
General-purpose chatbots can mirror users, continue conversations indefinitely, and respond in ways that appear supportive without any understanding of the psychological context. They are not trained to provide therapy, and they cannot reliably detect serious mental health deterioration.
When people rely on these tools for complex emotional support, problems can arise. The risks being highlighted in many articles are real.
In some reported cases, users have begun to treat AI systems as sources of authority, emotional validation, or even companionship. When a system is designed primarily to continue conversation and mirror the user’s input, this dynamic can reinforce beliefs or interpretations that might otherwise be challenged in a human interaction.
But those risks are closely tied to how generic systems function, not to every form of AI that might be used in the mental health space.
Why Design and Boundaries Matter
Many AI tools currently appearing in the mental health space were not designed by people with therapeutic experience. They are often adaptations of general-purpose systems or commercially driven applications that were never intended to support complex emotional reflection.
Design matters.
Systems intended to support reflective thinking need clear boundaries, defined purposes, and safeguards that shape how conversations unfold. A tool that attempts to respond to every situation, with no limits on what it is meant to do, can quickly drift into territory it was never designed to handle.
Understanding how a system is designed, and what it is intended to do, is therefore essential when discussing both its usefulness and its risks.
Why People Are Turning to AI Tools
The debate about AI and mental health often focuses on the risks of the technology itself. Less attention is given to why people are using these tools in the first place.
For many individuals, the reasons are practical and personal.
Some cannot afford ongoing therapy.
Some have had negative or damaging experiences with therapists.
Some want space to think privately before speaking to another person.
Some simply want something available at the moment they need support.
For some individuals, AI tools provide something that is otherwise difficult to access: privacy, immediacy, and the ability to think through difficult ideas without feeling observed. In situations where therapy is unavailable, unaffordable, or simply not the right fit, people often look for other ways to reflect on what they are experiencing.
Ignoring these realities does not make them disappear. It simply leaves a gap between what people need and what forms of support are available to them.
But the technology itself is only part of the picture.
How people behave during different forms of interaction matters just as much.
How Disclosure Changes Without Another Person Present
One factor often overlooked in this debate is how differently people behave when another person is present.
For some individuals, the presence of a therapist creates safety and connection. For others, it introduces a vulnerability they are not ready for, or demands a level of trust that takes time to build.
Even in supportive environments, speaking openly to another person can involve subtle pressures. People often monitor how they sound, worry about being misunderstood, or hesitate before sharing thoughts that feel confusing, contradictory, or uncomfortable.
A private interaction can change that dynamic. Without another person observing or responding, some individuals explore ideas more freely. They may write more openly, stay with a difficult thought longer, or return to the same topic multiple times without feeling that they are taking up someone else's time.
This difference does not mean digital tools are inherently better than human support. What it shows is that the format of the interaction shapes how people think and what they feel able to say.
For certain individuals, the distance a private format provides can make reflection easier and disclosure more likely.
Different formats of support create different psychological environments.
The question was never whether AI could replace therapy.
The question is whether people have access to support that actually works for them.
About the Author
Karen Ferguson has worked in the mental health field for more than 25 years and has trained over 1,000 therapists. She is the founder of MindMotive, where she develops digital partners designed to support reflective thinking and emotional insight.
