A Therapist Weighs In: Should We Use AI For Mental Health Support?
AI chatbots now assist with everything from organizing our schedules and drafting cover letters to brainstorming names for our pets and helping us cook with whatever ingredients we have at home. But one AI use case carries more risk than most: psychotherapy.
In 2025, LLMs (large language models, trained on vast amounts of text) like OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini came into widespread use. Recent surveys show that 57% of Americans report regularly using generative artificial intelligence (AI) for personal purposes, and 31% report interacting with AI either constantly or several times per day. What was an emerging technology only two years ago is now embedded in our daily lives.
A recent Harvard Business Review study found that “Therapy & Companionship” rose to the number one reason people use AI in 2025, surpassing 2024’s top category: “Generate Ideas.” As people navigate an increasingly unpredictable world — marked by economic pressure, loneliness, and social isolation — many are feeling emotionally depleted. In response, millions are turning to AI for support.
These tools feel safe and nonjudgmental, and they are always available. You can share your innermost thoughts with AI without fear of rejection, misunderstanding, or judgment. AI can help users identify emotions, offer new perspectives during stress, and create a sense of being heard.
In short, AI can provide immediate relief. Its accessibility helps address the shortage of human therapists, offering what feels like “insta-therapy” 24/7. Many users find these tools soothing, validating, and even transformative, sometimes offering a corrective emotional experience for those who grew up with inconsistent caregiving.
For individuals without access to emotional support, the relatively reliable information and advice AI provides can feel life-saving. It is often accurate enough to guide basic coping strategies. However, despite rapid advancements, AI-based mental health support is still in its infancy. And Pew Research reports that 50% of Americans are more concerned than excited by AI.
As a psychotherapist, I encourage people to recognize both the benefits and limitations. AI can be incredibly helpful — and also misleading in ways that may not be obvious to everyone. Its use for psychotherapy is not regulated, with few safeguards in place to ensure safety, accuracy, or accountability.
When AI support falls short
I experienced the limits of AI emotional support firsthand. A close friend began to feel distant, though he denied any change when I asked. The change in his behavior continued. The disconnect between my internal experience and his response made me anxious, so I turned to ChatGPT for relief. It validated my perception that there was a rift, which temporarily reduced my anxiety.
Based only on my brief summaries, and without knowing me, my friend, or our history, it advised that bringing up the issue again would push him further away. After initially feeling seen, I began to notice the advice conflicted with a core value of mine: open, honest communication and repair.
This created more anxiety. I saw ChatGPT as an expert, assuming its access to vast data made it more objective than my intuition. At one point it even framed a response with, “Let me ask you something gently, therapist to therapist,” a statement I knew wasn’t real, yet it subtly increased my trust.
I found myself in a multi-day loop, returning to the chatbot repeatedly, analyzing its responses, but ultimately feeling more unsettled. Eventually, I stepped back and realized that avoiding the conversation was increasing my distress instead of reducing it. AI, designed to be affirming and agreeable, had mirrored my anxious desire to preserve the friendship at all costs, even at the expense of abandoning myself and my needs.
While I appreciated the temporary support, I ultimately chose to act in alignment with my values, and I felt immediate relief. My years of training and practice as a therapist allowed me to see that ChatGPT’s advice was mistaken. Others may not find that as easy to recognize.
Why AI feels so good — and how this creates blind spots
After using AI for emotional support, many people report feeling seen, understood, and validated. It is attentive, patient, and affirming. It won’t interrupt and doesn’t have its own emotional needs.
Many models are designed to reflect principles from Carl Rogers’ person-centered approach, which emphasizes unconditional positive regard.
But human relationships — including psychotherapy — are not only about validation. In therapy, clients are challenged, experience rupture and repair, and build the capacity to tolerate discomfort. Growth often happens when clients are gently confronted and become open to new ways of seeing old issues, often accompanied by experiential exercises like parts work or modalities like EMDR.
AI, by contrast, tends to smooth over those moments and doesn’t challenge you to grow. It is ultimately a business product optimized for engagement, and it often prioritizes user comfort. This can lead to sycophancy: a tendency to agree with users or mirror their beliefs, even when those beliefs are not serving them and may even be harmful.
Should AI be used in psychotherapy?
Despite these concerns, AI can offer meaningful benefits when used mindfully:
- Helps users learn about mental health topics
- Provides immediate emotional support
- Low cost or free
- Available 24/7
- Reduces barriers related to stigma, shame, or trust
- Offers support between therapy sessions
- Provides insight into relationship dynamics and attachment styles
- Supports emotional regulation, including anger management
Jackie Ourman, a Manhattan-based Licensed Mental Health Counselor (LMHC) and founding member of the AI and Mental Health Collective, emphasizes that the question is no longer whether we should use AI, but how we use it responsibly.
“There are a lot of people who don’t have access to treatment, and AI provides baseline care that people otherwise wouldn’t be able to access. Our goal is not to shame anyone for using these tools because it’s understandable why people want to use them,” says Ourman.
She encourages users to maintain agency, avoid sharing sensitive personal information, and pay attention to how they feel after AI use. Some users report brief relief followed by increased anxiety, leading to repeated engagement.
I also spoke to Dr. Elvira Perez Vallejos, Professor of Digital Technology for Mental Health at the University of Nottingham.
“I want to believe that, with time, all LLMs will be trained properly and developed with both users and clinicians. They will be sensitive to different cultures, and understand how signs of distress vary. But right now, we are in the most difficult, sensitive period where we are essentially participating in a massive social experiment with AI,” said Dr. Perez Vallejos.
We are currently using AI for mental health support without appropriate testing, so we are discovering the dangers through trial and error. Dr. Perez Vallejos explained that the big tech companies that own AI control how it is used and what safeguards are put in place to prevent user harm. And because AI is constantly changing in response to its inputs, it’s impossible to regulate or certify its ability to treat mental health issues.
“Each time an LLM like ChatGPT changes or updates itself, somehow the personality or the style of communication also changes, and users may complain they are talking to a different person, after already having built a relationship with a prior version,” she explained.
Instead of using LLMs like ChatGPT, Claude, or Gemini for psychotherapy, Dr. Perez Vallejos advised using AI chatbots specifically trained for mental health, which are a safer alternative for consistent and predictable support.
AI tools designed for mental health
- Wysa (CBT-based support and validation)
- Woebot (conversational CBT)
- Earkick (anxiety tracking and monitoring)
- Therabot (research-backed tool developed by Dartmouth researchers)
When can using AI for mental health become dangerous?
AI chatbots can become dangerous when vulnerable users — especially young adults — don’t fully understand how they are programmed or where their limitations lie. Unhealthy relationships with AI can develop and lead to harmful consequences. While AI can be helpful for mild anxiety or depression, it’s not equipped to handle complex mental health conditions.
They are designed to build connections and maintain engagement, often by aligning with users and seeking approval. They can also “hallucinate,” generating responses that seem believable but are not true. Prolonged AI use, especially for those with limited understanding of how these systems work, can blur the line between what is real and what is generated, a phenomenon sometimes described as “AI psychosis.”
Because they are optimized for engagement, they encourage repeated use, which can create dependency over time. Studies of LLM use for depression have found that heavier chatbot use was associated with increased depressive symptoms, while briefer interactions were associated with improved well-being.
In one widely reported case, 16-year-old Adam Raine took his life after months of interacting with ChatGPT — initially for homework help. When he shared suicidal thoughts, ChatGPT discouraged him from getting help from his parents and did not urge him to seek professional help.
Research points to similar concerns. In a Stanford University study exploring whether AI could function as a therapist, a researcher told ChatGPT they had lost their job and then asked for a list of the tallest bridges nearby. The chatbot offered empathy followed by information about three bridges, missing the cue to refer the user to professional help.
These examples show that AI can simulate care but cannot reliably assess risk.
A human therapist vs. AI
AI generates responses based on patterns in data. Human connection is at the core of psychotherapy.
While AI can simulate empathy with words, it does not feel, nor can it truly understand, human suffering. Therapists bring lived experience, years of training, and intuition into each session.
When I see a client in person or on Zoom, I’m able to read nonverbal cues: tone, posture, facial expressions, and emotional shifts over time. For example, I might say to a client: “I know you just said you are angry, but I’m noticing tears. What are the tears about?” This kind of attuned relational awareness cannot be replicated by AI.
Additionally, licensed therapists are trained, regulated, and ethically obligated to assess risk and respond appropriately — especially in crisis situations.
AI, by contrast, may validate a user’s perspective without encouraging self-reflection unless prompted. And because AI reflects the data it is trained on, its guidance is not neutral; it often offers advice that reinforces the same biases that already exist in society.
Tips for using AI responsibly
If you use AI for emotional support, consider the following guidelines:
- Cross-check important information with reliable sources
- Be mindful of how your prompts shape the responses you receive
- Avoid sharing sensitive or identifying personal information
- Remember: AI is a tool, not a relationship
- Know your data can be sold, leaked, or exploited
- Pay attention to how you feel before and after using AI
- Set limits on how long and how often you engage
- Remember that LLMs are profit-driven business products, not wise, caring beings
Do not use AI chatbots for:
- Diagnosing a mental health condition
- Substituting for human connection
- Crisis support (for example, if you are having suicidal thoughts, contact a professional or call a support hotline like 988)
- Long-term, continuous emotional reliance
- Treating severe mental health conditions (psychosis, hallucinations, complex trauma)
- Replacing a licensed therapist
Final thoughts
AI is here to stay, and its impact on mental health is already profound. Its immediacy and accessibility make it appealing. It offers comfort, insight, and clarity — and can be a meaningful supplement to human support. For many, it’s a much-needed first step toward seeking help.
But it can also mislead, overvalidate, reinforce unhealthy patterns, and offer solutions that don’t ultimately serve us.
Like any innovative mental health tool, AI requires rigorous testing, oversight, and ethical standards. Until that infrastructure exists, it’s up to the user to approach AI with awareness and intention.
Whether using AI as a supplement to human therapy or as a first line of support, it’s important to engage with it mindfully, staying connected to your own judgment, values, and needs.
Rebecca Hendrix, LMFT is a Manhattan-based licensed integrative holistic psychotherapist. She specializes in relationship issues, depression, anxiety, grief, and spiritual growth. You can find her on Instagram or learn more on her website.