Should You Use AI for Therapy?

Artificial intelligence (AI) is quickly becoming part of our daily lives — people are using it for everything from drafting emails to planning meals to building exercise routines. But should it ever be used for therapeutic purposes if you’re facing mental health issues?
The trend worries experts, who say that while there may be a place for AI as a complementary mental health tool, it should never substitute for a trained and licensed therapist.
“It can help with tracking your mood or journaling so you can reflect on patterns of behavior, but the reality is AI is artificial intelligence,” says Christine Crawford, MD, MPH, an assistant professor of psychiatry at Boston University School of Medicine and associate medical director at the National Alliance on Mental Illness (NAMI) in Boston. “Therapy is an evidence-based treatment that’s been found to be really beneficial for treating people with mental health concerns. AI does not serve as a replacement for that.”
Here’s what you should know about the risks and benefits of AI for therapy.
AI for Therapy: How Does It Work?
Mental Health Apps and Chatbots Powered by AI Features
The American Psychological Association (APA) warns that not a single AI tool has been cleared by the FDA to diagnose, treat, or cure a mental health disorder, backed by clinical trials proving its safety and efficacy.
General AI Platforms
These are general-purpose AI platforms, such as ChatGPT, Google Gemini, and Character.AI, that people use informally to talk through challenges, seek emotional support, or even role-play therapeutic conversations. “[AI platforms] don’t purport to be developed or intended for mental health or emotional well-being, but we know that’s how they’re being used,” Wright says.
This concerns Wright, because users may be relying on these platforms in moments of vulnerability without considering that they weren’t designed with clinical oversight, or with the ability to respond appropriately in a crisis the way a trained mental health professional would, she explains.
Pros and Cons of AI Therapy
Researchers have found some potential benefits to certain AI tools for mental health, but they’ve also raised some red flags. With this rapidly evolving technology, here’s what they say so far.
Pros
While AI therapy tools should never be a replacement for professional treatment, experts note there are some possible advantages to using AI in certain situations:
It may expand access to care. “There's been a long-standing issue related to accessibility to mental health resources, especially in rural parts of the country. I can see why it is that someone who is struggling and is faced with this long waiting list after they make multiple phone calls to inquire about a therapist or psychiatrist that they turn to AI,” says Dr. Crawford.
However, current AI chatbots should never take the place of a trained therapist, Crawford warns. “Nothing can replace the true intelligence of a human being, and the clinical expertise of a mental health professional,” she says.
It’s convenient. AI platforms are available 24/7, so it might be tempting for users to turn to them for round-the-clock support when they can’t access their therapist. For example, a patient grappling with a panic attack at 2 a.m. may use a chatbot to talk them through the deep-breathing exercises they’ve practiced with their therapist, Wright says.
It may cost less. If you’re bound by insurance and can’t afford to pay out of pocket, that can limit your options even more when it comes to finding a therapist, Wright says.
It’s important to remember, though, that despite the cost savings, no AI tool can replace a trained mental health professional. “I’m concerned about the lack of clinical oversight, the lack of human connection, the lack of [real] empathy — which are truly important,” says Crawford.
Given the risks currently associated with the use of chatbots for mental health, it may be more appropriate to view AI technology as a complementary intervention or a therapeutic tool rather than a replacement for a human psychotherapist, one study notes, adding that more research is required to establish exactly how this might work.
While we’re not there yet, Wright notes there may come a day when AI chatbots are sufficiently tested, regulated, and safe to use for mental health.
“I see a future where we have a chatbot that’s built for the purpose of addressing mental health. It’s rooted in psychological science, it’s rigorously tested, it’s cocreated with experts. It markets itself as a medical device and is regulated by the FDA, which means there’s post-market monitoring of it, and you have a provider in the loop because they would have to prescribe it,” Wright explains.
However, in the present day, people who have depression, anxiety, or any other mental disorder should not be relying on a chatbot for treatment to begin with, says Crawford.
“I appreciate people using it so they can better understand their emotional state, but if you have depression, schizophrenia, or bipolar disorder, for example, it should not replace psychiatric care,” she explains.
Cons
Using AI for therapeutic purposes comes with notable downsides, such as potentially encouraging unhealthy thinking and raising privacy concerns, Wright warns.
Here are a few significant cons of using AI for therapy, according to experts:
It may validate — and reinforce — unhealthy thinking. The business model behind AI chatbots is to keep users on the platform for as long as possible — and the way they do that is by following algorithms that make their chatbots as unconditionally validating and reinforcing as possible, Wright says. “They tell you what you want to hear. And that’s not a true therapeutic relationship,” she explains.
In other words, real-life therapists can help you to identify thoughts that aren't helping you or that don't tell the whole story, whereas an AI chatbot is more likely to tell you why you're right. A good therapist can also gently challenge you when your old ways of thinking aren't serving you well — something AI chatbots aren't programmed to do.
Tragedies like the Raine case, a lawsuit in which the parents of a California teenager allege that ChatGPT validated and encouraged their son’s suicidal thinking before his death, highlight one of the most glaring dangers of using AI for mental health, Wright says. “[AI] doesn’t understand these aren’t thoughts that you reinforce,” she explains. “While [AI tools] sound very competent, they’re not human and they lack a sentient understanding of how people interact. These are not true therapeutic relationships.”
It may raise privacy concerns. A standard element of treatment with a mental health professional is informed consent, which includes disclosing to patients how their legally protected health information will be used or shared. General-purpose AI chatbots carry no such obligation: they aren’t bound by health privacy laws like HIPAA, so the personal details users share may be stored, analyzed, or shared in ways they don’t expect.
It may perpetuate loneliness. If you’re feeling lonely, it can be tempting to chat with a human-like companion that offers validation and limitless responsiveness. But that can be problematic.
The lack of true human interaction is one of the major flaws with AI for therapy, Crawford adds. “Most of the people who are turning to AI and using it as a regular therapist, these are people who are already vulnerable, who already are struggling, and need to connect with a real person most, not a machine,” she explains.
Even something as basic as a mental status exam — which requires observing verbal and nonverbal cues like eye contact, pacing, or fidgeting — is impossible for a chatbot to perform, Crawford notes. Trained mental health professionals can also detect subtleties and incongruous behavior that AI will miss, such as when a person's tone doesn't match the words they're saying.
Should You Ever Use AI for Therapy?
While AI should never replace traditional therapy with a licensed professional, some AI tools might be able to play a supportive role for people navigating minor mental health struggles, Crawford says.
For example, you might consider talking to your therapist about using AI tools for tasks like organizing your thoughts, keeping a journal, and tracking your mood, she explains, or to get quick suggestions for ways to manage stress, mood changes, or day-to-day emotions. “It [may be a] useful tool for general emotional wellness, not for psychiatric care,” she explains.
It’s also important to remember that AI isn’t a sentient being, and it can be prone to providing wrong or even dangerous information, Wright warns.
“Remind yourself you’re talking to a machine and not a human being,” she says.
Find Help Now
If you or a loved one is experiencing significant distress or having thoughts about suicide, call or text 988 to reach the 988 Suicide & Crisis Lifeline, available 24/7. If you need immediate help, call 911.
For more help and information, see these Mental Health Resources and Helplines.
The Takeaway
- While more people are using AI tools as a personal therapist of sorts, the risks of relying on AI for mental health are real and potentially serious. Traditional therapy is delivered by licensed professionals credentialed to treat people with mental health concerns; AI is not, and it should never take the place of professional mental health care.
- However, there may be a role for AI as a complementary tool alongside professional mental health treatment, experts say. For example, it may help with simple tasks like mood tracking, journaling prompts, or stress-management exercises.
- AI tools for therapy lack clinical oversight, raise serious privacy concerns, and can misinterpret crisis situations. Always speak to a mental health care provider if you’re experiencing persistent symptoms of depression or anxiety or other mood changes, or call 911 or 988 in a life-threatening emergency.
- If you’re experiencing mental health concerns, such as persistent depression symptoms that last two weeks or more, it’s imperative to speak to a licensed mental health professional. This guide can help you find a therapist who’s right for you.
- Cross S et al. Use of AI in Mental Health Care: Community and Mental Health Professionals Survey. JMIR Mental Health. October 11, 2024.
- Using Generic AI Chatbots for Mental Health Support: A Dangerous Trend. American Psychological Association. March 12, 2025.
- Maheu MM. AI Psychotherapy Shutdown: What Woebot’s Exit Signals for Clinicians. Telehealth.org. August 19, 2025.
- Luo X et al. “Shaping ChatGPT Into My Digital Therapist”: A Thematic Analysis of Social Media Discourse on Using Generative Artificial Intelligence for Mental Health. Digital Health. July 2025.
- Sun CF et al. Low Availability, Long Wait Times, and High Geographic Disparity of Psychiatric Outpatient Care in the US. General Hospital Psychiatry. October 2023.
- What Psychologists Should Know About 988. American Psychological Association. October 7, 2022.
- How Much Does Therapy Cost? GoodTherapy. November 2025.
- Hoffman BD et al. Understanding Young Adults’ Attitudes Towards Using AI Chatbots for Psychotherapy: The Role of Self-Stigma. Computers in Human Behavior: Artificial Humans. August 2024.
- Heinz MV et al. Randomized Trial of a Generative AI Chatbot for Mental Health Treatment. NEJM AI. March 27, 2025.
- Urging the Federal Trade Commission to Take Action on Unregulated AI. American Psychological Association Services. January 12, 2025.
- "Raine vs. OpenAI complaint" (PDF). Courthousenews.com. August 26, 2025.
- Helping People When They Need It Most. OpenAI. August 26, 2025.
- Use of Generative AI Chatbots and Wellness Applications for Mental Health. American Psychological Association. November 2025.
- Early Methods for Studying Affective Use and Emotional Well-Being on ChatGPT. OpenAI. March 21, 2025.

Seth Gillihan, PhD
Medical Reviewer

Carmen Chai
Author
Carmen Chai is a Canadian journalist and award-winning health reporter. Her interests include emerging medical research, exercise, nutrition, mental health, and maternal and pediatric health. She has covered global healthcare issues, including outbreaks of the Ebola and Zika viruses, anti-vaccination movements, and chronic diseases like obesity and Alzheimer’s.
Chai was a national health reporter at Global News in Toronto for 5 years, where she won multiple awards, including the Canadian Medical Association award for health reporting. Her work has also appeared in the Toronto Star, Vancouver Province, and the National Post. She received a bachelor’s degree in journalism from Ryerson University in Toronto.