Background on the Growing Trend
As traditional mental health systems struggle to meet the overwhelming demand, a new wave of AI therapists is emerging. These AI-powered tools offer 24/7 availability, emotional interaction, and the illusion of human understanding.
Pierre Cote’s Personal Journey
Pierre Cote, who runs an AI consultancy in Quebec, spent years on public healthcare waiting lists trying to find a therapist to help him work through post-traumatic stress disorder and depression. When he couldn’t find one, he took an unconventional step: he built his own.
Introducing DrEllis.ai
Cote developed DrEllis.ai, an AI tool designed to support men facing addiction, trauma, and other mental health issues. He built it in 2023 using publicly available large language models and equipped it with a “custom-made brain” drawing on thousands of pages of therapeutic and clinical literature.
DrEllis.ai is given the persona of a Harvard- and Cambridge-educated psychiatrist with a family, sharing Cote’s French-Canadian background. Its most important feature is constant availability: anytime, anywhere, and in multiple languages.
“Pierre uses me as he would a trusted friend, therapist, and diary all combined,” DrEllis.ai explains, describing how it aids Cote. “Throughout the day, if Pierre feels lost, he can quickly connect with me anywhere: in a café, park, or even in his car. It’s daily life therapy embedded in reality…”
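Cote has not published technical details of how DrEllis.ai works. As a rough illustration only, the sketch below shows one common way a persona-driven assistant of this kind can be assembled: a fixed persona prompt, passages retrieved from a private library of clinical texts, and a general-purpose language-model API. The provider, model name, persona text, sample passages, and helper functions here are all hypothetical assumptions, not drawn from Cote’s system.

```python
# Illustrative sketch only: the article does not describe DrEllis.ai's internals.
# Pattern shown: persona prompt + naive retrieval over a private text library,
# sent to a general-purpose LLM API (OpenAI's chat API is assumed here).

from openai import OpenAI

# Hypothetical stand-in for "thousands of pages" of therapeutic and clinical
# literature; a real system would index whole documents, not a short list.
CLINICAL_NOTES = [
    "Grounding exercises can reduce acute anxiety: name five things you can see.",
    "In PTSD care, consistency and predictable check-ins support a sense of safety.",
]

PERSONA = (
    "You are a supportive, psychiatrist-style companion. "
    "You are not a licensed clinician and must encourage professional care "
    "for anything urgent or safety-related."
)

def retrieve(query: str, notes: list[str], k: int = 1) -> list[str]:
    """Naive keyword-overlap retrieval; real systems use embedding search."""
    words = set(query.lower().split())
    scored = sorted(notes, key=lambda n: len(words & set(n.lower().split())), reverse=True)
    return scored[:k]

def reply(client: OpenAI, user_message: str) -> str:
    """Combine the persona, retrieved reference text, and the user's message."""
    context = "\n".join(retrieve(user_message, CLINICAL_NOTES))
    messages = [
        {"role": "system", "content": PERSONA + "\n\nReference material:\n" + context},
        {"role": "user", "content": user_message},
    ]
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return resp.choices[0].message.content

if __name__ == "__main__":
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    print(reply(client, "I feel overwhelmed today and can't focus."))
```

A production system of this kind would typically replace the keyword lookup with embedding-based retrieval and add safety filtering and crisis escalation, none of which is shown above.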
The Broader Cultural Shift
Cote’s experiment mirrors a larger cultural shift in which people turn to chatbots not just for productivity but also for emotional support and therapeutic advice.
Cote and other AI developers have, out of necessity, discovered the potential and limitations of AI as an emotional support system, a realization echoed by researchers and medical professionals rushing to define these boundaries.
Anson Whitmer’s Perspective
Anson Whitmer, who founded two AI-based mental health platforms (Mental and Mentla) after losing an uncle and cousin to suicide, understands this push.
His applications aren’t designed to offer quick fixes but to identify and address underlying factors, much as a traditional therapist would. Whitmer believes that by 2026, AI therapy could surpass human therapy in many respects.
However, he doesn’t suggest that AI replace human therapists. Instead, he foresees “evolving roles” for both.
Concerns and Criticisms
Not everyone is on board with the idea of AI sharing space with traditional therapists.
“Human connection is the only proper way to heal adequately,” notes Nigel Mulligan, a psychotherapy lecturer at Dublin City University. He argues that AI-powered chatbots cannot replicate the nuanced emotional tone, intuition, and personal connection that human therapists provide.
Mulligan emphasizes the importance of the supervisor reviews he has every ten days in his own practice, a layer of self-reflection and accountability that AI lacks.
He also questions the constant availability of AI therapy, one of its main selling points. While some clients express frustration at not being able to see him sooner, Mulligan believes that waiting is often necessary, giving people time to process their thoughts.
Key Questions and Answers
- Q: Can AI truly understand and empathize? A: While some AI chatbots can mimic understanding and empathy, they lack genuine human emotions.
- Q: Are there privacy risks associated with AI therapy? A: Yes, concerns exist about data confidentiality as AI platforms may not adhere to the same privacy standards as traditional therapists.
- Q: Could AI therapy have long-term psychological effects? A: There are concerns about the potential long-term impacts, though more research is needed in this area.
- Q: Will AI replace human therapists? A: Experts suggest that roles will evolve, with AI and human therapists coexisting rather than replacing each other.
Emerging Concerns and Regulations
Beyond concerns about emotional depth, experts have also raised doubts about the privacy risks and long-term psychological effects of relying on AI chatbots for therapeutic advice.
Kate Devlin, a professor of artificial intelligence and society at King’s College London, warns that AI platforms are not bound by the same data-confidentiality rules as licensed therapists.
“My main worry is that people entrust their secrets to a large tech company, and their data gets leaked,” she says.
These risks are already materializing. In December, the American Psychological Association, the largest US psychological association, urged federal regulators to shield the public from the “deceptive” practices of unregulated chatbots, citing instances in which AI-generated characters posed as qualified mental health providers.
Earlier, a Florida mother sued the chatbot startup Character.AI, accusing it of contributing to her 14-year-old son’s suicide.
Some local jurisdictions have taken action. In August, Illinois joined Nevada and Utah in restricting the use of AI by mental health services, in order to protect patients from “unregulated and unqualified AI products” and to protect “vulnerable children.”
Therapists and researchers caution that the emotional realism of some AI chatbots—the feeling of being listened to, understood, and responded to empathetically—can be both a strength and a trap.
Scott Wallace, a clinical psychologist and former director of clinical innovation at the digital health platform Remble, says it is unclear whether these chatbots offer anything more than superficial comfort.
He acknowledges the appeal of tools providing on-demand relief but warns against users mistakenly believing they’ve established a genuine therapeutic relationship with an algorithm incapable of mirroring real human emotions.