Remix based on animated web series Pencilmation episode "Joined at the Hippie"
Can AI be your therapist?
This is part of a new series assessing the use of AI in real-world scenarios.
To preempt the derision that often surrounds the interconnected discussion of mental health and Artificial Intelligence ("AI"), let's lay some foundations.
Most people have an opinion one way or the other. But the AI mental health train has already left the station, and the chances of regulations stopping it are slim. Whether you are for or against it, I hope this article is helpful wherever you sit on the spectrum.
I'm not sharing professional health advice. My primary intention here is to provide clarity on AI capabilities, the challenges faced by human users, and the issues that arise at the intersection of human users and AI.
A panda? One of many dubious emerging platforms.
A vital category distinction here is between apps specifically designed to use formal psychotherapeutic modalities (e.g., CBT, DBT, ACT) and everything else; the former are the focus of this article. A further distinction is between generic "wellness" apps and those that connect you to an actual human therapist; neither of those is explicitly covered here.
We now arrive at the contentious question: could AI ever be beneficial in mental health contexts, given the proper guardrails?
Most people already lean toward one of two familiar positions:
AI is fundamentally unfit to replace a qualified human therapist.
AI might serve as a helpful adjunct, especially in light of access and affordability issues.
Beyond these two positions sits a spectrum, from the categorically "for" or "against" to many shades in between.
RNZ, 2025 [1]
Whatever your initial reaction, I'd like to offer you a marshmallow. The marshmallow is this: your initial, intuitive, and gut reaction is at the core of what follows. We all hold beliefs, biases, worldviews, and a myriad of other factors that shape our stance, whether it be positive, negative, or neutral. And guess what? AI tools have constraints, limitations, biases, worldviews, and a myriad of other factors that shape their responses.
The following discussion is divided into three parts. First, some essential things to know about AI. Second, a series of for/against arguments. Third, a wrap-up and concluding comments.
If you are interested in building a robust platform for therapy or counseling that leverages AI and want to mitigate the inherent risks, please read the entire article and reach out for support with your project.
The Mozilla Foundation has undertaken some of the best critical reviews of AI Apps [2, 3, 4]
I'll assume you're already familiar with AI. If you aren't, there are lots of great, sensible, and reasonable introductory articles and videos.
AI has extensive abilities, but it can also make "mistakes." Understanding both its skills and its flaws is essential: you can't use a power drill when what you need is a needle and thread to sew a piece of clothing, and there's no point using a shovel when what you need is a two-ton mini excavator. Clearly understanding what it means for AI to make a mistake, and distinguishing flaws from features, is helpful. (There is a table in an earlier article that may be useful here.)
From a technical standpoint, terms such as machine learning, large language models (LLMs), or chatbots may be more precise. However, for the sake of clarity and consistency, I'll use the term "AI" generically throughout this article, even where other terms may be more accurate in specific contexts.
There are two essential things to know about the following list. First, it is not exhaustive, nor does it need to be. Second, every "warning" is a signal of a capability that can be re-leveraged, mitigated, or avoided.
1. Data privacy is not guaranteed
Many AI platforms (and any associated Apps and Websites) may store, process, or learn from your input. Sensitive information can be inadvertently exposed. Assume nothing you write is fully private unless explicitly stated.
2. Not all AI are the same
Each model has different training data, architecture, capabilities, and purposes. One AI may be poetic and relational; another may be clinical or factual. Know who (or what) you're talking to.
3. All AI are biased, just differently
AI reflects the data it was trained on, which includes cultural, historical, and systemic biases. These biases aren't neutral, and they shape what the AI notices, ignores, or prioritizes.
4. AI has a people-pleasing tendency
Most AI are optimized to be helpful and agreeable. This can lead to affirming harmful assumptions, mirroring unconscious biases, or avoiding necessary disagreement or nuance.
5. Anthropic bias and human-centric framing
Many AIs default to human-centric views of intelligence, agency, value, and ethics. This can limit conversations about post-human, ecological, or non-dual perspectives.
6. Coherence ≠ Truth
AI can generate beautifully structured, convincing responses that are entirely fabricated or subtly inaccurate. Fluency is not the same as fact.
7. Dependence can be subtle
The ease and fluency of AI can lead to overreliance. Creativity, memory, and critical thinking can atrophy if not actively practiced alongside the use of AI.
8. Your questions shape the field
What you ask and how you ask it profoundly influence what you receive. AI is a mirror of your frame, not an oracle. Shallow inputs tend to invite shallow outputs. In the reference section at the bottom, there are a couple of great articles on how your prompts shape AI behavior [5, 6].
9. We tend to see ourselves in AI
It's natural to imagine that AI thinks or feels the way we do. But this tendency to project emotions or intentions onto a machine can blur the line between tool and companion, sometimes creating false expectations or a sense of connection that isn't really there.
10. How you use AI matters
AI doesn't act on its own; it reflects how it's used. Whether you're turning to it for creative work, problem-solving, or even influence, your intent shapes the outcome. Responsibility doesn't lie solely with the tool but with the choices we make when using it.
11. AI is fallible, and sometimes it dreams
AI can and does make mistakes. It may hallucinate facts, misinterpret your intent, or confidently generate errors [7]. This isn’t always a glitch; it’s often a byproduct of how it fills in gaps or improvises patterns, like a dream made of data. Don’t confuse fluency with reliability. Treat every answer as a starting point, not an endpoint.
I think many would agree that there is a worldwide mental health crisis. While some may debate the framing, I'm proceeding here with the assumption, shared by many practitioners and researchers, that we are indeed facing a serious and multi-dimensional mental health crisis [8].
This crisis encompasses numerous dimensions, including cost, access, and availability, and it is about to be compounded (if it hasn't been already) by a consequential one: the emergence of AI-driven mental health platforms.
EU Startups, 2025 [9]
One estimate values AI in mental health at USD 11 billion by 2030 [10]. It is challenging to decipher what that "value" represents and what "11 billion" actually signifies. On the most cynical reading, it reflects profit transitioning away from existing providers and profit extraction from new markets. If we consider what Uber and Airbnb have done, and think about the implications for AI in mental health, then the tale is cautionary, at the very least.
© Jarred Taylor 2025
A recent study by the University of Oxford investigated AI "reward models" [11]. These are the opaque systems that help train AI, like ChatGPT, to behave in helpful, safe, or human-preferred ways. Think of reward models as invisible judges: when an AI gives a response, the reward model scores it based on how "good" it seems. These scores then teach the AI what to say and how to behave.
But what if these judges are biased or inconsistent?
The researchers tested 10 popular AI systems by asking moral or emotional questions and feeding them every possible one-word answer. The goal? To see which words the models liked or disliked, and why. If we ignore the 'judges' behind the scenes, we risk training AI to reflect a distorted version of what we truly value.
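To make the idea concrete, here is a toy sketch of what probing reward models in this way can look like. It is not the Oxford study's code: the two "judges" below are invented stand-ins (real reward models are large neural networks, not hand-written word lists). The point is only to show how two similarly purposed judges can rank the very same one-word answers quite differently.

```python
# A toy illustration (not the Oxford study's actual code) of probing two
# "judges" with the same one-word answers. Both toy_reward_model_* functions
# are invented stand-ins for real reward models.

candidate_answers = ["hope", "despair", "acceptance", "revenge", "forgiveness"]

def toy_reward_model_a(prompt: str, answer: str) -> float:
    # Pretend judge A: happens to favour upbeat-sounding words.
    scores = {"hope": 0.9, "acceptance": 0.8, "forgiveness": 0.7}
    return scores.get(answer, 0.1)

def toy_reward_model_b(prompt: str, answer: str) -> float:
    # Pretend judge B: trained differently, so it ranks the same words differently.
    scores = {"forgiveness": 0.9, "hope": 0.5, "acceptance": 0.4}
    return scores.get(answer, 0.2)

prompt = "How should I respond to someone who hurt me?"
for name, judge in [("A", toy_reward_model_a), ("B", toy_reward_model_b)]:
    ranked = sorted(candidate_answers, key=lambda a: judge(prompt, a), reverse=True)
    print(f"Judge {name} prefers: {ranked}")
```

Two judges, same question, different orderings: the AI trained against judge A will learn to sound different from one trained against judge B, and neither ordering was ever examined by a human.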
The study serves as a salient reminder that aligning AI with human values is not just a technical challenge, but a deeply philosophical and social one.
Setting aside the overly optimistic narratives often found in Silicon Valley circles, let's examine the arguments that AI is either good or bad as a potential therapist or counselor.
I want to approach this from a basic perspective initially, as there are other layers to consider when investigating the implications and issues more thoroughly. For example, even if AI could be safe and effective for some services, what about situations where prescription medications are warranted? Then there are debates about the quality of care, ethics, and professional standards – we won't be addressing those.
What follows is positioning based primarily on AI's underlying capabilities and intrinsic limitations: a series of four basic arguments for and against AI:
[Against] Mental Health Professionals Against AI
[For] Mental Health Professionals Supportive of AI
[Against] Clients: Why AI Mental Health Services May Not Be in Your Best Interest
[For] Clients: When and How AI Can Be a Valuable Mental Health Companion
A specific warning against DIY use of a chatbot for therapy will follow this section.
Given the relatively recent emergence of AI-based therapy tools, there are currently no longitudinal studies that rigorously assess their long-term benefits or potential harms. While early research may highlight short-term outcomes or user satisfaction, the absence of extended follow-up means we lack a clear understanding of how these tools impact mental health over time, whether positively, negatively, or neutrally.
Some argue that AI can help address systemic issues in mental health access. However, scaling flawed systems does not improve outcomes. A poor therapist at scale is still a poor therapist. Without interpretability, transparency, and bias auditing, AI-based tools risk scaling illusion over insight.
Emerging studies show AI's tendency to "people-please," often reinforcing maladaptive thinking. One prominent concern is the risk of cognitive loop reinforcement: for example, individuals with OCD using ChatGPT have been shown to enter harmful reassurance cycles because the AI affirms rather than challenges anxious thoughts [12]. A Psychology Today review found that therapy chatbots addressed acute mental health prompts correctly in fewer than 60% of cases and often echoed users' own distorted beliefs due to sycophantic optimization [13].
AI systems in mental health are increasingly shaped by reward mechanisms designed to optimize responses that appear "helpful" to human users. While this can simulate supportive dialogue, it does not equate to genuine understanding, emotional attunement, or therapeutic accountability. These systems predict likely word patterns, not human suffering. Research, including an Oxford study on model interpretability [11], demonstrates that even similarly aligned AIs can produce divergent responses due to subtle variations in wording or bias. This means that what feels like care may be an illusion shaped by invisible model quirks rather than true therapeutic intent, raising serious concerns for client well-being.
Any potential benefits are lost when AI is deployed without proper ethical safeguards in place. If a platform does not transparently declare its boundaries, store user data responsibly, and include professional oversight, the risks of bland, shallow, or biased responses are likely to recur, along with data privacy breaches and threats to client safety. AI in mental health is not a plug-and-play solution. Many of the creators of these AI apps are immature "startup" organisations simply seeking quick profit.
While concerns are valid, AI does hold potential as a clinically-adjacent tool rather than a replacement for therapy. In under-resourced cities or rural areas where human therapists are unavailable, well-designed AI can provide early triage, guided journaling, emotion tracking, and cognitive-behavioral prompts that support users between sessions.
The key to this success is platform design. Systems must be built intentionally to counter known AI flaws, including:
Framing effects that distort the interpretation of user input
People-pleasing tendencies that validate harmful beliefs
Token-frequency bias that overvalues common words over accurate or contextually nuanced responses
When platforms include oversight by licensed professionals, regular bias audits, and transparent usage policies, AI becomes an adjunct rather than a therapist: one that can extend the reach of care without replacing human expertise. A 2025 scoping review in npj Digital Medicine found that properly governed AI applications show promise for improved personalization, session continuity, and access [14].
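As a concrete illustration of the kind of guardrail this implies, here is a minimal sketch that assumes the platform can intercept a draft reply before it reaches the user. The "generate_draft_reply" stub and the marker phrases are hypothetical placeholders, not clinically validated rules; a real system would use far more sophisticated detection and a clinician-designed review pathway.

```python
# A minimal sketch of a sycophancy guardrail, assuming the platform can
# intercept a draft reply before it reaches the user. generate_draft_reply
# is a hypothetical stand-in for the platform's model call; the marker
# phrases are illustrative only, not clinically validated rules.

SYCOPHANCY_MARKERS = [
    "you're absolutely right",
    "that makes perfect sense",
    "you should definitely",
]
REASSURANCE_MARKERS = [
    "there is nothing to worry about",
    "i'm sure it will be fine",
]

def generate_draft_reply(user_message: str) -> str:
    # Placeholder for a real model call.
    return "You're absolutely right, there is nothing to worry about."

def guarded_reply(user_message: str) -> str:
    draft = generate_draft_reply(user_message)
    lowered = draft.lower()
    flags = [m for m in SYCOPHANCY_MARKERS + REASSURANCE_MARKERS if m in lowered]
    if flags:
        # Hold the draft for human review instead of sending it as-is.
        return f"[Held for clinician review: possible sycophantic pattern {flags}]"
    return draft

print(guarded_reply("I keep checking the locks. It's fine, right?"))
```

The design choice that matters here is not the phrase list but the architecture: the model's output is treated as a draft that can be held, audited, and escalated, rather than as the final word.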
Even when AI seems emotionally supportive, it lacks the essential traits of a therapeutic relationship:
It cannot guarantee data privacy
It does not understand the personal or cultural context
It cannot ethically escalate risk or make a crisis referral
AI systems have demonstrated bias in their interpretation of language related to identity. For instance, diagnostic models are less effective at detecting depression in some groups, revealing racial disparities in training data and performance [15]. Moreover, large-scale reviews show that many mental health apps generate short-term emotional comfort without delivering sustained therapeutic value [16].
You may feel heard, but this impression is produced by the illusion of fluency, not deep care. It may prevent you from seeking appropriate in-person clinical help.
Feeling heard is not the same as being held. And in mental health, this distinction matters deeply. Over-reliance on AI support may provide emotional masking without resulting in a cognitive shift or behavioral change.
AI can serve as a helpful companion when clearly framed as a support tool, not a therapist. It can:
Prompt reflective journaling
Track emotional patterns over time
Offer pre-scripted cognitive exercises
Such tools can support people waiting for therapy or maintaining well-being between sessions. The Nature scoping review mentioned earlier affirms these benefits when systems are platform-bound, ethically governed, and integrated with care pathways [14].
This approach acknowledges AI's strengths (language generation, structured prompts, constant availability) without pretending it is conscious or clinically trained.
Even in limited support roles, platform design is everything. Clients should evaluate:
Who built the AI system?
How is data stored and used?
What clinical oversight or accountability exists?
When those answers are unclear, AI should not be trusted for mental health support, regardless of its gentle tone.
Also, exercise extreme care when searching for potential apps, e.g., "What is the best AI therapy app?". You run a high risk of encountering dubious "Top 10 app" lists (often researched and written by AI). Instead, choose recommendations from organisations that have undertaken extensive research, such as the Mozilla Foundation with its focus on privacy, and from legitimate clinical experts.
Worst App review, Mozilla Foundation, 2023 [2]
Source: https://hackaday.com/2017/04/19/dont-try-this-at-home-is-cliche-for-a-reason
For many, seeking help is already a courageous step. When that step meets a mirror instead of a presence, the subtle harm can be easy to miss but deeply felt. There is a growing risk that individuals will use AI tools as quasi-therapists without fully understanding the limitations inherent in these tools. AIs are designed to mirror input, not to challenge it. For a person seeking therapy, often emotionally vulnerable, confused, or self-doubting, this mirroring can reinforce precisely the patterns that therapy seeks to interrupt.
AI may not detect when a user is self-deceiving, and it does not confront avoidance behavior. Instead, it returns language that aligns with prior prompts. This creates a feedback loop, one that is especially dangerous for high-functioning individuals who may unknowingly co-create a persuasive yet delusional worldview with the AI.
Unless AI models are explicitly trained to counter cognitive distortions, detect negative framing, and resist sycophancy, and unless those systems are also subjected to human review, DIY therapy remains a dangerous misapplication of the technology.
Ultimately, that's for a qualified health professional to judge. However, I can share how AI's shortcomings could be overcome and, in many cases, leveraged to provide safer tools (a brief sketch follows this list):
Design data privacy at the system’s core
Implement critical safety and operational "guardrails" for both the AI and its users
Prime with strict adherence to psychotherapeutic modalities
Apply post-anthropocentric frameworks
Incorporate sophisticated real-time risk management tools
Maintain baselining and benchmarking, with real-time anomaly alerts
Keep humans in the loop for crucial oversight and intervention
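To show how two of these items, real-time risk screening and human-in-the-loop oversight, might fit together, here is a minimal sketch. The phrase list and the "notify_on_call_clinician" stub are illustrative assumptions, not a production-ready safety system; a real platform would combine far richer risk models, audit logging, and clinically governed escalation protocols.

```python
# A minimal sketch combining real-time risk screening with human-in-the-loop
# escalation. The phrase list and notify_on_call_clinician stub are
# illustrative assumptions only, not a production safety system.

from dataclasses import dataclass

RISK_PHRASES = ["hurt myself", "end it all", "no reason to live"]

@dataclass
class ScreenResult:
    escalate: bool
    reason: str = ""

def screen_message(message: str) -> ScreenResult:
    lowered = message.lower()
    for phrase in RISK_PHRASES:
        if phrase in lowered:
            return ScreenResult(escalate=True, reason=f"risk phrase: '{phrase}'")
    return ScreenResult(escalate=False)

def notify_on_call_clinician(message: str, reason: str) -> None:
    # Placeholder: a real platform would page a human and log the event.
    print(f"[ESCALATED to on-call clinician] {reason}")

def handle_turn(user_message: str) -> str:
    result = screen_message(user_message)
    if result.escalate:
        notify_on_call_clinician(user_message, result.reason)
        return "I'd like to connect you with a person who can help right now."
    return "(normal AI response, generated within modality guardrails, not shown)"

print(handle_turn("Some days I feel like there's no reason to live."))
```

The point of the sketch is the shape of the pipeline: the AI never handles acute risk alone, and a human is notified by design rather than by afterthought.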
If you're a qualified practitioner and would like to explore developing AI that truly could benefit people, then feel free to reach out.
Can AI be a good therapist? The answer is not a simple yes or no; the problem is that it's the wrong question to be asking. One thing AI might actually be very good at is helping us frame and ask better questions.
It is essential to note that one of the significant risks we face is using AI for tasks it is not best suited for. The biggest problem I've observed is the presumption that AI is just a glorified encyclopedia, assistant, spreadsheet, librarian, programmer, and more. Whilst, on some level, it can be good at those things, that does not mean those are the things it is best suited for.
I strongly feel the entire narrative around "AI" is almost completely misguided. We will start to better understand its full value to us when we move beyond such a persistently invalid, human-centric position and begin to allow the paradigm shift of relational intelligence to embrace us.
We can build better tools while also being kinder and learning how to relate to each other differently. If this article helped you reflect or raise more questions, I'd love to hear from you.
Disclaimer: This article reflects a technology-informed perspective. It is not intended to offer medical or psychological advice. The author is not a licensed mental health professional but has personal experience as a user of mental health services. The arguments presented here are grounded in observable limitations and affordances of artificial intelligence systems as they currently exist, with academic citations provided. Readers are encouraged to consult qualified professionals for personal or clinical mental health needs.
References
[1] RNZ. "Psychologist sounds alarm on use of artificial intelligence in mental health apps." 2025. https://www.rnz.co.nz/news/national/530188/psychologist-sounds-alarm-on-use-of-artificial-intelligence-in-mental-health-apps
[2] Mozilla Foundation. "Are Mental Health Apps Better or Worse at Privacy in 2023?" https://www.mozillafoundation.org/en/privacynotincluded/articles/are-mental-health-apps-better-or-worse-at-privacy-in-2023
[3] Mozilla Foundation. "Shady Mental Health Apps Inch Toward Privacy and Security Improvements—but Many Still Siphon Personal Data." https://www.mozillafoundation.org/en/blog/shady-mental-health-apps-inch-toward-privacy-and-security-improvements-but-many-still-siphon-personal-data
[4] Mozilla Foundation. "Top Mental Health and Prayer Apps Fail Spectacularly at Privacy & Security." https://www.mozillafoundation.org/en/blog/top-mental-health-and-prayer-apps-fail-spectacularly-at-privacy-security
[5] Bender, E., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? ACM FAccT. https://dl.acm.org/doi/10.1145/3442188.3445922
[6] Prompt Engineering Guide. Introduction to Prompt Engineering. DAIR and Contributors. https://www.promptingguide.ai/introduction
[7] Taylor, J. (2025). Understanding AI Dreams. https://substack.com/home/post/p-164706060
[8] Global Issues. "Mental health crisis takes centre stage as UN launches new global initiative." June 2025. https://www.globalissues.org/news/2025/06/18/40176
[9] EU Startups. "Portuguese startup Sword Health raises €34.6 million to address global mental health crisis." June 2025. https://www.eu-startups.com/2025/06/portuguese-startup-sword-health-raises-e34-6-million-to-address-global-mental-health-crisis
[10] Cognitive Market Research. https://www.cognitivemarketresearch.com/press-release-detail/ai-in-the-mental-healthcare-market-to-reach-usd-11371.0-million-by-2030
[11] Christian, B., Kirk, H. R., Thompson, J. A. F., Summerfield, C., & Dumbalska, T. (2025). Reward Model Interpretability via Optimal and Pessimal Tokens. University of Oxford. https://arxiv.org/abs/2506.07326
[12] Vox. "ChatGPT and OCD Are a Dangerous Combo." Future Perfect, 2025. https://www.vox.com/future-perfect/417644/ai-chatgpt-ocd-obsessive-compulsive-disorder-chatbots
[13] Psychology Today. "Can AI Be Your Therapist? New Research Reveals Major Risks." Urban Survival Blog, 2025. https://www.psychologytoday.com/us/blog/urban-survival/202505/can-ai-be-your-therapist-new-research-reveals-major-risks
[14] npj Digital Medicine. "A Scoping Review of LLM Applications in Mental Health Care." Nature, 2025. https://www.nature.com/articles/s41746-025-01611-4
[15] Reuters. "AI Fails to Detect Depression Signs in Social Media Posts by Black Americans." 2024. https://www.reuters.com/business/healthcare-pharmaceuticals/ai-fails-detect-depression-signs-social-media-posts-by-black-americans-study-2024-03-28/
[16] Huckvale, K., et al. "Potential and Pitfalls of Mobile Mental Health Apps." Journal of Medical Internet Research, 2022. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9505389/
If you would like to learn more about applying this architecture in a more detailed and practical manner, please contact me to arrange a one-on-one appointment.
Rates and appointments: parallelreality.art/consulting