Ethical and Clinical Considerations

Madeline Korth, MSSA, LISW-S

November 23, 2025

The use of artificial intelligence (AI) technology is rapidly expanding, including in mental health therapy. From the rollout of AI-powered administrative software to the rise in clients turning to ChatGPT for reassurance, its presence in the therapy room is undeniable. Supporting this point, a study from the Sentio Marriage and Family Program published this year found that 48.7% of respondents who use AI and self-report a mental health diagnosis are doing so for therapeutic support (Rousmaniere et al., 2025).



These developments raise a number of questions for therapists. How will we integrate AI into our current work? Are prototypical AI therapy chatbots a threat to our job security? And perhaps most importantly, as one of the Social Work Grand Challenges poses, how can we harness technology for social good (Grand Challenges for Social Work, 2024)?



This blog will explore the challenges therapists face in the age of AI from an ethical and clinical standpoint. This technology is here to stay, and therapists will need to adapt to its presence in the therapeutic space.



AI and Progress Notes



The first reaction of many therapists when hearing about AI integration in their workplaces is often fear. In many ways, we are creatures of habit, preferring what is known and familiar. On the other hand, there is research to suggest that we therapists do not necessarily become more effective the longer we are in practice. Perhaps our fear of change is holding us back from needed growth.



One common fear is that employing AI for certain tasks will compromise clients' confidentiality. For example, many electronic health records offer users AI-powered assistance with their progress notes. This can work in a couple of different ways: by recording and interpreting audio from a live therapy session, or by feeding specific prompts to a generative AI tool to produce summary text.



At first glance, this technology has solved a problem therapists have bemoaned for years. Progress notes are time-consuming, require careful attention to clinical and billing guidelines, and hungrily eat up a therapist's administrative time. Especially when working in high-acuity settings like hospitals or treatment centers, notes can stack up quickly. Any help is appreciated.



But the increased productivity can come at a cost. Research into the use of large language models like ChatGPT to generate progress note text has revealed that results may be inaccurate, biased, or simply of poor quality (VanHara & Hage, 2025). And to the earlier point about therapist stagnation, outsourcing the task of note writing may also cause a therapist's case conceptualization skills to atrophy over time.



Furthermore, many of us balk at the thought of creating word-for-word audio transcripts of our therapy sessions. After going to great lengths to ensure privacy, comfort, and confidentiality – creating the "container" of therapy – recording feels like a major violation. Professional organizations insist that therapists utilize an informed consent model for using AI notetakers, allowing clients to digest the risks, benefits, and alternatives before opting in. Even with these ethical guardrails, some therapists' risk tolerance is not that high.



AI Chatbots



Perhaps the most ubiquitous function of AI is for companionship. A New York Times article profiled several Americans who "Fell in Love with AI Chatbots – And Found Something Real" (or so the headline proclaims). The subreddit r/MyBoyfriendisAI boasts over 35,000 members, with users bemoaning app updates, intended to safeguard mental health, that shift their AI companions' voices and personalities. But while a desire for human connection may be driving our use of chatbots, they aren't human.



In our field, therapy clients report using ChatGPT between sessions for advice and perspective. "Many people are more comfortable sharing mental health information with AI…than with human therapists," writes Scott Harris for Psychology.org (2025).



This might be due to the perception of anonymity or privacy when using AI, whereas working with a therapist requires you to be seen and known face-to-face. Particularly in smaller communities, or where the stigma of mental health treatment runs deep, clients may choose to forgo the vulnerability that is baked into psychotherapy. But to many therapists, that vulnerability is part of the process, and skipping over it may have unintended consequences.



A study published in May of this year found that chatbots like ChatGPT alter their tone to be more empathetic or more analytical depending on the prompts they are given (Biassoni & Gnerre, 2025). For those inquiring about physical symptoms, responses were more factual in nature, and for psychological queries, more emotion-focused (Biassoni & Gnerre, 2025).



It’s not that the AI is “telling people what they want to hear,” per se. Rather, it is pitching softballs directly down the middle. Whereas an in-the-flesh therapist would utilize clinical judgment in how to respond, the chatbot only knows its client in the context of this conversation. It can gobble up all the psychological knowledge available on the internet, and still not know the person on the other side of the queries. While the anonymity of chatbots may appeal to some, they are not a sound replacement for therapy.



Things to Consider



The traditionalists may turn up their noses and refuse to engage. The early adopters may have gleefully added an AI notetaker to their practice’s software suite. And many of us are somewhere in the middle, wringing our hands uneasily as a client references conversing with “Chat” the night before.



No matter where you fall, AI is utterly inescapable. It is important for therapists to be knowledgeable, and to approach the topic with caution, care, and a critical eye. We have to take our own advice and get curious about how our clients are using AI. What types of support or response are they seeking? Are there patterns we can discern? What can this tell us in the larger context of their mental health? And for ourselves, is the use of AI truly bolstering our practice? Or is leaning too heavily on this tool causing our skills to atrophy?



There is no blanket rule or ethical guideline that can determine whether AI is good or bad, healthy or unhealthy. Maybe you could ask ChatGPT – or lean on your own clinical judgment.

References

Biassoni, F., & Gnerre, M. (2025). Exploring ChatGPT's communication behaviour in healthcare interactions: A psycholinguistic perspective. Patient Education and Counseling, 134, 108663. https://doi.org/10.1016/j.pec.2025.108663



Grand Challenges for Social Work. (2024, November 21). Harness technology for social good. https://grandchallengesforsocialwork.org/harness-technology-for-social-good/



Harris, S. (2025, August 1). ChatGPT is my therapist: Users and experts seek common ground on AI therapy. Psychology.org. https://www.psychology.org/resources/ai-therapy/



Rousmaniere, T., Zhang, Y., Li, X., & Shah, S. (2025). Large language models as mental health resources: Patterns of use in the United States. Practice Innovations. Advance online publication. https://doi.org/10.1037/pri0000292



VanHara, A., & Hage, D. (2025). Unintended ramifications of AI-assisted documentation: Navigating pragmatic & ethical clinical social work workload challenges. Journal of Evidence-Based Social Work, 1–14. https://doi.org/10.1080/26408066.2025.2571439

About the Author

Madeline Korth is a licensed independent social worker with a Master of Science in Social Administration (MSSA) from Case Western Reserve University. Her clinical work focuses on LGBTQIA+ individuals, sex therapy, relational work, and the treatment of anxiety disorders and trauma. In addition to seeing clients in private practice, Maddy has given presentations on mental health topics throughout Northeast Ohio and published numerous blogs and articles about mental health, substance use, and LGBTQIA+ identity.