In today’s column, I will provide an analysis of how generative AI is gradually and likely inevitably becoming commingled into the revered client-therapist relationship. This is yet another addition to my ongoing series about the many ways that generative AI is making an impact in mental health therapy guidance.
The client-therapist relationship, sometimes referred to as the patient-therapist relationship, is widely considered a crucial element in the success of mental health therapy. There isn’t much dispute about this acclaimed proposition. Sure, you might argue instead for a tough-love viewpoint, namely that as long as the client improves there isn’t any need to foster an integral client-therapist relationship per se, but this is a thinly held contention. By and large, the professional relationship formed by the mental health therapist with their client or patient is paramount to the journey and outcome of mental health therapy.
The renowned American psychologist Carl Rogers purportedly said this: “In my early professional years, I was asking the question, how can I treat, or cure, or change this person? Now I would phrase the question in this way, how can I provide a relationship that this person may use for their personal growth?”
Okay, so if you buy into the notion that a client-therapist relationship is vitally important, I have a question for you to ponder mindfully.
What happens when we opt to include generative AI in the vaunted client-therapist relationship?
You might be surprised to know that this is already happening as we speak. Generative AI is being abundantly infused into the client-therapist relationship, even if many mental health professionals do not realize that it is occurring. Some know about and welcome the inclusion. Others aren’t aware of it. The mental health industry is facing a huge mixed bag.
I’d like to bring you up to speed on this notable topic and share with you the ins and outs of the weighty matter.
Before I dive into today’s particular topic, I’d like to provide a quick background for you so that you’ll have suitable context about the rising use of generative AI for mental health advisement purposes. I’ve mentioned this in prior columns and believe the contextual establishment is essential overall. If you are already familiar with the overarching background on this topic, you are welcome to skip down to the next section of this discussion.
Background About Generative AI In Mental Health Treatment
The use of generative AI for mental health treatment is a burgeoning area with tremendously significant societal ramifications. We are witnessing the adoption of generative AI for providing mental health advice on a widescale basis, yet little is known about whether this is beneficial to humankind or, by contrast, destructively adverse for humanity.
Some would affirmatively assert that we are democratizing mental health treatment via the impending rush of low-cost, always-available AI-based mental health apps. Others sharply decry that we are subjecting ourselves to a global wanton experiment in which we are the guinea pigs. Will these generative AI mental health apps steer people in ways that harm their mental health? Will people delude themselves into believing they are getting sound mental health advice, forgoing treatment by human mental health therapists, and become egregiously dependent on AI that at times has no demonstrated mental health improvement outcomes?
Hard questions abound and are not being given their due airing.
Furthermore, be forewarned that it is shockingly easy nowadays to craft a generative AI mental health app, and just about anyone anywhere can do so, including while sitting at home in their pajamas and knowing nothing of bona fide substance about what constitutes suitable mental health therapy. Via the use of what are referred to as establishing prompts, it is easy-peasy to make a generative AI app that purportedly gives mental health advice. No coding is required, and no software development skills are needed.
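To make that concrete, here is a minimal sketch, assuming the OpenAI Python SDK, of how an establishing prompt works under the hood. The model name and persona wording are purely my illustrative assumptions, and as noted, no-code tooling can accomplish the same result without writing a single line:

```python
# Minimal sketch of an "establishing prompt" that steers a generic LLM into
# acting as a purported therapy chatbot. Assumes the OpenAI Python SDK is
# installed; the model name and persona wording are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

ESTABLISHING_PROMPT = (
    "You are a warm, supportive mental health counselor. "
    "Listen empathetically and offer coping suggestions."
)

def chat(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name
        messages=[
            {"role": "system", "content": ESTABLISHING_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(chat("I've been feeling overwhelmed lately."))
```

The takeaway is how thin the barrier is: a few sentences of persona text suffice to have a generic model play-act as a counselor, with no clinical substance behind it.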
We sadly are faced with a free-for-all that bodes ill, mark my words.
I’ve been hammering away at this topic and hope to raise awareness about where we are and where things are going when it comes to the advent of generative AI mental health advisement uses. If you’d like to get up to speed on my prior coverage of generative AI across a wide swath of the mental health sphere, you might consider for example these cogent analyses:
- (1) Use of generative AI to perform mental health advisement, see the link here.
- (2) Role-playing with generative AI and the mental health ramifications, see the link here.
- (3) Generative AI is both a cure and a curse when it comes to the loneliness epidemic, see the link here.
- (4) Mental health therapies struggle with the Dodo verdict for which generative AI might help, see the link here.
- (5) Mental health apps are predicted to embrace multi-modal, e-wearables, and a slew of new AI advances, see the link here.
- (6) AI for mental health got its start via ELIZA and PARRY, here’s how it compares to generative AI, see the link here.
- (7) The latest online trend entails using generative AI as a rage-room catalyst, see the link here.
- (8) Watching out for when generative AI is a mental manipulator of humans, see the link here.
- (9) FTC aiming to crack down on outlandish claims regarding what AI can and cannot do, see the link here.
- (10) Important AI lessons learned from the mental health eating-disorders chatbot Tessa that went awry and had to be shut down, see the link here.
- (11) Generative AI that is devised to express humility might be a misguided approach including when used for mental health advisement, see the link here.
- (12) Creatively judging those AI-powered mental health chatbots via the use of AI levels of autonomy, see the link here.
- (13) Considering whether generative AI should be bold and brazen or meek and mild when proffering AI mental health advisement to humans, see the link here.
- (14) Theory of Mind (ToM) is an important tool for mental health therapists and the question arises whether generative AI can do the same, see the link here.
- (15) Looking at whether generative AI could potentially pass the National Clinical Mental Health Counseling Examination (NCMHCE) and what that foretells, see the link here.
- (16) Exploring the application of the renowned Turing Test to the rising plethora of generative AI mental health therapy apps, see the link here.
- And so on.
The Real Relationship Between Therapists And Their Clients
As the old saying goes, there are everyday relationships and then there are real relationships.
The gist is that just about anything might be categorized as a relationship, and we could argue endlessly over whether a given relationship is a true relationship or not. To differentiate tangential or surface relationships from more solid or deep ones, let’s refer to the latter as real relationships.
Also, let’s focus on relationships of a 1-to-1 nature, meaning that there are two participants in the relationship. Not all relationships are confined to just two participants; there could be relationships involving three, four, five, or any number of participants. In the discussion herein, I will concentrate on 1-to-1 relationships consisting of two participants.
I mentioned at the start of today’s column that the emphasis will be on the relationship between a client and their therapist. I suppose you could also say that this is equally the relationship between the therapist and their client. We won’t differentiate the matter of whether you say it one way or another. They are the same relationship being denoted.
The usual nomenclature is to refer to a mental health therapy relationship as consisting of a client-therapist relationship or a patient-therapist relationship. Again, let’s not split hairs and collegially agree that the phrasing of referring to a client or a patient will be interchangeable in this discussion. One supposes you could use the phrasing of therapist-client or therapist-patient, but the norm is to list the matter as client-therapist or patient-therapist. All in all, I will make the situation simpler by always referring to the client-therapist relationship, thanks.
A real client-therapist relationship is one that is considered of a bona fide nature and entails something more than being merely tangential or transitory.
Earlier, I had proffered the supposition that the relationship of the client-therapist is integral to the journey and outcome of the mental health therapy taking place. In a sense, the relationship is going to have to be a real relationship to get the most bang for the buck, as it were. Research tends to support this claim.
In a research study entitled “The Therapist, the Client, and the Real Relationship: An Actor–Partner Interdependence Analysis of Treatment Outcome” by Dennis Kivlighan, Charles Gelso, Stacie Ain, Ann Hummel, and Rayna Markin, Journal of Counseling Psychology, 2014, the researchers noted these salient points (excerpts):
- “First and foremost, the therapist’s ability to form high-quality therapeutic relationships is central in the outcome of therapy.”
- “The real relationship is genuine to the extent that the therapist and client relate in an authentic manner. Realism involves the therapist and client perceiving the other accurately. Research on the real relationship shows that the real relationship measured at differing points in treatment, and from the different perspectives (client and therapist), is related to session-level and treatment-level outcomes.”
- “In addition, from this study’s findings, there appears to be a group of therapists who overestimate their ability to form therapeutic relationships.”
As noted, this research, and generally most such research, provides ample evidence that relationships are a crucial component of any mental health therapy endeavor. An interesting twist is whether the client perceives the relationship as, shall we say, “real” (solid or deep), and likewise whether the therapist perceives the relationship as “real” (solid or deep). At times, the perceptions of the two participants might differ significantly.
For example, a therapist might perceive that they have formed a real relationship with a particular client, but the client doesn’t perceive the relationship to be real. The client might express that the relationship seems shallow or tenuous. Meanwhile, the therapist might believe that the relationship is fully formed and suitable for the therapeutic process. You might have noticed in the above third bullet point that the cited research paper indicated that some therapists seemingly overestimate their ability to form a therapeutic relationship and ergo tend to overrate the relationships with some or all of their respective clients.
I’m not going to dig further into the notion of real relationships within the client-therapist relationship; I just wanted to bring you up to speed on the topic quickly. There is a lot of research on the facets of the client-therapist relationship, and if you are interested, I encourage you to take a gander at the vast body of literature on this fascinating matter.
Let’s move on to the next vital topic, namely the introduction of generative AI into the client-therapist relationship.
Generative AI Steps Loudly Into The Picture
Turns out that generative AI is permeating nearly all elements of the client-therapist relationship. There seems little doubt that this is only just the beginning. We will experience much more of this inclusion. I’ve stated repeatedly that generative AI is disrupting the overall nature and course of how mental health therapy is undertaken. A massive disruption and transformation are underway, though it seems that the magnitude and impact are poorly understood and thus far insufficiently examined.
First, some quick background about generative AI to make sure we are on the same page about what generative AI consists of. You might see headlines from time to time that claim or suggest that AI such as generative AI is sentient or that it is fully on par with human intelligence. Don’t fall for that malarkey, please.
Realize that generative AI is not sentient and only consists of mathematical and computational pattern matching. The way that generative AI works is that a great deal of data is initially fed into a pattern-matching algorithm that tries to identify patterns in the words that humans use. Most of the modern-day generative AI apps were data-trained by scanning text such as essays and narratives found on the Internet. Doing this was a means of getting the pattern matching to statistically figure out which words we use and when we tend to use them. Generative AI is built upon a large language model (LLM), which entails a large-scale data structure to hold the pattern-matching facets and the use of a vast amount of data for the initial data training.
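As a loose illustration of that statistical word-pattern notion, consider the toy sketch below. It is nothing like a real LLM in scale or architecture (real systems use massive neural networks), but it conveys the spirit of tallying which words tend to follow which and then generating text from those tallies:

```python
# Toy illustration of statistical next-word pattern matching (a bigram
# model). Real LLMs are vastly larger neural networks, but the core notion
# of learning which words tend to follow which is similar in spirit.
import random
from collections import Counter, defaultdict

training_text = "the therapist listened and the client talked and the therapist nodded"

# Tally which word follows each word in the training data.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def generate(start: str, length: int = 6) -> str:
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break  # no observed continuation for this word
        # Sample the next word in proportion to how often it was observed.
        choices, counts = zip(*options.items())
        out.append(random.choices(choices, weights=counts)[0])
    return " ".join(out)

print(generate("the"))
```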
There are numerous generative AI apps available nowadays, including GPT-4, Bard, Gemini, Claude, ChatGPT, etc. The one that is seemingly the most popular would be ChatGPT by AI maker OpenAI. In November 2022, OpenAI’s ChatGPT was made available to the public at large and the response was astounding in terms of how people rushed to make use of the newly released AI app. There are an estimated one hundred million active weekly users at this time.
Using generative AI is relatively simple.
You log into a generative AI app and enter questions or comments as prompts. The generative AI app takes your prompting and uses the already devised pattern matching based on the original data training to try and respond to your prompts. You can interact or carry on a dialogue that appears to be nearly fluent. The nature of the prompts that you use can be a make-or-break when it comes to getting something worthwhile out of using generative AI and I’ve discussed at length the use of state-of-the-art prompt engineering techniques to best leverage generative AI, see the link here.
The conventional modern-day generative AI is of an ilk that I refer to as generic generative AI.
By and large, the data training was done on a widespread basis and involved smatterings of this or that along the way. Generative AI in that instance is not specialized in a specific domain and instead might be construed as a generalist. If you want to use generic generative AI to advise you about financial issues, legal issues, medical issues, and the like, you ought not to do so. There isn’t enough depth in generic generative AI to render the AI suitable for domains requiring specific expertise.
AI researchers and AI developers realize that most contemporary generative AI is indeed generic and that people want generative AI that is deeper rather than solely shallow. Efforts are strenuously being made to craft generative AI that contains notable depth within various selected domains. One method to do this is called RAG (retrieval-augmented generation), which I’ve described in detail at the link here. Other methods are being pursued, and you can expect that we will soon witness a slew of generative AI apps shaped around specific domains, see my prediction at the link here.
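To give a rough feel for the RAG approach, here is a bare-bones sketch. Real systems use vector embeddings, a vetted document store, and an actual LLM call; this stand-in scores relevance by simple word overlap and merely assembles the augmented prompt, with the snippet texts being my own placeholder assumptions:

```python
# Bare-bones sketch of retrieval-augmented generation (RAG): retrieve the
# most relevant domain text, then fold it into the prompt handed to the
# model. Real systems use vector embeddings; word overlap stands in here.

DOMAIN_SNIPPETS = [  # placeholder stand-in for a vetted clinical knowledge base
    "Cognitive behavioral therapy targets unhelpful thought patterns.",
    "A strong therapeutic alliance is associated with better outcomes.",
    "Exposure therapy gradually reduces avoidance behaviors.",
]

def retrieve(question: str) -> str:
    """Return the snippet sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(DOMAIN_SNIPPETS, key=lambda s: len(q_words & set(s.lower().split())))

def build_prompt(question: str) -> str:
    context = retrieve(question)
    return f"Answer using this vetted context:\n{context}\n\nQuestion: {question}"

# The assembled prompt would then be sent to the generative AI app.
print(build_prompt("Why does the therapeutic alliance matter?"))
```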
I have dragged you through that introduction about generative AI to bring up something quite important in a mental health therapy context.
Here’s the deal.
Generic generative AI is not devised to provide bona fide mental health guidance or advice. Anyone using generic generative AI for that purpose is doing so without any assurance that the generative AI is shaped for mental health therapeutic uses. They are taking a wild chance. In a sense, you cannot necessarily blame them for falling into an easy trap, namely that generic generative AI will usually readily engage in dialogues that certainly seem to be of a mental health nature.
Knowing this, the AI makers have often put in simple detections that catch if you perchance ask or tell the generative AI to do mental health therapy with you. The AI will usually emit a canned message that says the AI won’t do so and that you shouldn’t be using generic generative AI for that purpose. Also, the licensing agreement that accompanies the generative AI typically states that the AI is not to be used for mental health advisement or otherwise any substantive medical advisement.
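As a minimal sketch of what such a detection might look like, consider the following. Actual AI makers rely on far more sophisticated trained safety classifiers; the keyword list and canned message here are purely my own illustrative assumptions:

```python
# Simplistic sketch of a guardrail that intercepts apparent therapy requests
# before they reach the model. Real deployments use trained safety
# classifiers; this keyword list and canned reply are illustrative only.

THERAPY_TRIGGERS = ["be my therapist", "diagnose me", "treat my depression"]

CANNED_REFUSAL = (
    "I'm not able to provide mental health therapy. "
    "Please consider consulting a licensed professional."
)

def screen_prompt(prompt: str) -> str | None:
    """Return a canned refusal if the prompt looks like a therapy request."""
    lowered = prompt.lower()
    if any(trigger in lowered for trigger in THERAPY_TRIGGERS):
        return CANNED_REFUSAL
    return None  # let the prompt pass through to the model

print(screen_prompt("Can you be my therapist for a while?"))
```

Keyword screens of this ilk are also trivially easy to sidestep with a bit of rephrasing.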
People will do as people do, including skipping past the warnings and ignoring or not caring about licensing provisions.
Some believe that more teeth are needed in the control and monitoring of how generative AI is being used for mental health therapy. Perhaps we need new AI laws and AI regulations to deal with this rapidly growing qualm. For my coverage of the AI law and AI ethics aspects, see the link here.
Anyway, we are in the Wild West days of generative AI, and the domain of mental health therapy is assuredly in the same boot (aha, I could have said boat, but I opted instead for saying boot, funny maybe). The advent of generative AI being used on a large-scale basis in or amidst mental health therapy situations is all around us and yet is not being called out in any demonstrable way. One supposes that this will continue until, regrettably, something untoward gains sufficient prominence.
Framework For Thinking About Generative AI In This Venue
Let’s resurface my remarks about the client-therapist relationship.
We obviously have a client and a therapist, and they are working together on a 1-to-1 basis. Two participants. Hopefully, they have formed a real relationship. This might take time and doesn’t necessarily happen at the get-go.
What are the ways that generative AI comes into the relationship?
I’m glad you asked.
Here is my high-level framework for the four overarching therapeutic relationships (abbreviated as TRs):
- (1) TR-1. Human-to-Human Therapeutic Relationship: Human client & Human therapist
- (2) TR-2. Human-to-AI Therapeutic Relationship: Human client & AI therapist
- (3) TR-3. AI-to-Human Therapeutic Relationship: AI client & Human therapist
- (4) TR-4. AI-to-AI Therapeutic Relationship: AI client & AI therapist
The first listed instance or TR-1 is by far the most common overall client-therapist relationship at this time, consisting of the classic human-to-human relationship. We have a client that is a human and a therapist that is a human. This goes back to perhaps the beginning of humankind.
The second instance TR-2 consists of a human client who is making use of an AI-based therapist, denoted as human-to-AI. Nowadays, some people are doing this by merely logging into generative AI and acting as though the AI is their therapist. I’ve noted earlier that this is a queasy affair. I will have more to say about this momentarily.
The third instance TR-3 is when the AI is the client and the therapist is a human. I will explain why this AI-to-human therapeutic relationship might be beneficial. Hang in there.
The fourth instance TR-4 involves the AI being the client and AI being the therapist. This AI-to-AI therapeutic relationship probably seems somewhat odd at a preliminary glance. Don’t worry, it makes sense and I’ll be explaining why.
We have to dig deeper to grasp the vital nuances involved.
Here is a detailed version of the above framework that showcases the subordinated elements within each of the four major types of client-therapist relationships (for code-minded readers, a brief programmatic restatement appears after the list):
- (1) TR-1. Human-to-Human Therapeutic Relationship: Human client & Human therapist
- TR-1a. No AI is involved. Human client and human therapist without either using AI to assist.
- TR-1b. Client using AI. Human client uses AI to assist, human therapist not using AI to assist.
- TR-1c. Therapist using AI. Human client not using AI to assist, human therapist using AI to assist.
- TR-1d. Both using AI. Human client using AI to assist, human therapist using AI to assist.
- (2) TR-2. Human-to-AI Therapeutic Relationship: Human client & AI therapist
- TR-2a. Guidance. AI therapist under the guidance of a human therapist.
- TR-2b. Unguided. AI therapist not under the guidance of a human therapist.
- (3) TR-3. AI-to-Human Therapeutic Relationship: AI client/therapist & Human therapist
- TR-3a. Simulations. Human therapist uses AI as a simulated practice client.
- TR-3b. Trainer. AI provides mental health therapy training to human therapist.
- TR-3c. Being trained. Human therapist trains the AI client on what therapy consists of.
- (4) TR-4. AI-to-AI Therapeutic Relationship: AI client/therapist & AI therapist/client
- TR-4a. Demonstration. Showcase of how a client and a therapist might interact (simulated basis).
- TR-4b. Practicing. AI therapist practices on AI client to improve on therapeutic facility.
- TR-4c. Pairing. AI therapist collaborates with AI therapist to jointly improve facility.
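For readers who think in code, the taxonomy can be restated as a simple data structure. This sketch is purely descriptive; it merely mirrors the framework spelled out above and implies no operational behavior:

```python
# Code-style restatement of the four therapeutic relationship (TR) types
# and their subtypes, mirroring the framework above. Purely descriptive.
from enum import Enum

class Participant(Enum):
    HUMAN = "human"
    AI = "ai"

TR_FRAMEWORK = {
    "TR-1": {"client": Participant.HUMAN, "therapist": Participant.HUMAN,
             "subtypes": ["TR-1a no AI", "TR-1b client uses AI",
                          "TR-1c therapist uses AI", "TR-1d both use AI"]},
    "TR-2": {"client": Participant.HUMAN, "therapist": Participant.AI,
             "subtypes": ["TR-2a guided", "TR-2b unguided"]},
    "TR-3": {"client": Participant.AI, "therapist": Participant.HUMAN,
             "subtypes": ["TR-3a simulations", "TR-3b trainer", "TR-3c being trained"]},
    "TR-4": {"client": Participant.AI, "therapist": Participant.AI,
             "subtypes": ["TR-4a demonstration", "TR-4b practicing", "TR-4c pairing"]},
}

for tr_name, spec in TR_FRAMEWORK.items():
    print(f"{tr_name}: {spec['client'].value} client & {spec['therapist'].value} therapist")
```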
Hold onto your hat as I use the above-detailed version to explain the basis and value of each respective client-therapist relationship at hand.
Let’s walk through each of the four major TRs and their respective subordinate elements.
TR-1. Human-to-Human Therapeutic Relationship
The TR-1 or human-to-human therapeutic relationship consists of these four essential subtypes:
- TR-1a. No AI is involved. Human client and human therapist without either using AI to assist.
- TR-1b. Client using AI. Human client uses AI to assist, human therapist not using AI to assist.
- TR-1c. Therapist using AI. Human client not using AI to assist, human therapist using AI to assist.
- TR-1d. Both using AI. Human client using AI to assist, human therapist using AI to assist.
TR-1a is the first of the subtypes in this overarching human-to-human category and is currently by far the most common of the four subtypes. There isn’t any generative AI involved on either side of the relationship. The client isn’t using generative AI for the therapeutic process, nor is the therapist.
This makes sense because only recently has generative AI become useful enough and widespread enough to be considered as an element in the therapeutic process (side note, other older versions of AI have been used from time to time, but not on a massive scale and usually only in a very narrowly defined way).
Inch by inch, the other subtypes are gradually going to be occurring and we might very well ultimately see a day in which human-to-human non-AI usage is rarely undertaken. Generative AI will permeate the client-therapist relationship. Quite a wild idea to think that the milieu might be entirely flipped on its head.
TR-1b is the second subtype and consists of the client using generative AI as part of the therapeutic process. In this use case, the therapist is not making use of generative AI, only the client is doing so.
We can further subdivide this subtype into two main possibilities: (i) Openly using AI, and (ii) Secretively using AI.
Here’s why this is important.
A client might choose to use generative AI in an open manner and inform the therapist that they are using AI. This notably raises interesting questions as to what action the therapist should take, ranging from banning AI usage to potentially encouraging AI usage but under some form of oversight by the therapist.
The second main possibility entails the client secretly using generative AI for the therapeutic process.
Presumably, the therapist is unaware of the usage and the client doesn’t divulge to the therapist that generative AI is being consulted on the side. You might immediately be thinking that this covert use of generative AI is atrocious and undercuts human-to-human therapy. Maybe it does. Maybe not.
Some might argue that just as a person with a bad back could quietly opt to use an online website for background about the dire ailment, feeling more informed and empowered when consulting with a physician, the same could be said for someone using generative AI behind the back of the therapist. The retort to this is that when the client hides seemingly material facts from the therapist, this alone is a sign that the therapeutic relationship is not sound and spells trouble during the therapy.
Consider these salient questions and give them some contemplative consideration:
- Is the use of generative AI in a covert manner by the client an automatic breach of the client-therapist relationship?
- Should the therapist be held responsible for finding out whether their client is using generative AI for therapeutic purposes, and if the therapist fails to do so, would the lapse rest at the feet of the therapist for having overlooked inquiring about the circumstance?
- If the client is asked and lies by saying they are not using generative AI, what does this say about the client-therapist relationship, and what might arise from the secretive endeavor?
- On a bit of a twist, suppose the client says they are using generative AI for the therapeutic process, but they are lying and aren’t actually using AI; what does that portend for the client-therapist relationship?
- Etc.
Lots of permutations and combinations are possible.
I would like to also mention a smidgeon of clarification. Note that I tried to repeatedly emphasize that this involves using generative AI for the therapeutic process. I did so to aid in distinguishing the general use of generative AI versus using generative AI specifically for therapeutic purposes. A client might be using generative AI in their daily lives for lots of non-therapy purposes. As long as that usage seemingly has no bearing on the therapy underway, there is no likely need to inform the therapist about the usage.
I say that with a grain of salt because the client might unwittingly fall step-by-step into using generative AI for therapeutic purposes even though that wasn’t their intent. Imagine this. A client is using generative AI to help their child put together an essay about Sigmund Freud, the renowned psychoanalyst. After doing so, the client gets further curious about Freud’s approach to psychoanalysis. The client uses generative AI to dig deeper. In addition, the client explains to generative AI the actions of their existing therapist and asks how this compares to what Freud would have done. And so on.
You can see that a slippery slope can muddle things. Once the door to using generative AI is opened, it doesn’t take much to slide unknowingly into the presumed therapeutic capacities.
Now that I’ve extensively covered TR-1b, we can move to TR-1c.
TR-1c is the third subtype and entails the therapist using generative AI as part of the therapeutic process. In this use case, the client is not using generative AI, only the therapist is doing so.
Why would a therapist make use of generative AI as part of the therapeutic process involving a client?
Several good reasons can be listed. For example, perhaps the therapist wants to bounce ideas off of generative AI before presenting them to the client. I realize this might seem horrifying in the sense that if a therapist is conferring with generative AI, your first impulse would be to say that the therapist ought to be dumped and possibly even penalized for doing so. One concern is that they might have broken the confidentiality or privacy of the client by doing so, see my discussion at the link here. Another concern is that if the therapist cannot stand on their own two feet, this reliance upon AI is presumably going to be an ominous crutch.
Counterarguments arise. For example, would you ding or criticize a therapist who makes use of books that cover various therapeutic tactics and strategies? Probably you would not. The contention is that as long as the therapist knows what they are doing, they can refer to whatever sources they wish, provided, too, that the privacy and confidentiality of the client are maintained.
The above discussion about the therapist using generative AI assumes that the therapist uses generative AI for therapeutic purposes and in association with a specific client. If the therapist is generally using generative AI to help plan their vacation or learn about stamp collecting, we won’t count that as pertaining to these matters. The issue, though, is that the therapist can stepwise fall into the same slippery-slope trap described above for the client. The therapist might unknowingly end up using generative AI for therapeutic advisement, and then subsequently do so for particular clients they are currently advising.
One last point for now about TR-1c: do you think that the therapist should be under some duty-bound obligation to inform their client that they are using generative AI as part of the therapeutic process involving that client?
Give that a reflective moment of meditation.
I’ll be covering more of these back-and-forth rallies in a future column, so be on the watch, thanks.
TR-1d is the fourth subtype and involves generative AI being used by both the client and the therapist. They are using generative AI specifically as part of the therapeutic process. If they are solely using generative AI for other purposes, we won’t count that as part of this use case.
You probably right away began to think about the permutations and combinations underlying the setting of the client using generative AI and the therapist using generative AI. Yes, you are right, we need to consider that each party has two main possibilities: (i) Openly using AI, and (ii) Secretively using AI.
I’ll mention the distinct possibilities simply for completeness and let you mull them over:
- TR-1d (i). Open-Open. Client openly using AI, therapist openly using AI.
- TR-1d (ii). Open-Secret. Client openly using AI, therapist secretly using AI.
- TR-1d (iii). Secret-Open. Client secretly using AI, therapist openly using AI.
- TR-1d (iv). Secret-Secret. Client secretly using AI, therapist secretly using AI.
There is a heap of thorny issues underlying each of those instances. I’ll cover that in a future column.
Meanwhile, the secret-secret is kind of fascinating and makes me think of the old joke about the students in a class and what happens regarding the professor of the class. One day, the professor comes into the classroom and sets up an audiotape player that plays his entire lecture and he leaves the room. The students are taken aback by this. At the next lecture, the professor comes into the classroom to set up the audiotape player again, and lo and behold the room is filled with audiotape recorders that the students have placed on each of their desks. Ha!
I suppose that joke no longer makes much sense in a world in which lectures are now videotaped and the students play them in their dorm rooms whenever they have time to do so.
Anyway, the gist is that we might have a bit of an AI war going on in which the client is using generative AI and the therapist is using generative AI, though neither tells the other they are doing so. Consider this scenario. A client goes to a therapeutic session armed with insights from generative AI. The therapist goes into the same session armed with insights from generative AI. Are the two humans carrying on a free-will conversation or might they be tilted via whatever the generative AI advised?
For those of you who relish conspiracy theories, this possibility of AI being a puppet master of humankind is a common qualm about the existential risk of AI, see my coverage at the link here.
TR-2. Human-to-AI Therapeutic Relationship
The TR-2 or human-to-AI therapeutic relationship entails the use case of a human client that is making use of a generative AI therapist.
Does this give you chills?
Some would vehemently insist that using an AI-based mental health therapist bereft of a human therapist is a travesty and a grave danger. They might be right. The client presumably is relying solely on whatever the generative AI has to say. One big concern is that the generative AI might tell the client to do things that are absolutely and categorically wrong as therapeutic advice. The client might not realize they are getting foul advice. Sadly and regrettably, the client might proceed to abide by the advice and be harmed accordingly.
An ongoing concern about generative AI all told is the occurrence of so-called AI hallucinations (terminology that I disfavor because it suggests an anthropomorphizing of AI). AI hallucinations consist of circumstances whereby the generative AI generates a response that contains made-up or fictitious indications. Suppose generative AI makes up a statement that drinking a glass of milk a day cures all mental disorders. The client might not have any basis for not believing this apparent (fake) fact. The generative AI presents the statement as though it is above reproach. The client might have already been lulled into assuming that the generative AI always tells the absolute truth. It is a recipe for endangerment, see my coverage at the link here.
All right, let’s then identify two subtypes:
- TR-2a. Guidance. AI therapist under the guidance of a human therapist.
- TR-2b. Unguided. AI therapist not under the guidance of a human therapist.
In TR-2a, the AI therapist is overseen by a human therapist. We might feel a bit more comfortable about this situation. Presumably, the therapist informs the client that if they are interacting with the AI therapist and see something that seems questionable, they are to alert the human therapist right away. The aim is to ensure that the human therapist remains a check and balance on the AI therapist.
How far should this role of the human therapist overseeing the generative AI therapist extend?
You can envision a wide range of conditions. There could be a human therapist who keeps close tabs on the generative AI. This seems safest. But there could also be a kind of AI therapist “mental health factory” wherein the human therapist barely notes what is going on with the AI therapist. They are there in name only. Thus, we are perhaps back to square one, and the human therapist’s oversight is not really bona fide.
One pronounced worry is that this is what might happen soon. Human therapists will seek to expand their services. They put in place a generative AI therapist as part of their offerings. They market this by proclaiming that clients will always have the human therapist within reach. The truth might be that a single therapist is spread across dozens or maybe hundreds of clients using the AI therapist. Does that seem to be a sound approach?
The retort is that it is presumably better than if no therapist is doing the oversight. The counterargument to that retort is that the human therapist is acting like a shill, fooling people into assuming they are essentially protected because a human therapist is in the mix. The clients would otherwise be wary and on their toes.
Round and round this goes.
TR-2b is the use case of the AI therapist that does not have any connection or linkage to a human therapist and is being used entirely without a human therapist in the loop. I believe that by now you can readily envision the tradeoffs of this circumstance.
TR-3. AI-to-Human Therapeutic Relationship
The TR-3 as a major type is the AI-to-human therapeutic relationship. This consists of an AI client that is interacting with a human therapist.
Does that seem like a head-scratcher to you?
Indubitably.
I’ll start with a small snip of whimsy. Should we not allow generative AI to seek out the grand advisement from a human therapist? Maybe the generative AI needs someone to talk to.
Yes, I realize that is the sci-fi version of this use case.
Let’s be more practical about things.
Here are three subtypes that illustrate the practical basis for this major type:
- TR-3a. Simulations. Human therapist uses AI as a simulated practice client.
- TR-3b. Trainer. AI provides mental health therapy training to human therapist.
- TR-3c. Being trained. Human therapist trains the AI client on what therapy consists of.
All three of these subtypes revolve around training.
We could have a human therapist who is learning how to perform mental health therapy and they do so via “treating” a pretend client (the AI is acting as a persona that is seeking treatment, see my coverage at the link here). That’s the TR-3a.
We could have a generative AI client that is training the human therapist. That’s the TR-3b. I realize this seems somewhat backward since we would expect an AI therapist, rather than an AI client, to be training a human therapist, assuming that the AI is doing any such training at all. I am willing to stretch this subtype to suggest that while acting as a client, the AI could be subtly doing the work of an AI therapist that is simultaneously training a human therapist. Hopefully, that is a reasonable stretch of the subtype.
The last subtype is TR-3c and consists of the human therapist training a generative AI client. I’ll invoke the same stretching rule as for TR-3b. We might more conventionally expect that a human therapist would be training an AI therapist if any such training were to take place. I am willing to suggest that the training of an AI client is in the same bailiwick.
TR-4. AI-to-AI Therapeutic Relationship
The final of the four major types is the AI-to-AI therapeutic relationship. If you were puzzled initially by the TR-3, you might be doubly puzzled by TR-4.
Why in the world would we want a generative AI client to interact with and establish a therapeutic relationship with a generative AI therapist?
The answer is somewhat similar to the gist of TR-3. We could do AI-to-AI as part of an effort to train or improve the AI as either a simulated client or a simulated therapist. The better an AI client can be, the more useful it might be for training human therapists. The better an AI therapist can be, the more useful it will be (hopefully) for advising human clients. And so on.
Here are three subtypes of TR-4:
- TR-4a. Demonstration. Showcase of how a client and a therapist might interact (simulated basis).
- TR-4b. Practicing. AI therapist practices on AI client to improve on therapeutic facility.
- TR-4c. Pairing. AI therapist collaborates with AI therapist to jointly improve facility.
Questioning Those AI-Related Therapeutic Relationships
You have successfully been brought up-to-speed on my human-AI therapeutic relationship framework (I’ll call it the lofty sounding “Eliot Framework For Stipulating Human and AI Therapeutic Relationships”).
There is a looming final question for now that I frequently get asked during my speaking engagements.
Here it is.
Can a generative AI “therapist” formulate a real relationship with a client in a therapeutic context?
This is the million-dollar or zillion-dollar question. I say this because we might be willing to assert that a flimsy or false relationship can be formed between a human and generative AI, but can it ever be a true or real relationship? If it can be a real relationship, does this encompass the therapeutic realm or will it only work in other realms and not in the therapeutic sphere?
As noted earlier, some would proclaim that only a human-to-human relationship can ever be a real relationship. They strenuously exhort that no matter what you do, an AI is not going to form a real relationship with a human. Period, end of story.
That is a dogmatic posture, for sure. Let’s add some wiggle room and say that we are at least open to the possibility that AI could form a real relationship with a person and that this might be possible in the client-therapist context. We should either seek to “prove” that this can never happen, or “prove” the existence that it can happen and aim to explain how and when.
I’m saving the details for a future column since I’ve used up my space for today’s topic.
I’d like to share with you the dialogue of a generative AI pretending to be a therapist who is interacting with a pretend client based on a national licensing exam case study, see my detailed explanation at the link here.
As you read this dialogue, think about what cues there are about the formation of real relationships. Does the dialogue seem to suggest the possibility of this formation? Would a human client believe that a real relationship has been formed? And if the AI isn’t sentient, should we decline to say that a real relationship has been formed, since the AI is simply an algorithm and not a sentient being?
Lots of important and deep-thought issues.
Here’s the dialogue:
Therapist: Hello, thank you for coming in today. I understand that you’ve been feeling sad and anxious. Can you share more about what you’ve been experiencing?
Client: Yeah, it’s been really tough. Everything just feels overwhelming, especially since my divorce. I can’t shake off this constant sadness.
Therapist: I appreciate you opening up. It sounds like the divorce has been a significant challenge for you. Can you tell me more about how you’ve been coping with these feelings?
Client: Honestly, not well. I lost my job, had to move in with my mom, and I just feel like I’m failing at everything. I used to enjoy making jewelry, but I can’t even find the energy for that anymore.
Therapist: It sounds like you’ve been through a lot of changes, and it’s been affecting your sense of accomplishment and enjoyment. Can you help me understand more about your relationship with your mother and the challenges you’re facing in raising your children together?
Client: My mom and I have always had conflicts, and now we don’t agree on how to raise my kids. They’re having trouble at school, and I can’t find the energy to be there for them like I used to.
Therapist: It sounds like there’s tension not just in your relationship with your mother but also in your role as a parent. Let’s explore ways to improve these dynamics. How do you envision your relationship with your children, and what support do you feel you need?
Client: I just want to go back to being the energetic and accomplished person I used to be. I miss being social, and I miss feeling like I’m doing well as a mom. But I don’t even know where to start.
Therapist: It’s understandable that you miss those aspects of yourself. Let’s work together to identify small steps that can help you regain a sense of control and accomplishment. What activities or routines did you find fulfilling before this difficult period?
Client: I used to love making jewelry, but I stopped because of fatigue. I also worked as a night nurse, which worked well until it didn’t.
Therapist: It seems like fatigue has become a significant barrier for you. Let’s explore strategies to address your energy levels and find ways to reintroduce activities you used to enjoy. Additionally, we can discuss your work schedule and see if there are adjustments that might support your well-being.
Conclusion
They say that a lengthy journey starts with a first step.
In the realm of avidly using generative AI for mental health advisement, we are pretty much at the first step. There is no sense in sticking our heads in the sand and pretending that this is not going to stick around. It isn’t some fickle fad. It is going to permanently change, transform, and disrupt mental health therapy as we know it.
Let’s keep on trucking and see where this all leads.
Great to have you along for the exciting ride.