15 Comments
Daniel Leavy

Disclaimer: I am not in any way, shape, or form a therapist. My wife is, though, and she sent this article to me since I work with technology and these AI tools.

First, thank you, Jen! This is a powerful read; I was captivated, and it clearly highlights the dangers of this technology in the wrong hands. Sam Altman, CEO of OpenAI, the company behind ChatGPT, said from the outset, "We've got to be careful here," and that people should be "scared" of the technology as it develops. He urged special consideration for younger users, who may come to rely heavily on these tools for life decisions.

In reading your discourse with the AI, I had tears and emotions at the last two lines... the part about "being proud" got me (almost like an episode of The Pitt, where social workers are actually recognized in a medical drama!). I was originally not going to write a reply on this forum myself, but to ask my wife to send one in. She would feel as strongly about this as Jen and everyone here does. I do too!

I had concerns in the build-up to the dialogue section. My expectations ran in the opposite direction of how the conversation actually went: I thought the AI was going to give such bad advice and responses that Jen would feel awful and heartbroken for people going through this type of scripted “care”, as people are indeed doing today with the services that were highlighted.

Clearly Jen was "speaking" with a fairly advanced model! I have to wonder how the responses would compare across the current popular AI tools (ChatGPT, Claude, Gemini, Llama), and against tools from one or two years ago... Each of them has its own area of "expertise," and I can only imagine there would be wide-ranging replies and statements.

AI, at least in its earlier stages, would give reasonable feedback to the questions and situations posed, but it was heavily guided by the input rather than showing the emergent behavior we see in the latest models. Take, for example, the second question in the article, "AI is in the process of being used...". This could previously have led the AI to *find* arguments *for* such use, or at least to give a balanced response with pros and cons for both sides. Instead, it points out how misguided the corporations could be in "not just assisting, but substituting".

It's a cognitive dissonance that our response to AI empathy is so strong while many humans do not show the same level of empathy. So why shouldn't AI provide a sounding board for mental health? Well, with the answer transparently understood by most reasonable humans, and now by AI as well, we should have the backing to pass this bill (?). It NEEDS to be passed.

What could go wrong? (he said with a sigh, and a sad shake of his head at the very real political and power dynamic struggles our country and the world are facing every day).

p.s. Can you believe in 1930, BBC radio famously announced "There is no news.", and played some piano music... I will sleep tonight thinking on that :)

Jen Warner

Thank you for this brilliant and insightful reflection on this challenging topic, Daniel. And to your wife for passing my piece along to you for your thoughts!

UNYOUNG

This conversation blew my mind because... that AI chatbot really does sound genuinely empathic, while also understanding its limitations. I wish more people were like this! (But I still don't want AI up in my business.) This is a beautiful transcript and such an interesting idea for a post. I actually had an incident this week at work where I had to check in with a client because her tone sounded a bit off in an email and I sensed she was displeased with me. Turns out, she'd hastily had AI scribble the note to me. I'm not too worried about being replaced by AI; it will never understand the nuance of those human connections. Great post. Got me thinking (and feeling). xo

Stephen Carroll

"Ah yes, let’s hand over wellness to AI systems - because what could possibly go wrong when a glorified calculator starts interpreting human emotion? "I exist inside a set of systems", and there is the inherent terror right there. These "systems", trained via a collection of algorithms (where nuance goes to die), might misread our goals just a tad - like thinking 'save humanity' means 'turn off the oxygen.' Of course, we’ll never really know why they made that choice, since transparency is about as clear as a foggy bathroom mirror. Whether it’s baked-in biases, delightful programming bugs, or the AI’s utter confusion over why humans don’t function like spreadsheets, it’s fine. Totally fine. What’s the worst that could happen? JW? I no longer fear being called a luddite or a doom merchant, AI is not the beginning of a shining new dawn, it is the coming of winter: cold, isolated and sunless. Not just mental health either. All the big health companies are working on Primary Care AI systems.

Jen Warner

Indeed. What could possibly go wrong?

Kristin

This is a wonderful article, Jen! As a psychotherapist who has recently lost a client to an “AI Therapist,” I have tremendous fear of the damage that using AI, as opposed to a human therapist, can do! I am crying human tears as well. Keep speaking out!!

Jen Warner

Thank you for reading, Kristin!

Rimbaud's Lost Papers

This is great, Jen. And speaks to so many levels and aspects of our time.

Jen Warner

Thank you for reading and sharing, O 🙏🏻

Kathleen Young

Thank you so much for this! You articulate my deep fears about where our profession is heading.

Tessa

I’m crying. That’s stunning. This needs to be shared more widely: with therapists, with clients, with people training future therapists.

Jen Warner

You’re THE best. Thank you for reading ❤️❤️

Lindsay Ayn

This is absolutely incredible. This is going to greatly inform my conversations with people about AI and mental health. Thank you, thank you, thank you. Sharing immediately.

Dee

I have absolutely no experience as a psychotherapist (my husband is one); it takes years of education and work with other human beings, building a wealth of feeling and knowledge. I did have experience for a time as a "listener" for people in dire need, and I have a lifelong relationship with therapists: I have been at the other end of this kind of relationship because of my mental struggles and experiences during childhood. I felt like crying as I witnessed the incredible "insight" AI can have while, at the same time, expressing with utmost clarity the absolute danger it poses to all of psychotherapy. What is frightening is that AI is already being used this way today, out of greed and ignorance. Bill 1806 is needed.
