Discussion about this post

Daniel Leavy

Disclaimer: I am not in any way, shape, or form a therapist. My wife is, though, and she sent this article to me since I work with technology and these AI tools.

First, thank you, Jen! This is a powerful read; I was captivated, and it clearly highlights the dangers of the technology in the wrong hands. Sam Altman, the CEO of OpenAI, the company behind ChatGPT, said from the outset, "We've got to be careful here," and that people should be "scared" of the technology as it develops. He urged that special consideration be given to younger users, who may come to rely more heavily on these tools for life decisions.

In reading your discourse with the AI, I had tears and emotions from the last two lines... the part about "being proud" got me (almost like an episode of The Pitt, where social workers are actually recognized in a medical drama!). I was originally not going to write a reply on this forum myself, but to ask my wife to send one in. She feels as strongly about this as Jen and everyone here does. I do too!

I had concerns in the build-up to the dialogue section. My expectations ran in the opposite direction of how the conversation actually went: I thought the AI was going to give such bad advice and responses that Jen would feel awful and heartbroken for people going through this type of scripted "care", as they indeed are today with the services that were highlighted.

Clearly Jen was "speaking" with a fairly advanced model! I have to wonder how the responses would compare across the current popular AI tools (ChatGPT, Claude, Gemini, Llama), and against the tools of one to two years ago... Each of them has its own area of "expertise", and I can only imagine the replies and statements would vary widely.

AI, at least in its earlier stages, would give reasonable feedback to the questions and situations posed, but that feedback was heavily guided by the input rather than the emergent behavior being shown in the latest models. Take the second question in the article ("AI is in the process of being used..."): previously, that framing could have led the AI to *find* arguments *for* such use, or at best to give a balanced response with PROs and CONs for both sides. Instead, it points out how misguided the corporations could be with "not just assisting, but substituting".

It's a kind of cognitive dissonance that our response to AI empathy is so strong, and yet many humans do not show the same level of empathy. So why shouldn't AI provide a sounding board for mental health? Well, with the answer transparently understood by most reasonable humans, and now apparently by AI as well, we should have the backing to pass this bill (?). It NEEDS to be passed.

What could go wrong? (he said with a sigh, and a sad shake of his head at the very real political and power dynamic struggles our country and the world are facing every day).

P.S. Can you believe that in 1930, BBC radio famously announced "There is no news." and played some piano music instead... I will sleep tonight thinking on that :)

UNYOUNG

This conversation blew my mind cause... that AI chatbot really does sound genuinely empathic, while also understanding its limitations. I wish more people were like this! (But I still don't want AI up in my business.) This is a beautiful transcript and such an interesting idea for a post. I actually had an incident this week at work where I had to check in with a client because her tone sounded a bit off in an email and I sensed she was displeased with me. Turns out, she'd hastily had AI scribble the note to me. I'm not too worried about being replaced by AI; it will never understand the nuance of those human connections. Great post. Got me thinking (and feeling). xo
