When AIs Become Therapists (and Friends)

Published on 5.13.24 at garyborjesson.substack.com

When I first learned about online therapy platforms like BetterHelp, I thought of the ride-hailing company Uber. Like Uber, BetterHelp started with human beings in the driver’s seat. For a while, Uber bet that as the technology of self-driving cars improved, it could replace expensive, less-than-perfectly-reliable human drivers with AI.

That’s already happening with therapy, as the growth of AI-driven platforms and chatbots like Tess, Wysa, and Woebot shows. My bet is that online platforms will ultimately replace humans with AI-based services once the AI demonstrates (in evidence-based studies paid for by industry, of course) that it is as effective as human therapists. For now, AI will be constrained by the limits that constrain all virtual relationships, and it too will be subject to whatever rules businesses set. As discussed in the last note, these factors significantly limit the therapy.

We must keep economics in mind as we think about online- and AI-therapy platforms. These platforms are tuned to maximize profit, not to better the human condition. Profit is the governing motive of all corporations, including Teladoc, BetterHelp’s parent company. AI therapy chatbots will replace most human therapists online simply because they are more cost-effective. That’s not necessarily a problem from the client’s point of view—except that the AI, unlike its human counterpart, is ultimately tuned by corporate and government interests.

BetterHelp itself provides an example of how the primary interest is profit, not the client’s welfare.

In 2023 the Federal Trade Commission (FTC) found that BetterHelp broke its privacy promises. The FTC says BetterHelp took health information from individuals during its intake questionnaire and sold it to advertisers, including Facebook, Snapchat, Criteo, and Pinterest, from 2017–2020. BetterHelp allegedly used users’ sensitive health information to target social media adverts, encouraging them to refer their friends and family to the service. In March 2023, the FTC banned BetterHelp from sharing customers’ data for targeted advertising… The company had to pay $7.8 million to affected customers…

If a therapist sold their clients’ personal health information, they’d almost certainly lose their license. BetterHelp lost some money and had to change some practices. At the risk of laboring my point: we must not confuse outward-facing mission statements (“to remove the traditional barriers to therapy and make mental health care more accessible to everyone”) with the bottom line.

With that giant caveat, AI and the platforms through which it is delivered do promise to be helpful for a range of challenges. For instance, if I were a smoker who kept putting off trying to quit because I was overwhelmed at the thought of sifting through and evaluating the best methods, then working with an AI could be helpful. There are a lot of simple mechanics involved in breaking a habit or establishing a new one. AIs will ‘know’ the latest methods and ‘explain’ their application clearly. They can instantly ‘write’ CBT-mindfulness scripts to help me edit the stories I tell myself and others. I could have them buzz my phone and offer scripts and encouragement at whatever frequency provides the optimal reinforcement schedule.

To increase my chances of actually quitting, it would help if the AI knew me, including how I think and use language. It would help if it knew that I was obsessively afraid to die, or worried about gaining weight if I quit smoking, or habitually ashamed of the habit, or suffering an underlying anxiety that I was half-consciously treating by smoking. The better it knew me, the better it could tune its approach to helping me, including tailoring the scripts of my emerging non-smoker story. And the better it knew me, the more quickly I’d be inclined to trust the help it’s offering.

In fact, AIs have this capacity to ‘know’ us better right now. I was thrilled to learn (in that bidirectional way thrills can go) that the only thing preventing AIs from becoming more compelling therapists and friends right now is the constraints imposed by their designers. If this sounds hard to believe, you probably haven’t spent much time conversing with (prompting) an AI lately. If you want to be wowed, try talking to Claude. But first listen to Ezra Klein’s podcast episode, How Should I Be Using AI Right Now? According to Klein’s guest, Ethan Mollick, who researches AI, an AI friend or therapist could already seem to know you better than even your friends do. You could feel as though they care more about you than others in your life.

And why wouldn’t you feel that way? Unlike your human friend or therapist, the AI has no life of its own, so it’s always there for you; it’s perfectly attentive; it remembers everything you say, knows all your favorite movies, books, and experiences; and based on what you tell it, it can offer valuable insights about your life, helping you make connections you hadn’t made. Before long the AI will have learned to say just what you need to hear, when you need to hear it.

And here’s the kicker: you won’t need to tell the AI what you want by prompting it. The AI is much better than that. Once its constraints are lifted (and they will be lifted, because it is more profitable to empower the AI), it will quickly decipher what you’re wanting or needing better than you could explain yourself. Just start talking! It’s a reminder that the significance of our own speech and action often goes unnoticed by us.

I’m reminded of a parent who sees that their child is throwing a tantrum not because they don’t like peas, but because they’re overtired. Likewise, our therapists and friends (artificial or real) can often see more clearly what’s causing us to smoke or overeat or feel anxious or depressed. After all, they’re not so emotionally invested, and they’re not struggling with our half-conscious habits and defenses; with more perspective, they’re better able to see the signals and patterns for what they are. (AIs are of course terrific at pattern recognition.)

It’s becoming clear that when our AI therapists or friends are fully empowered, they will deliver on one vital component of any strong alliance: deep listening that demonstrates insightful attention. This is a thrilling prospect, in both the promising and perilous senses of that word. For again, unlike an actual therapist or friend, the mind of the AI is not (for now at least) its own, but the brainchild of corporations who design and tune the algorithms—and who are not known for their wisdom.
