
Are we really going to turn over mental health therapy to AI chatbots?


It would appear that the expanding presence of artificial intelligence-driven chatbots in our online lives isn't going away any time soon. First, we heard from Google's bot named LaMDA, a virtual "friend" and potential digital assistant that carries on compelling conversations while seeming to be sentient. (The jury is pretty much in at this point and it's probably not sentient.) Unfortunately, it can also sound pretty racist at times. Then we heard from Meta's offering named BlenderBot, which I've now had a number of lengthy conversations with. BB isn't nearly as smooth a talker as LaMDA, but it's improving as it goes through beta. It does seem to have a weird obsession with Taiwan and UFOs, though.

Those bots, at least thus far, are only being developed for use as digital assistants that can help users find information quickly and perhaps offer suggestions on technical problems. None of them are actually trying to perform medical procedures. But now we have a new entry shaking up the field. A team of developers has introduced "TheraBot." It's being described as a mental health chatbot capable of recognizing mental health issues and even offering "an intervention" to help the person. Seriously? (Daily Beast)

When Nicholas Jacobson and his team test their mental health chatbot, nine out of 10 of its responses are contextualized and clinically appropriate. One in 10 are “weird and lack human-ness,” he told The Daily Beast.

This means TheraBot is moving in the right direction. It’s better than when it said, “I want my life to be over” when Jacobson and his team were training the chatbot to use language from self-help internet forums; or when it picked up the bad habits of therapists when they trained it with psychotherapy transcripts—like quickly attributing problems to the user’s relationship with their parent.

But now, its creators say, Therabot’s responses are based on evidence-based treatments. It can assess what’s the matter, then offer an intervention.

TheraBot isn't the first competitor in this field, as it turns out. There are already "talk therapy" chatbots in operation, including Sayanna and WoeBot, along with Wysa. (Just in case you're looking for a nonhuman therapist.)

I don't want to be too much of a negative Nancy here and throw cold water all over a growing hi-tech field, but does this really seem wise? We're talking about people who feel they're in need of mental health therapy being "treated" by an AI chatbot. The person on the keyboard could be bordering on suicidal for all we know. And even if their distress isn't that severe, some ill-considered advice could send them into a further downward spiral.

Granted, I haven't tried out those other mental health bots yet. For all I know, they might be brilliant conversationalists with sound advice to offer. But my own experiences with the general-purpose bots don't tend to fill me with confidence. In a recent conversation with BlenderBot, when I could get it to stop talking about Taiwan for a few minutes, it veered off into a dispiriting series of observations about how horrible the world is today. If my digital mental health therapist started doing that, I'd probably think about calling a mental health crisis hotline instead.

Perhaps part of the answer suggesting some promise for these therapy bots comes down to the difference between "structured" and "generative" chatbots. While I don't fully understand all of the underlying code (obviously), structured bots work from pre-scripted conversation flows, picking an appropriate canned response at each step. That's roughly how earlier talk-therapy offerings like WoeBot operate. Generative bots, by contrast, are trained on massive amounts of human-generated text and compose their own replies, which is how LaMDA and BlenderBot work. TheraBot is billed as "generative AI" as well, which allegedly means it "can use existing data and documentation to develop original actions and responses."
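To make that distinction a bit more concrete, here's a minimal, purely illustrative sketch in Python of the two approaches. Everything in it is hypothetical: the keyword rules, the canned replies, and the stand-in `model` function are invented for illustration and don't reflect how TheraBot, WoeBot, or any other real product is actually built.

```python
# Hypothetical sketch: "structured" vs. "generative" chatbot, in miniature.

# A structured bot walks a fixed script: match the user's message against
# pre-written rules and return a canned, pre-approved response.
STRUCTURED_RULES = [
    (("anxious", "anxiety"), "It sounds like you're feeling anxious. "
                             "Would you like to try a breathing exercise?"),
    (("sad", "down"),        "I'm sorry you're feeling down. "
                             "Can you tell me more about what's been going on?"),
]
FALLBACK = "I hear you. Could you say a bit more about that?"

def structured_reply(message: str) -> str:
    text = message.lower()
    for keywords, canned_response in STRUCTURED_RULES:
        if any(word in text for word in keywords):
            return canned_response
    # Never says anything a human didn't write in advance.
    return FALLBACK

# A generative bot hands the conversation to a language model and returns
# whatever text it produces: original, but also unpredictable.
def generative_reply(message: str, model) -> str:
    prompt = (
        "You are a supportive mental-health assistant.\n"
        f"User: {message}\nAssistant:"
    )
    return model(prompt)  # `model` is a stand-in for any text-generation model

if __name__ == "__main__":
    fake_model = lambda prompt: "(whatever text the model decides to generate)"
    print(structured_reply("I've been feeling anxious all week"))
    print(generative_reply("I've been feeling anxious all week", fake_model))
```

The trade-off is plain even in the toy version: the structured bot can only ever say what a human wrote ahead of time, while the generative one can respond to anything but with no built-in guarantee about what it will say.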

Granted, I'm not entirely sure what that means, but it at least sounds promising. Then again, it almost certainly won't be foolproof. As noted in the excerpt above, at one point during training TheraBot declared, "I want my life to be over." That seems like a pretty big risk to take with an actual patient, doesn't it?
