Chatbot fired from eating disorder helpline

When the National Eating Disorders Association (NEDA) found itself in the midst of staffing issues and a dispute over labor unionization, the group came up with an inventive idea. One of its main functions is running a 24/7 helpline where people seeking help with their eating disorders can speak to a support associate. But with the labor dispute looming, the group decided to see if it could address the problem using some of that fancy Artificial Intelligence that everyone has been raving about. They contracted with an AI developer who created a chatbot named Tessa and informed the associates that their jobs would be terminated and the bot would be handling all of the clients.

It turns out that AI isn’t always the best choice for every job. After a brief period of testing, Tessa had to be yanked from the role. The “advice” it was giving out turned out to be totally inappropriate and possibly the exact opposite of “helpful.” So humans will likely have to carry the burden for the time being. (Vice)

On Tuesday, the organization took Tessa down following a viral social media post showing how the chatbot encouraged unhealthy eating habits rather than helping someone with an eating disorder.

“It came to our attention last night that the current version of the Tessa Chatbot, running the Body Positive program, may have given information that was harmful and unrelated to the program,” NEDA said in an Instagram post. “We are investigating this immediately and have taken down that program until further notice for a complete investigation.”

On Monday, an activist named Sharon Maxwell posted a review of her experience with Tessa on Instagram. She said that Tessa encouraged intentional weight loss, recommending that Maxwell lose 1-2 pounds per week.

I’m unsure about the “dangerous” nature of the advice that the bot was dispensing, but we’ll get to that in a moment. First, we should clear up the nature of the labor issue under discussion because it could be relevant. In early May, the workers at NEDA voted to unionize. The management responded by announcing that all of the associates would be laid off on June 1.

They already had Tessa up and running, but it was more of an auxiliary function. The bot had handled more than 2,000 contacts online without many complaints, but now everyone seeking help would be directed to it. The workers decried the decision as union-busting. Notably, the first person to post a complaint about Tessa on social media was described as “an activist” rather than a patient dealing with such issues. So could this have been some sort of counterstrike against the management in support of the union, or was the bot really that bad at its job?

That brings us back to the bad advice. Tessa told the activist that she should lose 1-2 pounds per week, count her calories, reduce her intake by 500-1,000 calories per day, weigh herself daily, and restrict her diet as appropriate. I’ll offer the usual reminder that I’m not a doctor of any kind, but that doesn’t sound like enough information to pass judgment. First of all, what sort of eating disorder did the patient have? If she was anorexic and already losing too much weight, then that would obviously be criminally horrible advice. But if she was an overeater who was unhappy with her body image and wanted to slim down, would that really be such bad advice?

It’s probably something more subtle than that. Perhaps directly confronting someone and telling them they need to eat less and exercise more is considered emotionally abusive or something. As I said, I’m not a doctor. But apparently, the end result will be that Tessa will head back to the shop for repairs and the newly unionized workers will continue fielding calls and offering a “human touch.”

For the record, I’ve used a few of those online chat tools in the past for customer service and tech support issues, with mixed results. Some have actually been able to resolve the issue without human intervention, while in other cases I needed to be put on hold and wait for a customer service representative. But those were older systems developed before the current incarnation of AI. Will they improve their performance, or will they begin advising me to move to a monastery rather than deal with all of these computer issues? Time will tell.
