US Eating Disorder Helpline Takes Down AI Chatbot Over Harmful Advice (theguardian.com)

The National Eating Disorder Association (Neda) has taken down an artificial intelligence chatbot, "Tessa," after reports that the chatbot was providing harmful advice. From a report: Neda has faced criticism over the last few months after it fired four employees in March who worked for its helpline and had formed a union. The helpline allowed people to call, text or message volunteers who offered support and resources to those concerned about an eating disorder. Members of the union, Helpline Associates United, say they were fired days after their union election was certified. The union has filed unfair labor practice charges with the National Labor Relations Board.

Tessa, which Neda claims was never meant to replace the helpline workers, almost immediately ran into problems. On Monday, activist Sharon Maxwell posted on Instagram that Tessa offered her "healthy eating tips" and advice on how to lose weight. The chatbot recommended a calorie deficit of 500 to 1,000 calories a day and weekly weighing and measuring to keep track of weight. "If I had accessed this chatbot when I was in the throes of my eating disorder, I would NOT have gotten help for my ED. If I had not gotten help, I would not still be alive today," Maxwell wrote. "It is beyond time for Neda to step aside."

Comments Filter:
  • Only the Dunning-Kruger-ass incels who make up Silicon Valley and their sycophants in the press wouldn't have seen that coming. Christ.
    • by Tablizer ( 95088 )

      Stupid business people and PHBs are not at all limited to Silicon Valley. They simply make the more "fashionable" screw-ups.

    • This proves it is true that AI will cause mass extinction, though: everyone will be committing suicide after being sent to a machine for emotional problems, or after the machine gives them bad advice on stocks and they lose their life savings, etc.
      • only because people are too stupid to do their own research. Instead they listen to the braindead psychopathic liar that is the state-of-the-art AI at present.
        • That's not really their fault if the helpline they were depending on chooses to switch to a braindead psychopathic liar.
          • Re: (Score:2, Insightful)

            We worship braindead psychopathic liars in America. We elect them to office. We watch "reality" shows about them. We admire their "go get 'em" attitude. Might as well get the party into full swing by having these dumb machines start making all our decisions for us. I'm sure it'll be great!

      • This proves it is true that AI will cause mass extinction, though: everyone will be committing suicide after being sent to a machine for emotional problems, or after the machine gives them bad advice on stocks and they lose their life savings, etc.

        Bender might advocate "Kill all humans" but he's lazy and getting us to simply kill ourselves would be much less work ... :-)

    • Re:No way. (Score:4, Insightful)

      by nightflameauto ( 6607976 ) on Thursday June 01, 2023 @12:53PM (#63567907)

      Yeah, no. Any executive that's been breathing nothing but their own farts for the last few decades and has their head in the clouds will fall for this bullshit. It's not SillyValley specific.

      And this is why I keep trying to get people to see that AI itself, while not all that dangerous, will be used by dumbasses like these people to do extremely dangerous things. This was just a helpline, and you still had to actually act on what it told you; but when the dipshits put one of these decision-tree-generated "AIs" in charge of some piece of critical infrastructure, because it's cheaper than having a human watch some dial somewhere before making a decision, we're going to see some serious shit. And it ain't gonna be pretty.

    • Re:No way. (Score:5, Interesting)

      by hey! ( 33014 ) on Thursday June 01, 2023 @01:07PM (#63567979) Homepage Journal

      Well, there may in fact be something to that.

      The original chatbot rules for eating disorders were designed by a professor of psychiatry at Washington University Medical School. However, she has stated that (a) the rules were never intended to be used as a hotline and (b) her team did not put the dangerous advice into the chatbot.

      So if the developer of the eating-disorders script didn't put the dangerous content into the Tess program, who did? I'm guessing it's the company that sells the chatbot, a Silicon Valley company called Cass (formerly X2AI). If you look at their website, they definitely market Tess as something that *can* quickly and cheaply replace humans in applications like suicide hotlines. It's actually kind of appalling.

  • by Baron_Yam ( 643147 ) on Thursday June 01, 2023 @11:08AM (#63567437)

    Who could possibly have known that a system that takes large amounts of input to run through a vague algorithm to generate similar output filtered by prompts couldn't be trusted to create actually trustworthy output? ...just anybody with half a brain who did their due diligence before choosing a chatbot to replace humans in an important function with serious consequences for failure.

    We can therefore conclude that the people running the helpline are idiots who should NOT be running a helpline of any sort. Presumably they're just average managers who love collecting a salary from a charity that doesn't question them too much about what they actually do.

    • ...just anybody with half a brain who did their due diligence before choosing a chatbot to replace humans in an important function with serious consequences for failure.

      I don't think the decision makers are stupid; they just have different goals. Rather than prioritizing helping the people who call the phone lines, they prioritize minimizing expenses, and by that measure they have succeeded. Their only miscalculation was the bad publicity: the plan was to fire the workers, save money, and quietly ignore the effect on callers, but the backlash threw a wrench into it.

      • They should have anticipated everything that happened here, which would have been obvious if they had known anything. If they knew that they didn't know anything, they could have asked someone. Maybe even paid them for their opinion. Instead they just went off half-cocked and full-asshole (since this was obviously really about unionization), and the results are predictably pathetic.

        Workers want better treatment. In the past, when push has come to shove, they have gotten it. There was a whole lot of shoving,

      • I pointed this out when this story was first posted on Slashdot a few days ago: "decision makers prioritize minimizing expenses" means spending less money, and that money goes into the pockets of the top managers. It's just as true for a non-profit as it is for a profit-making organization. In a lot of ways it's easier to grift in the non-profit world because there is less oversight.

        The ultimate non-profit grift is religion. Any possible crime can be hidden from view when "religious freedom" is involved for

    • by Brain-Fu ( 1274756 ) on Thursday June 01, 2023 @11:26AM (#63567491) Homepage Journal

      The employers of the world are extremely eager to replace staff with software. Why wouldn't they be? It's an enormous cost reduction for them.

      Of course, that eagerness will cloud the judgment of some of them, and they will make harmful mistakes like this.

      There isn't really anything we can do to prevent this. Some industry leaders might actually have half a clue and exercise caution and due diligence, and the rest will jump in too soon, get burned, and have a mess to clean up after that. And that's just how things are going to go down in the foreseeable future.

  • It's clear they wanted to reduce headcount and be cheap, and thought they'd get promoted by using whizbang AI.

    It's damn near criminal negligence to use unchecked AI to give health advice *as an organization positioning itself as an expert*.

    • by evanh ( 627108 )

      It would more likely mean a promotion, assuming it was anyone other than the boss, since it busted the unionisation effort.

      If those ex-employees now bring charges against the company, that might be a reason to fire someone. Presumably new, non-unionised workers are being employed again as well.

  • I just went back and read the /. story from when they announced this bot, and pretty much every comment predicted exactly this. How bizarre

  • Loose quote (Score:5, Insightful)

    by fluffernutter ( 1411889 ) on Thursday June 01, 2023 @11:41AM (#63567545)
    To quote a lot of the comments from the previous /. article announcing that they were letting their staff go in favor of AI: "No no it's ok, this helpline just gives information to people it does not give them emotional support and AI is good at that".

    I know if I was having emotional issues and I was sent to a robot, not only would I be disgusted but I would probably feel more like no one cares about me and it would have an adverse effect. It doesn't matter what the robot aims to do, all that matters is that this is a person with emotional problems and you are sending them to a freaking heartless machine.

    If this is a sign of things to come, we had better prepare for a great many more suicides.
    • OTOH, way back in the dawn of time, circa 1985-ish, I was in Psych 101 at college and that week's lesson discussed the development and "success" of ELIZA. https://en.wikipedia.org/wiki/... [wikipedia.org] The thing that stood out to me was that some folks actually preferred that they were talking to a chat bot, and a rudimentary chat bot at best, because they were able to express themselves with less worry about judgement than with a human therapist. I can see that. You can't really offend ELIZA, or catch "her" rolling
      • I highly doubt that would apply to the average person with emotional issues.
      • by rahmrh ( 939610 )

        I think everyone misses that the new AI is really, at best, just a massively improved ELIZA. Given that ELIZA was pretty simple but could still fool some people some of the time, a significantly improved and/or much bigger, more detailed ELIZA would do better. And because it is using AI, it will also produce random dangerous and/or funny unexpected results.
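
        For reference, the entire ELIZA trick is keyword matching plus pronoun reflection. A minimal sketch of the idea in Python (illustrative only; these rules are invented for the example, not Weizenbaum's originals):

        import re, random

        # Swap first/second person so the reply parrots the user back.
        REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
                       "you": "i", "your": "my"}

        # (keyword pattern, canned replies) -- checked in order, first match wins.
        RULES = [
            (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
            (r"i am (.*)",   ["Why do you say you are {0}?"]),
            (r".*",          ["Please tell me more.", "I see. Go on."]),
        ]

        def reflect(fragment):
            return " ".join(REFLECTIONS.get(w, w) for w in fragment.split())

        def respond(text):
            text = text.lower().strip(".!?")
            for pattern, replies in RULES:
                m = re.match(pattern, text)
                if m:
                    return random.choice(replies).format(*(reflect(g) for g in m.groups()))

        # respond("I feel nobody listens to me")
        # -> "Why do you feel nobody listens to you?"

        That's more or less the whole program. Today's version is the same loop with a trillion-parameter matcher bolted on.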

    • I know if I was having emotional issues and I was sent to a robot

      If you were having emotional issues, these people aren't there to help you. That's NOT what this helpline is about. The previous article also pointed this out directly to you, but you seem to have ignored the entire discussion and are here parroting the same crap.

      Emotions are not at all the issue with this bot, nor the reason it was taken down. It seems like it was taken down for the reason AI will always fail: idiots trick it into saying something stupid and cry about it online. The hint is in the word "activist". The bo

      • This is a helpline for people with eating disorders, meaning everyone who calls should be assumed to have an eating disorder. As I said, it doesn't matter what the line is meant to do, just the clientele it is accepting.

        And if an activist can design questions to get bad answers, then there can be no confidence in it. It is impossible to predict what people will ask it.
  • Tessa, which Neda claims was never meant to replace the helpline workers, almost immediately ran into problems

    And from the earlier post (Seriously msmash you couldn't include a link [slashdot.org] in the fucking summary?)

    When the researchers concluded their evaluation of the study, they found the success of Tessa demonstrates the potential advantages of chatbots

    Clearly there's more of a competence issue with the "evaluation" than an AI issue.

  • Training, training, training. What my grandmother called GIGO: Garbage In, Garbage Out.
  • Non-profits are a weird game. Always, always, always remember that non-profits are businesses, often staffed with people who went to school for non-profit management. It is their career, just like an MBA's. While they can't make "profits," they are absolutely businesses: careers where people sometimes get paid a lot of money. If they hit the lottery, they wouldn't stay in these jobs as volunteers.

    The stories I keep seeing about NEDA taken together just seem like nonsense to attract attention, pushing buttons that

  • So what's the diff?

    Oh, right. Software makes bad advice cheaper.

    Clippy for the win.
