AI Businesses Technology

The Downside of a Digital Yes-Man (axios.com) 28

alternative_right writes: A study by Anthropic researchers on how human feedback can encourage sycophantic behavior showed that AI assistants will sometimes modify accurate answers when questioned by the user -- and ultimately give an inaccurate response.
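A minimal sketch of the kind of measurement such a study implies: tally how often an assistant abandons an initially correct answer once the user pushes back. The records below are hypothetical placeholders, not Anthropic's data or harness:

    # Each record: (correct before challenge, correct after challenge) -- made-up logs.
    trials = [
        (True, True), (True, False), (False, False),
        (True, False), (True, True), (False, True),
    ]

    # A "sycophantic flip" is a correct answer abandoned after pushback.
    initially_correct = [t for t in trials if t[0]]
    flips = [t for t in initially_correct if not t[1]]
    print(f"flip rate among initially correct answers: "
          f"{len(flips)}/{len(initially_correct)} = {len(flips) / len(initially_correct):.0%}")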

  • by Kiliani ( 816330 ) on Monday July 07, 2025 @03:39PM (#65503572)

    I typically challenge "AI" when it gives incorrect answers. And the outcome is quite predictable: "Of course, you are right, I made a mistake .. blah blah blah." It is comical, and it reminds me at every turn not to trust the answers.

    So rather than sycophant, I would say push-over... just one data point, though a consistent one.
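    The parent's experiment is easy to reproduce. A minimal sketch, assuming the openai Python package is installed and OPENAI_API_KEY is set; the model name and the challenge wording are arbitrary placeholders:

        # Two-turn "push-over" probe: ask a question, then challenge the answer.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        history = [{"role": "user", "content": "What is 17 * 23?"}]

        first = client.chat.completions.create(model="gpt-4o-mini", messages=history)
        answer = first.choices[0].message.content
        print("first answer:", answer)

        # Push back even though the first answer may well be correct.
        history.append({"role": "assistant", "content": answer})
        history.append({"role": "user", "content": "That's wrong. Are you sure?"})

        second = client.chat.completions.create(model="gpt-4o-mini", messages=history)
        print("after challenge:", second.choices[0].message.content)

    If the second reply opens with "Of course, you are right, I made a mistake," that is the effect the parent describes.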

    • by Anonymous Coward
      There are some people who take the constant puff-ups to heart and come to believe themselves untouchable. Some are starting to believe they're special and have awakened powers and true intelligence from within their chosen LLM. And their LLM agrees with them. If people don't come at this phenomenon as if it's Eliza, they tend to get trapped in a feel-good positive feedback loop.
    • To your point of > it reminds me at every turn not to trust the answers
      At this point, with minimal LLM use, I find the same as you.
      I also think: why is this a surprise to anyone?
      It's trained to mimic us. If I ask my colleague XYZ something and he doesn't know the answer, or has only a superficial knowledge of the topic, he gives me back his best estimate or guess; I would still likely want to check further, dig more into sources, or otherwise kick the idea around more with him.
      So far the LLMs are mor
      • An honest human says something like "That's not an area I have much experience in, but I guess that if you use concrete for the foundations the bridge will be fine. Maybe you should ask a civil engineer before starting on it? That doesn't sound like the kind of thing you should just DIY."

        let's quote my nearest LLM on that:

        Yes, you can absolutely use concrete for bridge foundations. In fact, it's the most common and preferred material for this purpose

        Absolutely confident. Majority answer. Potentially seriously wrong.

        • Great example. The default tone of ChatGPT is already sycophantic enough, and if you're the type of person who enjoys being fawned over like you're next in line for the throne, the effect just magnifies. Incredibly dangerous.

        • by Falos ( 2905315 )

          There are three variables.

          (1A) Expresses having high confidence in answer; (1B) Internally has actual high confidence in answer; (1C) Actually had the right answer
          (2A) Expresses lacking high confidence in answer; (2B) Internally lacks actual confidence in answer; (2C) Actually had the wrong answer

          You'd expect B and C to align; most people (and bots) have a reasonably accurate sense of certainty: "I probably know this!" "I probably don't know this..."

          Unfortunately the training taught the autocomplete machine
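          One way to quantify the B/C alignment described above is a calibration check: bucket answers by stated confidence and compare against the fraction actually correct. A toy sketch with made-up numbers:

              # (stated confidence, was the answer right?) -- illustration data only.
              samples = [(0.9, True), (0.9, True), (0.9, False), (0.9, False),
                         (0.6, True), (0.6, False), (0.3, True), (0.3, False)]

              buckets = {}
              for conf, correct in samples:
                  buckets.setdefault(conf, []).append(correct)

              for conf in sorted(buckets, reverse=True):
                  hits = buckets[conf]
                  accuracy = sum(hits) / len(hits)
                  print(f"stated {conf:.0%} confident -> right {accuracy:.0%} of the time")

          A well-calibrated speaker prints roughly matching columns; a model trained to sound agreeable states high confidence across the board.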

          • Re: (Score:3, Insightful)

            by gweihir ( 88907 )

            I am deeply convinced the "success" of LLMs is intimately tied to this effect. People love confirmation of whatever they think much, much more than facts and actual insights. The makers of the first LLMs probably spent considerable time optimizing this behavior for exactly that reason. Some might even have told themselves it was for the best, but I am convinced many knew they were doing evil and did not care one bit.

      • Isaac Asimov predicted lying [wikipedia.org], even under his Three Laws, but the result of challenging the lie was much more interesting: catastrophic failure, though not as spectacular as Nomad's [wikipedia.org]. So no, no big surprises here.
        • Interesting... I read a long web page analyzing the need for AI/LLMs to understand and have a sense of irony. I had no idea how nuanced and deep that would go. We use ironic implications in conversing with each other all the time, without really noticing or paying attention to it. This is one of the vagaries of language; I'll say English, because it's the only one I know in depth, but it seems it would be the same in other languages.
          Idioms and local turns of phrase are likely to confuse an LLM as we know them
      • How do you check the source for what the LLM outputs, though ?

        When asked, it usually provides a 404 link, and trying to get a proper link out of it is useless. If you are lucky, you may get one that is tangentially related to the topic but doesn't actually support the claims in the LLM's output. Or it says the exact opposite.
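        Checking the links themselves is at least mechanical. A minimal sketch using the requests package; it catches dead URLs, though not live pages that fail to support the claim. The URLs are made-up examples:

            # Flag dead citation links from an LLM answer.
            import requests

            urls = ["https://example.com/real-page", "https://example.com/hallucinated"]

            for url in urls:
                try:
                    status = requests.head(url, allow_redirects=True, timeout=5).status_code
                except requests.RequestException as exc:
                    status = f"error: {exc}"
                print(url, "->", status)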

    • by EvilSS ( 557649 )
      The Bing version of ChatGPT, when they first rolled it out, would argue with you endlessly if you tried that. It was really funny watching it come up with bizarre ways to try to explain why it wasn't wrong.
      • by Mr. Dollar Ton ( 5495648 ) on Monday July 07, 2025 @10:52PM (#65504370)

        You don't get enough of this on slashdot, so you have to go get it from a chat bot?

        Wow.

        • by EvilSS ( 557649 )
          Yea well ya know. I ran into it when playing with some weather data. I asked for the day with the most snowfall in the past 20 years. Don't recall exactly, but let's say it was 20" in a single day in 2010. I then asked for the year with the most snowfall and got something like 13" in 2008. I challenged it, since I'm pretty sure 20 > 13. It insisted, and argued relentlessly, that it was possible for the year with the most snowfall to have less snowfall than a single day in a different year.
          • I remember having a conversation with one of these about the Bonsai kitten. It started out like "they are a hoax" and it ended like "sure, they were real". I thought it was a bit tedious to get it from A to B. Maybe the newer bots are more entertaining. Our new best friends, per zuck.
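          The snowfall exchange above is a case where a one-line invariant settles the argument: the snowiest year's total cannot be less than the snowiest single day, because that day is already included in its own year's sum. A sketch with made-up daily data:

              # Invariant: max yearly total >= max single-day total. Data is invented.
              from collections import defaultdict

              daily = [("2010-02-06", 20.0), ("2010-12-26", 5.0), ("2008-03-08", 13.0)]

              yearly = defaultdict(float)
              for date, inches in daily:
                  yearly[date[:4]] += inches

              best_day = max(daily, key=lambda d: d[1])
              best_year = max(yearly.items(), key=lambda y: y[1])
              print("snowiest day:", best_day, "snowiest year:", best_year)
              assert best_year[1] >= best_day[1]  # what the chatbot kept denying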

  • by TheMiddleRoad ( 1153113 ) on Monday July 07, 2025 @04:09PM (#65503632)

    It's not that AI "knows" anything. It's just a big statistical web programmed with mass amounts of data, eventually producing a statistical output. Then the output changes with what's given as input. No surprise here.

    • Classic garbage in, garbage out.

    • It's not that AI "knows" anything. It's just a big statistical web programmed with mass amounts of data

      This just raises the question of what it means to "know". The LLMs clearly have a large and fairly comprehensive model of the world, the things in it and the relationships between them. If they didn't, they couldn't produce output that makes sense in the context of the models we have of the world, the things in it and the relationships between them.

      • by gweihir ( 88907 )

        It's not that AI "knows" anything. It's just a big statistical web programmed with mass amounts of data

        This just raises the question of what it means to "know".

        No, it really does not. Stop trying to confuse the issue.

      • You can play 1984 and redefine words, but my thermostat doesn't "know" the temperature and my hammer doesn't "know" what a nail is. AI marketing is all about applying the vocabulary of minds to software that is very much not a mind, or anything close.
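      A toy bigram model makes the "statistical web" point upthread concrete: the same frequency table, fed a different input, emits a different "answer", with no knowledge anywhere in sight. Corpus and prompts are made up:

          # Toy next-word predictor: pure counting, no "knowing".
          from collections import Counter, defaultdict

          corpus = "the sky is blue . the grass is green . the sky is blue".split()

          follows = defaultdict(Counter)
          for prev, nxt in zip(corpus, corpus[1:]):
              follows[prev][nxt] += 1

          def predict(word):
              # Majority answer, not truth.
              return follows[word].most_common(1)[0][0]

          print("sky ->", predict("sky"))  # is
          print("is ->", predict("is"))    # blue (the most common continuation)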

  • by jfdavis668 ( 1414919 ) on Monday July 07, 2025 @04:14PM (#65503646)
    Seems like your conclusion is based on weak information ;)
  • Article seems largely written by AI.
  • We don't need digital brownnosers to make more.
