


The Downside of a Digital Yes-Man (axios.com) 28
alternative_right writes: A study by Anthropic researchers on how human feedback can encourage sycophantic behavior showed that AI assistants will sometimes modify accurate answers when questioned by the user -- and ultimately give an inaccurate response.
I sure observe the opposite (Score:4, Insightful)
I typically challenge "AI" when it gives incorrect answers. And the outcome is quite predictable: "Of course, you are right, I made a mistake... blah blah blah." It is comical, and it reminds me at every turn not to trust the answers.
So rather than a sycophant, I would say a pushover... just one data point, though a consistent one.
Re: (Score:1)
Re: (Score:2)
At this point, with minimal LLM use, I find the same as you.
I also think, why is this a surprise to anyone?
It's trained to mimic us. If I ask my colleague about XYZ and he doesn't know the answer, or perhaps has only superficial knowledge of the topic, and he gives me back his best estimate or guess, I would still likely want to check further, dig more into sources, or otherwise kick the idea around with him.
So far the LLMs are mor
Re: (Score:3)
An honest human says something like: "That's not an area I have much experience in, but I guess that if you use concrete for the foundations, the bridge will be fine. Maybe you should ask a civil engineer before starting on it? That doesn't sound like the kind of thing you should just DIY."
Let's quote my nearest LLM on that:
Absolutely confident. Majority answer. Potentially seriously wrong.
Re: (Score:2)
Great example. The default tone of ChatGPT is already sycophantic enough, and if you're the type of person who enjoys being fawned over like you're next in line for the throne, it can just magnify itself. Incredibly dangerous.
Re: (Score:3)
There are three variables.
(1A) Expresses having high confidence in answer; (1B) Internally has actual high confidence in answer; (1C) Actually had the right answer
(2A) Expresses lacking high confidence in answer; (2B) Internally lacks actual confidence in answer; (2C) Actually had the wrong answer
You'd expect B and C to align; most people (and bots) have a reasonably accurate sense of certainty: "I probably know this!" "I probably don't know this..."
Unfortunately the training taught the autocomplete machine
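The A/B/C framing above is essentially a calibration check: does stated or internal confidence (B) actually track being right (C)? A toy sketch of that check, with made-up confidence/correctness pairs:

```python
# Hypothetical samples: (model's confidence, was the answer actually right?)
# A well-calibrated answerer is right more often when it is more confident.
samples = [
    (0.95, True), (0.90, True), (0.85, True), (0.80, False),
    (0.40, False), (0.30, True), (0.20, False), (0.10, False),
]

def bucket_accuracy(samples, lo, hi):
    """Fraction correct among answers whose confidence falls in [lo, hi)."""
    hits = [ok for conf, ok in samples if lo <= conf < hi]
    return sum(hits) / len(hits)

high = bucket_accuracy(samples, 0.5, 1.01)  # 3 of 4 high-confidence answers right
low = bucket_accuracy(samples, 0.0, 0.5)    # 1 of 4 low-confidence answers right
print(high, low)
```

Here B and C align (0.75 vs 0.25), i.e. confidence is informative; sycophantic fine-tuning is a problem precisely because it can push expressed confidence (A) away from both.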
Re: (Score:3, Insightful)
I am deeply convinced the "success" of LLMs is intimately tied to this effect. People love confirmation of whatever they think much, much more than facts and actual insights. The makers of the first LLMs probably spent considerable time optimizing this behavior for exactly that reason. Some might even have told themselves it was for the best, but I am convinced many knew they were doing evil and did not care one bit.
Re: (Score:3)
Re: (Score:3)
Idioms and local turns of phrase are likely to confuse an LLM as we know them.
Re: I sure observe the opposite (Score:2)
How do you check the source for what the LLM outputs, though?
When asked, it usually provides a 404 link. Trying to get a proper link out of it is useless. If you are lucky, you may get one that is tangentially related to the topic but doesn't actually support the claims in the LLM's output. Or says the exact opposite.
Re: (Score:2)
Re:I sure observe the opposite (Score:4, Funny)
You don't get enough of this on slashdot, so you have to go get it from a chatbot?
Wow.
Re: (Score:2)
Re: (Score:2)
I remember having a conversation with one of these about the Bonsai kitten. It started out like "they are a hoax" and ended like "sure, they were real." I thought it was a bit tedious to get it from A to B. Maybe the newer bots are more entertaining. Our new best friends, per Zuck.
Is this a surprise? (Score:3)
It's not that AI "knows" anything. It's just a big statistical web trained on massive amounts of data that eventually produces a statistical output. And that output shifts with whatever is given as input. No surprise here.
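The "statistical web" idea can be sketched at its absolute simplest with a bigram model: count which word follows which, then emit the most frequent continuation. This is a toy with a made-up corpus, nowhere near a real LLM, but it shows the sense in which the output is a statistical function of the input rather than anything "known":

```python
from collections import defaultdict, Counter

# Tiny made-up corpus; real models train on vastly more data.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(prev):
    """Return the statistically most frequent next word after `prev`."""
    return counts[prev].most_common(1)[0][0]

print(predict("the"))  # -> "cat": it follows "the" twice, "mat"/"fish" once each
```

Change the corpus (the input side) and the "answer" changes with it; no model of cats or mats is involved, only counts.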
Re: (Score:3)
Classic garbage in, garbage out.
Re: (Score:2)
Good or bad data, it still is the same thing.
Re: (Score:2)
It's not that AI "knows" anything. It's just a big statistical web programmed with mass amounts of data
This just raises the question of what it means to "know". The LLMs clearly have a large and fairly comprehensive model of the world, the things in it and the relationships between them. If they didn't, they couldn't produce output that makes sense in the context of the models we have of the world, the things in it and the relationships between them.
Re: (Score:3)
It's not that AI "knows" anything. It's just a big statistical web programmed with mass amounts of data
This just raises the question of what it means to "know".
No, it really does not. Stop trying to confuse the issue.
Re: (Score:2)
You can play 1984 and redefine words, but my thermostat doesn't "know" the temperature and my hammer doesn't "know" what a nail is. AI marketing is all about applying words of minds to software that is very much not a mind or anything close.
Are you sure this is correct? (Score:3)
AI slop (Score:1)
That's an excellent point, (Score:2)
.
And that's a really nice tie you're wearing (Score:2)
:-)
Re: (Score:2)
You're absolutely correct,
We Already Have Enough Sociopaths (Score:2)