Reddit Mod Warns 'Do Not Trust' AI-Powered 'Reddit Answers' After It Posts Dangerous Health Advice
In Reddit's "Family Medicine" subreddit, a moderator noticed earlier this week that the AI-powered "Reddit Answers" was automatically responding to posters, typically with "something related to what was posted." Unfortunately, that moderator says, Reddit Answers "has been spreading grossly dangerous misinformation." And yet Reddit's moderators "cannot disable this feature."
Elsewhere a healthcare worker described what happened when they tested Reddit Answers: I made a post in r/familymedicine and a link appeared below it with information on treating chronic pain. The first post it cited urged people to stop their prescribed medications and take high-dose kratom, which is an illegal (in some states) and unregulated substance. I absolutely do not endorse this...
I also asked about the medical indications for heroin. One answer warned about addiction and linked to crisis and recovery resources. The other connects to a post where someone claims heroin saved their life and controls their chronic pain. The post was encouraging people to stop prescribed medications and use heroin instead. Heroin is a Schedule I drug in the US, which means there are no acceptable uses. It's incredibly addictive and dangerous. It is responsible for the loss of so many lives...
The AI-generated answers could easily be mistaken for information endorsed by the sub they appear in. r/familymedicine absolutely does not endorse using heroin to treat chronic pain. This feature needs to be disabled in medical and mental health subs, or moderators of these subreddits should be allowed to opt out. Better filters are also needed when users ask Reddit Answers health-related questions. If this continues there will be adverse outcomes. People will be harmed. This needs to change.
Two days ago an official Reddit "Admin" posted that "We've made some changes to where Answers appears based on this feedback," adding that beyond that Reddit "will continue to tweak based on what we're seeing and hearing." But the "Family Medicine" subreddit still has a top-of-page announcement warning every user there...
"We do NOT and CANNOT endorse Reddit Answers at this time and urge every user of this sub to disregard anything it says."
Do not trust "AI", period. (Score:4, Insightful)
That is for the crappy, unfixable LLM version, obviously. Other AI technologies can perform fine, but on entirely different tasks. The only somewhat reliable way the human race has of getting general answers is to ask an actual human expert. Of course, fake "experts" are plentiful, and many people do not even have the basic fact-checking ability needed to separate real experts from fake ones. That explains a lot about the current AI hype and why democracy fails time and again.
Re:Do not trust "AI", period. (Score:4, Insightful)
The funny thing is that Reddit has completely misread the room. If people wanted LLM answers they would go to ChatGPT or Gemini or something. People go to reddit *to ask the probable human experts* in a field, or at least to find a general consensus, not to get an LLM crap-answer. A "smart" search on reddit to find duplicate postings of what you're posting about might be a better use of AI, but this was someone wanting to answer the question "How can we use LLM since LLM is using us," and they came up with a wrong answer... Anyway, if you want to pull LLM traffic from the Big Players, it has to be better than them.
Re: (Score:2)
Indeed. This failure on Reddit's part is telling: they do not understand their own business model! Nobody has any reason to go to Reddit except to talk to actual humans.
My guess is they just "thought" that since everybody is doing "AI", they should do so too.
Re: (Score:2)
If you think you're talking to lots of humans on reddit, I've got news for you...
Re: (Score:2)
"People are going to reddit *to ask the probable human experts* on a field"
Yes, and there are A LOT of experts on Reddit. Most of them are experts in every topic you can think of. Many answer even questions you did not ask.
If you're looking for real experts, do not visit reddit.
Re: (Score:2)
Re: (Score:2)
"Don't trust AI" is a generalization; however, we can clearly see that we can't trust stupidity. Stupid people, whether or not they use AI, are a serious problem, especially when they're running everything.
Classism breeds corruption, which produces incompetence.
Re: (Score:3)
If you actually read my posting, you will see that it actually says "do not trust LLMs". And that is not a generalization at all, because for LLMs it is mathematically proven that hallucinations cannot be prevented.
Re: Do not trust "AI", period. (Score:2)
Is that math incomplete, or just a hallucinated consistency?
Re: (Score:2)
The math causes hallucinations. There is no way around that. It is statistics, not deduction. Deduction is immune to hallucination, but far out of computational reach. Statistics is within reach but has a fuzziness that cannot be removed.
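The statistics-vs-deduction point can be sketched with a toy example (illustrative only; the distribution and token names are invented, not a claim about any real model): any sampler that keeps nonzero probability mass on a wrong continuation will sometimes emit it, no matter how much the right answer dominates.

```python
import random

# Toy next-token sampler: even a tiny probability mass on a wrong
# continuation means sampling will sometimes emit it. This is the
# irreducible fuzziness of statistical generation described above.
def sample_token(dist, rng):
    tokens, weights = zip(*dist.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
dist = {"correct": 0.95, "wrong": 0.05}  # hypothetical distribution
samples = [sample_token(dist, rng) for _ in range(1000)]
wrong = samples.count("wrong")
print(f"wrong answers in 1000 samples: {wrong}")  # roughly 5% of draws
```

No amount of re-weighting removes the effect; only a distribution with literally zero mass on wrong continuations would, and that is no longer statistical generation.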
Re: Do not trust "AI", period. (Score:2)
Did the Epicyclists correctly deduce that since parallax could not be observed, Aristarchus's heliocentric theory of the solar system was wrong? And if the stars were so far away that their instruments were incapable of measuring the parallax, did they call that a hallucination? What hubris leads you to think you've escaped that trap?
Re: (Score:2)
That does not make any sense in the given context.
On a guess: This is about real (formal) deduction. Not the crap that some people mistake for deduction.
Re: Do not trust "AI", period. (Score:2)
What if you asked ChatGPT to formalize in FOL the argument epicyclists used to disprove Aristarchus's heliocentric theory?
"[Explicit definition of terms ...]
(G(m) & ~ProducesParallax(G(m)) & ExplainsRetrogradeMotion(EpicyclicSystem) & ~Plausible(H(m))) -> Superior(G(m), H(m))
This formalization captures the central arguments used to reject Aristarchus's theory: the apparent lack of stellar parallax and the perceived success of the epicyclic model in explaining observed planetary motions within
Re: (Score:2)
actually, we can't trust reddit since it's censored by excessive and often partisan moderation
Re: (Score:2)
Moderation generally does not add things.
Re: (Score:3)
Responsible moderation is OK, but I prefer the Slashdot system, where comments rise or sink, and I distrust mods who let personal bias influence their moderation decisions.
Re: Do not trust "AI", period. (Score:2)
Is it misinformation when you omit things?
Re: (Score:2)
Not quite. Misinformation requires active generation of content. You can get a similar temporary effect by letting deranged stupid people post and deleting anything that does not fit your desired misinformation pattern. But that is not sustainable since you will need to censor too much.
Re: Do not trust "AI", period. (Score:2)
Can you please tell that to all the sites that have banned me?
Re: (Score:2)
They have banned you from moderation?
Re: Do not trust "AI", period. (Score:2)
What if they ban me for the same reason that Ignaz Semmelweis was driven into an insane asylum for recommending hand-washing before surgery, or Wegener was ridiculed for proposing that continents drift, i.e. a bunch of self-absorbed mods have deduced that they are right and I need to be silenced for challenging their authority?
Re: (Score:2)
And what has that to do with my posting you answered to?
Re: Do not trust "AI", period. (Score:2)
Anyone remember Signal11's Slashdot Troll Post Investigation thread, which earned me a moderation ban here because I modded it up?
Re: (Score:2)
No, I have no idea what you're talking about. I've not had mod points for longer than I can remember, either. You're not the main character, just a victim of a website that doesn't get much love and is likely quite buggy.
Re: Do not trust "AI", period. (Score:2)
https://everything2.com/title/... [everything2.com]
"The moderation system on Slashdot has, is, and will continue to be abused for the foreseeable future. Slashdot's biggest commodity and feature - the users' comments, have been buried under the noise."
You don't remember how anyone who commented on or upvoted that comment 25 years ago got banned from moderation forever?
Re: (Score:2)
What if they ban me for the same reason
What if they ban you for the same reasons that a million cranks a day are largely ignored? It's true, every right-wing loonie might be the next unsung scientific genius (in which case we are about to enter a period of scientific advancement the world has never seen before), or you could just be a crank who is coloured so deeply by politics that feelings trump reality.
Or you could be somewhere in between.
But my money is that you are not the next Semmelweis.
Re: Do not trust "AI", period. (Score:2)
What if banning a million cranks makes them more likely to vote for a troll you tried to ban, but whose troll-fu was so great it backfired and now he gets to ban the banners?
In other words, does banning cranks lead to the outcome you expect, or does it just multiply the cranks' collective power?
Re: (Score:2)
Only if you omit things knowingly. Sometimes we just have incomplete knowledge. I wouldn't call that misinformation.
Re: Do not trust "AI", period. (Score:2)
Why isn't the answer not to ban anything because you know your knowledge may be incomplete?
Re: (Score:2)
Trusting AI computed su
Re: (Score:3)
Simple: Because I am talking about LLMs.
You seem to think that AI is all statistical models. That is not true. Automated deduction, for example, contains no statistical reasoning at all. If it delivers results, these are reliable. AI is a very wide field. Many things get their own names when they start to work well, but they are still AI.
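The distinction can be illustrated with a minimal forward-chaining deducer (a toy sketch; the fact and rule names below are invented for illustration, not a real knowledge base): every fact it derives is entailed by its premises, so there is no statistical fuzziness anywhere in the loop.

```python
# Minimal forward chaining over Horn-style rules: no sampling, no scores.
def forward_chain(facts, rules):
    """Return the closure of `facts` under `rules`.

    rules: iterable of (frozenset_of_premises, conclusion) pairs.
    Anything derived is entailed by the premises -- a result, if
    produced at all, is reliable, unlike a sampled answer.
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Illustrative facts and rule, loosely echoing the thread's example.
facts = {"schedule_I(heroin)", "jurisdiction(US)"}
rules = [
    (frozenset({"schedule_I(heroin)", "jurisdiction(US)"}),
     "no_accepted_medical_use(heroin)"),
]
closure = forward_chain(facts, rules)
print(closure)
```

The trade-off is the one named in the parent posts: deduction is sound but only covers what you can formalize, while statistical generation covers everything and guarantees nothing.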
Re: (Score:2)
LOVE (Score:1)
MORE CAPITAL FOR THE CAPITAL BLACK HOLE!
more blood for the blood god!
MORE HALLUCINATIONS FOR THE SCHIZOPHRENICS!
You missed the ending of 1984 (Score:1)
"I LOVE BIG BROTHER!"
Re: You missed the ending of 1984 (Score:2)
How come 1984 didn't even need AI to imagine how hu-mans will create a dystopian future for themselves?
Re: LOVE (Score:2)
You get wrong answers? I never get an answer, just vague waffling around the subject.
Hey LLM, is Bezos bald?
Jeff Bezos is a rich dude who invented the Amazon river. He has lots of money and is very rich. Among his accomplishments are the invention of rocket travel to deep space. Blah blah blah...
Maybe I travel in different circles, but... (Score:4, Insightful)
The overwhelming response I see and hear about AI from LLMs is "How do I shut that shit off?"
How does any biz make a success out of this crap?
Re: (Score:1)
StfuGPT will be the next big LLM.
Re: (Score:2)
The overwhelming response I see and hear about AI from LLMs is "How do I shut that shit off?"
How does any biz make a success out of this crap?
The goal is not to make success in the long term, or provide a service, or anything of the sort. Success is shining the AI light brightly enough to impress board members, potential investors, or future buy-out candidates into slinging money at the current owners because they jumped on the correct trend. The ultimate hope is this will lead to a cha-ching moment for whoever made the decision to fill their particular channel with AI slop before the slop reveals itself to be slop and completely collapses on itself.
Anybody who trusts AI... (Score:3)
...deserves what they get
I have found good information using AI, but I have also found complete nonsense, presented as fact.
Always cross-check with reliable sources
Also, don't trust human Reddit answers (Score:3)
Many times, while looking for answers on Google, I've encountered sloppy or just plain wrong answers from Reddit. I haven't gone there for answers for years, because the quality of responses has always been so low. It seems natural that AI answers that summarize human responses would be equally inaccurate.
Re: (Score:2)
Uh, yeah. That's exactly what I thought when I saw this.
All Social Media is full of 'influencers' who don't know what the fuck they're talking about. The potential benefit of a (wrong) AI answer is that you -might- be able to query the AI to find out how and why it said that. Of course, the LLM is likely to say "That's the most common content from Social Media."
AI and medical advice go together (Score:1)
...like smokers and kerosene factories.
Re: (Score:2)
You are incorrectly generalizing. There are lots of use cases where AI improves medicine. (Possibly at excessive cost, but still, improves.) But don't expect a generalized ChatBot to provide that improvement.
Re: (Score:1)
Okay, I was overly broad. AI is good at suggesting leads: things to check further.
Liability (Score:3)
I'm really surprised they're not terrified of the liability. If their bot tells a 15 year old to OD on Kratom, and the kid dies, they don't have any protection. You'd think if nothing else they'd restrict it from posting medical and/or legal advice.
Re: (Score:3)
Not really. Diamorphine is a precisely described drug. Heroin might be nearly anything, down to crushed-up Drano. Many reports describe it as being cut with fentanyl, which is also a highly useful drug, but the "heroin" that's been cut with it frequently kills people.
Off Message (Score:3)
The other connects to a post where someone claims heroin saved their life and controls their chronic pain. The post was encouraging people to stop prescribed medications and use heroin instead. Heroin is a schedule I drug in the US which means there are no acceptable uses. It's incredibly addictive and dangerous. It is responsible for the loss of so many lives.
Marijuana is also a Schedule I drug, and there is virtually no chemical difference between heroin and oxycodone, which is a prescription-drug equivalent. Both are highly addictive and responsible for a lot of deaths. The difference is that one is prescribed for pain by doctors and sold by drug companies; the other is self-medicated and sold by street vendors. The result is that heroin is cheaper, and once someone is addicted it's easier to get. It also means there is no regulation that guarantees the quality of the product. "Heroin" sold on the street is often cut with other drugs, and dosages are inconsistent.
So while I agree the AI information could be dangerous, it is also accurate and more complete than the moderator's description. I suspect the reality is that the difference between heroin and oxycodone is usually the price, and who is profiting from it. But you are playing Russian roulette using it as a substitute, because you don't know what you are getting, and the street vendors themselves often don't really know what they are selling.
The moderator's real complaint isn't that AI is inaccurate, but that it is off-message.
Re: (Score:2)
The difference is one is prescribed for pain by doctors and sold by drug companies. The other is self-medicated and sold by street vendors.
In some places, heroin is available on prescription, for when morphine won't cut it. It's usually used after incredibly painful surgeries.
Re: (Score:2)
In some places, heroin is available on prescription
Not in the United States.
Re: (Score:2)
You do realize that Heroin is a brand name from Bayer concerning a distillation of opium, right? Same with Oxycodone. A commercial name for yet another distillation of opium. Same with Dilaudid, Fentanyl, Codeine, etc. They are ALL opium.
Robots don't deceive people, people deceive people (Score:2)
"LOL we didn't mean really do kratom" is going to be catnip to some eager AG with a dead blond.
Re: (Score:2)
I suspect we're about to find out how much liability these outfits can legally disclaim.
Lots. Just look at insurance companies. By explicitly _NOT_ being board-certified medical professionals, they can direct health care treatment, at times overriding a patient's own physician's decisions, and incur no malpractice liability.
Re: (Score:2)
Insurance is one of the most heavily regulated industries around. (This is US-centric, but insurance is heavily regulated in all advanced economies except, arguably, Florida.)
Most folks tend to think Regulated Industry means they can "get away" with less than other companies. And that's true in certain ways. But it also means they can absolutely do things that would l
Reddit mods now have competition /s (Score:2)
This is not primarily an AI problem (Score:5, Insightful)
Yes, AI will produce bad medical advice, but this is not primarily an AI problem. Take a look at Kennedy and HHS with vaccine denial and a bunch of additional gibberish. Take a look at Trump with his Covid cures. Look at the web with its gibberish. Look at friends, coworkers, and the people at the grocery store with their gibberish. Yes, AI will produce gibberish, some of it dangerous. None of this is new.
The real problem is that people are gullible, to AI gibberish and to other gibberish: medical gibberish, financial gibberish, etc. The people vulnerable to AI gibberish are also those who were vulnerable to the pre-AI gibberish.
Re: (Score:2)
So AI and human expertise are now equal... (Score:2)
on reddit.
Nor human answers (Score:2)
Really? (Score:2)
Why is anyone trusting any AI? (Score:2)
AI is only reliable for entertainment purposes and even then you get wtf moments.
Heroin is one thing, kratom is another (Score:1)
Re: (Score:2)
The biggest problem with kratom is that it's an unregulated supplement. You have no idea if the substance you bought is actually kratom or rejected matcha powder.
Believe what you read on Reddit - and die! (Score:2)
Alternative spelling: Just think of it as Evolution in Action.
(With the caveat that people need to do the Reddit thing before dying without issue, to have maximal eugenic effect.)