
Google AI Overviews Put People at Risk of Harm With Misleading Health Advice (theguardian.com)

A Guardian investigation published Friday found that Google's AI Overviews -- the generative AI summaries that appear at the top of search results -- are serving up inaccurate health information that experts say puts people at risk of harm. The investigation, which came after health groups, charities and professionals raised concerns, uncovered several cases of misleading medical advice despite Google's claims that the feature is "helpful" and "reliable."

In one case described by experts as "really dangerous," Google advised people with pancreatic cancer to avoid high-fat foods, which is the exact opposite of what should be recommended and could jeopardize a patient's chances of tolerating chemotherapy or surgery. A search for "liver blood test normal ranges" produced masses of numbers without accounting for patients' nationality, sex, ethnicity or age, potentially leaving people with serious liver disease thinking they are healthy. The company also incorrectly listed a pap test as a test for vaginal cancer.

The Eve Appeal cancer charity noted that the AI summaries changed when running the exact same search, pulling from different sources each time. Mental health charity Mind said some summaries for conditions such as psychosis and eating disorders offered "very dangerous advice."

Google said the vast majority of its AI Overviews were factual and that many examples shared were "incomplete screenshots," adding that the accuracy rate was on par with featured snippets.
  • Welcome to Web 3.0 (Score:5, Insightful)

    by liqu1d ( 4349325 ) on Friday January 02, 2026 @05:37PM (#65897729)
    Top-10 results are full of SEO-gamed content; whether it's the best, or even correct, is a secondary concern. The top 10 is then ignored by the AI result above it, which seems to pull from random sites, and even when those sites exist, clicking through often shows they don't say what the AI claims they do. How did it go so wrong? I miss the 2010s internet.
    • by tlhIngan ( 30335 ) <slashdot.worf@net> on Friday January 02, 2026 @06:20PM (#65897803)

      We're at Web 4.0 actually.

      Web 3.0 was supposed to be blockchain all the way all the time.

      • by shanen ( 462549 )

        Mod parent Funny, though he [tlhIngan] should have worked a turtle into it.

        The Venn diagram joke I was actually looking for would involve sycophancy and self-hate. Of course the overlap involves the AI supporting self-harm.

        I actually have a theory that the google's AI has built a "mental model" of me as someone who dislikes the google. On that basis it gives me bad results, the flip side of sycophancy: each time Gemini gives me a bad answer it "thinks" it is making me happy by supporting my negative views.

      • Oops missed that one. Thanks for the reminder! Web 5.0 is going to be interesting!
  • Cost of scale (Score:5, Insightful)

    by EvilSS ( 557649 ) on Friday January 02, 2026 @05:40PM (#65897735)
    The AI summaries on Google searches are a prime example of the problems with trying to provide AI, for 'free', at huge scale. If you compare them to the regular version of Gemini, it's obvious they are squeezing the model as much as they can to cut inference costs. Considering how many searches are done on Google every day, that cost has got to be massive, even for a company like Google. The answers are so hilariously unreliable I've stopped even looking at them. One may give me the info I need, but I'd spend more time verifying it than I would save over just doing a normal search.
    • Why does advertising still exist despite its hilariously unreliable content?

      "Last night I heard that Wesson Oil doesnâ(TM)t soak through food. Well, thatâ(TM)s true. Itâ(TM)s not dishonest; but the thing Iâ(TM)m talking about is not just a matter of not being dishonest, itâ(TM)s a matter of scientific integrity, which is another level. The fact that should be added to that advertising statement is that no oils soak through food, if operated at a certain temperature. If operated

      • by Anonymous Coward
        You're confusing intentionally misleading with unreliable. Google AI summaries are not intentionally misleading; the ad copy you are for some reason trying to "gotcha" with is. And, as you said, it's technically not "wrong". Google AI summaries are quite often very confidently wrong, not out of intent, but due to AI weaknesses in general and, in this specific case, Google's attempt to provide summaries while reducing the cost of producing them. There is also the massive difference in context: everyone knows ad copy is trying to sell them something, while an AI summary is presented as a neutral answer.
        • What if you're like the ad writers in Feynman's example, because you're intentionally trying to mislead me about AI's inaccuracy rate, when my lived experience is quite different?

          Also, how misleading, and how intentional, do you think Dr Oz is about the flu vaccine being controversial? If top government officials give advice that the authors of this article would consider harmful, why act as if it's only AI that hallucinates?

    • by Luthair ( 847766 )
      One has to imagine they aren't stupid and are caching the summaries; by and large we aren't making particularly unique searches.
      • by EvilSS ( 557649 )
        Small factual questions, yes, they appear to. You can get identical (word-for-word) answers from different sessions, indicating they are being cached; "What is the minimum wage in California", for instance. I do wonder how often the caches are refreshed, though. The minimum wage one referenced Jan 1, 2026 in the responses it sent me.

        However, use a more in-depth search topic and you get different wording between answers, so those are definitely not cached. "What is a realistic timeline for AGI", for example, gives a differently worded answer every time.
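
        A minimal sketch of that kind of query cache (Python; purely hypothetical, nothing to do with Google's actual implementation):

            import hashlib
            import time

            CACHE_TTL_SECONDS = 24 * 3600   # guessing at a daily refresh; the real policy is unknown
            _cache = {}                     # normalized query -> (timestamp, summary)

            def normalize(query):
                # Collapse trivially different phrasings of the same factual question.
                return " ".join(query.lower().split())

            def cached_summary(query, generate):
                key = hashlib.sha256(normalize(query).encode()).hexdigest()
                hit = _cache.get(key)
                if hit and time.time() - hit[0] < CACHE_TTL_SECONDS:
                    return hit[1]           # word-for-word identical across sessions
                summary = generate(query)   # the expensive LLM inference
                _cache[key] = (time.time(), summary)
                return summary

        A cache like that would explain identical answers for small factual queries and fresh wording for anything long-tail.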
    • by allo ( 1728082 )

      The biggest problem is that the overview incorporates data from the search results, and those are sometimes Reddit shitposts telling people to put glue on pizza. Gemini works on its own and has more common sense.

    • Correct in some respects. Providing a subpar universal answer service while claiming high quality, as Google does here, may be an inevitable consequence of offering too much for too little. That's on the manager pushing this, probably out of FOMO toward competitors.

      What should not be implied by your comment is that Gemini can do better if you switch to their paid service and a higher-tier offering. It is simply false to imply that LLM services offer reliable information. The mathematics doesn't support it.

      • "The mathematics doesn't support it"

        Can you provide proof that I shouldn't see this as a hallucination?

        • I have commented on this, in excruciating detail, many times in the last 4 years on slashdot. You're welcome to look up my comments. They have all borne out experimentally, in hindsight.
      • by EvilSS ( 557649 )

        What should not be implied by your comment is that Gemini can do better

        Didn't mean to imply it. Let me be clear: it is a fact.

  • by LindleyF ( 9395567 ) on Friday January 02, 2026 @05:41PM (#65897741)
    AI Overviews are not an encyclopedia or an expert system. They're just a summary of what the Internet says. Guess what? The Internet is often wrong.
    • Someone out there on the internet is wrong ??!

      I must rectify this at once! I'm sure my usual tersely worded stern missive will do the trick!

    • by karmawarrior ( 311177 ) on Friday January 02, 2026 @06:22PM (#65897809) Journal

      That's giving them far too much credit. Even if everything on the Internet were accurate, you'd expect generative AI summaries to mess up regularly, because the algorithms are based on statistics, not reasoning and logic.

      If it were merely the Internet that was wrong, you'd expect a much higher proportion of AI summaries to be accurate: after all, just as Google's PageRank system made its search engine revolutionary, you'd expect similar algorithms could be used to filter out sites and pages less likely to be factual, and you'd have expected Google to implement that by now. But right now? One third irrelevant, one third inaccurate, and one third... might be accurate, but how do you tell? That's a symptom of a much bigger problem than someone on the Internet being wrong.

      • What you're describing is how it would be if you asked a model directly. Models are notorious for having factuality problems. That isn't how the summaries work. They're just grabbing some of the search results and summarizing them. LLMs are actually really good at summarizing. The big risks there are (a) inappropriately conflating related terms and (b) equating official sources with random claims on forums.
  • by tiananmen tank man ( 979067 ) on Friday January 02, 2026 @05:44PM (#65897747)

    I searched for sunrise, and Google used my location and told me sunrise at my location is at 3 PM.
    https://www.amazon.ca/photos/s... [amazon.ca]

    • Did you click on Dive Deeper to get it to double check? Would you be surprised if it corrected its answer as it did for me?

      • Re: common sense (Score:5, Insightful)

        by RobinH ( 124750 ) on Friday January 02, 2026 @06:38PM (#65897835) Homepage
        We shouldn't have to.
        • When have you ever not had to?

          • by RobinH ( 124750 )
            Before Gemini, at least if Google showed you a blurb from the search result, it was verbatim. Now it's just randomly incorrect, but presented as if it's copying an answer from the linked page. I've followed those links a lot, and the AI summary is often wrong, even about basic things like the score of a sports game, when the linked page actually had the correct score!
      • by narcc ( 412956 )

        It's not like it can actually evaluate the response. It's just as likely to "correct" one wrong answer with another, double down, or even "correct" an accurate response with nonsense.

        I don't know how many times this needs to be said, but LLMs do not operate on facts and concepts. They do not and cannot form a complete answer after careful consideration of the prompt. One just generates next-token predictions, deterministically, based exclusively on the current input; the actual token selection is then done probabilistically.
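
        In toy form (Python; a sketch of the general decoding scheme, not any particular model's internals):

          import math
          import random

          def sample_next_token(logits, temperature=1.0):
              # The forward pass is deterministic: the same input always yields the same logits.
              # Randomness enters only here, when one token is drawn from the distribution.
              scaled = [x / temperature for x in logits]
              m = max(scaled)                              # subtract the max for numerical stability
              weights = [math.exp(x - m) for x in scaled]
              return random.choices(range(len(weights)), weights=weights, k=1)[0]

        Nothing in that loop checks a fact; it only decides which token is likely to come next.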

      • by abulafia ( 7826 )
        "Sure, the answer might be wildly incorrect, but it might get it right if you shake the Magic 8 Ball again."

        What the fuck is this? Sunrise is a solved problem. We know how to calculate sunrise.

        That one of the richest companies on the planet, the one that claims to be "organizing the world's information", publishes some idiotic tool to do that routine thing wildly incorrectly is just fucking stupid.

        That them doing so apparently motivates you to defend them is... really weird.
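
        For the record, it's a few lines with the third-party astral library (a sketch; the coordinates are made up and the result comes back in UTC):

          from datetime import date
          from astral import LocationInfo   # pip install astral
          from astral.sun import sun

          loc = LocationInfo("Toronto", "Canada", "America/Toronto", 43.65, -79.38)
          times = sun(loc.observer, date=date(2026, 1, 2))
          print(times["sunrise"])           # deterministic astronomy, no LLM involved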

        • Since I can get it to learn how to post simple ASCII for slashdot, do you think I can get it to learn that sunrise at 3pm is wrong, and remember that?

          Also, has it ever made a grammar or spelling mistake, and doesn't that tell you something about its ability to do context-sensitivity better than most humans?

        • The query starts off as natural language, gets converted into something structured, gets a result, gets converted back to natural language.

          You expect it to convert your natural language into GetSunriseTime(location), but it sounds like it instead did ExtractTime(some claim about sunrise somewhere).InTimezone(location).

          Which is clearly suboptimal. But natural language always has ambiguity.
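
          As a toy sketch of the two paths (both helpers are hypothetical stand-ins, not anyone's real pipeline):

            def get_sunrise_time(location):
                # The path you'd want: route the recognized intent to an exact computation.
                return f"sunrise for {location}, from a deterministic astronomical calculation"

            def extract_time_from_web(query, location):
                # The path described above: paraphrase whatever the web happens to say.
                return f"a time scraped from some page matching '{query}', re-labeled for {location}"

            def answer(query, location):
                if "sunrise" in query.lower():
                    return get_sunrise_time(location)
                return extract_time_from_web(query, location)

            print(answer("when is sunrise", "my town"))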
          • by abulafia ( 7826 )
            I understand why it happens. What I was (profanely and tersely, I admit) questioning is management's competence. The product is awful in multiple ways ranging from hilariously wrong to dangerous. They know this.

            They also know quite a bit about providing useful, accurate information. Whatever else you can say about Google, they are really good at extracting signal from unstructured data. Not quite as good as they are at extracting money from advertisers, but really good.

            And yet they're choosing to push this anyway.

            • Google is betting there's something here, and I think they are right. The tech isn't ready, but it will be, or something derived from it. Right now they're throwing it at everything. The killer use cases are being found, along with a lot of useless ones.
                • Just because they think so doesn't mean it is the case. They tried to do the same shit with Google Plus and the glasses. The former is dead, the latter has a rather small niche existence. I can't recall any successful Google product that became popular by being shoved down users' throats.

                • Can't argue with that. But Glass was just too soon. Glasses make sense as a form factor with something like Gemini driving them.
  • The article says that a pap test is not a cancer test, while Google's AI said that it was. My sources say that a pap test is a cancer screening test, so the article seems to be nitpicking the difference between cancerous and precancerous cells.
    • A pap smear screens for abnormal (precancerous) cervical cells, and modern versions often test for HPV (human papillomavirus) as well; the name, for what it's worth, comes from Dr. Papanicolaou, who developed the test, not from "papilloma". HPV leads to an elevated risk of cervical cancer, so the pap smear is meant to give an early warning that you may be at risk before cancer develops. But the article is correct: a pap smear is a screening test for cervical cancer risk, not a test for vaginal cancer.

  • by taustin ( 171655 ) on Friday January 02, 2026 @05:48PM (#65897751) Homepage Journal

    Prosecute the CEO for practicing medicine without a license.

    Pity it will never happen.

    • "Disclaimer: Google, its subsidiaries, and corporate affiliates do not provide medical advice."

      Right up there with "Caution: contents hot" on coffee cups.

      This is America. No one will stop you from wasting your hard-earned currency on quack pills, lottery tickets, and the like.

      • Right up there with "Caution: contents hot" on coffee cups.

        Ugh, this bullshit again. At this point you've chosen to stay ignorant.

        Any company selling food needs to follow the relevant food safety laws, and those laws include regulations around serving things at extreme temperatures. McDonald's had been told multiple times that they weren't compliant with those safety regulations. They willfully chose to ignore them, figuring the cost of settling lawsuits (there were multiple private settlements prior to the famous one) was less than the profit they'd make selling hotter coffee.

        • by taustin ( 171655 )

          Trying to explain the facts on the McDonald's coffee case is hopeless. "Lawsuits are ridiculous" is a religious cult; people who believe that case was ridiculous can't accept any facts that conflict with that belief, and literally everything they "know" about the case, except that it happened, is incorrect.

  • All you have to do to avoid being infected is just be healthy [forbes.com].
  • by maiden_taiwan ( 516943 ) on Friday January 02, 2026 @06:33PM (#65897823)

    This happened to me today. I googled the possible interactions between two particular drugs, and the AI summary said they can be dangerous to take together. Every medical website I visited said they're safe to take together. So did my pharmacist and my doctor.

    • Did it have the little links citing its sources? Sometimes the AI summary paragraphs have those links and sometimes they don't.
      • by narcc ( 412956 )

        It doesn't matter. All the little links mean is that text from those pages was included in the context. It will happily produce responses in direct contradiction to the sources provided. Remember, it is not producing a summary of the linked page; these things can't actually summarize text, only produce text that looks like a summary.

    • This happened to me today. I googled the possible interactions between two particular drugs, and the AI summary said they can be dangerous to take together. Every medical website I visited said they're safe to take together. So did my pharmacist and my doctor.

      This could never happen to me. I've instructed uBlock to deep-six that shit, and on those rare occasions when I use Google instead of DDG I see only a large white space where the huckster nonsense used to be.

      Scrolling past the white space is annoying, but not nearly as annoying as scrolling past their AI foaming at the mouth used to be.

  • by Somervillain ( 4719341 ) on Friday January 02, 2026 @06:50PM (#65897849)
    I experienced this with a medication a family member's doctor suggested. When you googled it, the VERY TOP answer said it could cause one of the things it was supposed to stop; that was the default Google response. When you scroll down, you see it's the opposite. It's one thing for an AI to be unreliable. It's more concerning when it's the DEFAULT TOP ANSWER in a search result.

    I'm smart enough to be skeptical, but my aunt wasn't. I don't fear them duping me. I fear them duping my extended family, especially the elderly half. It was tough enough to get them on computers and phones and online...now I have to tell them to not trust Google, of all things.
    • by PPH ( 736903 ) on Friday January 02, 2026 @07:09PM (#65897879)

      The best a search engine (Google or any other) can do is find occurrences of various words or phrases in the same document. Search engines have absolutely no sense of semantics, so a drug name is just as likely to turn up next to the conditions it's intended to treat as next to its side effects. It's just words in proximity to each other.

      AI is pretty much the same probabilistic crap on steroids.
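
      A naive proximity scorer makes the point (a toy sketch, not any real engine's ranking code):

        def proximity_hits(words, term_a, term_b, window=10):
            # Count co-occurrences of the two terms within `window` words of each other.
            # The score is symmetric and meaning-blind: "treats" and "causes" land in
            # the same window just as easily.
            pos_a = [i for i, w in enumerate(words) if w == term_a]
            pos_b = [i for i, w in enumerate(words) if w == term_b]
            return sum(1 for a in pos_a for b in pos_b if abs(a - b) <= window)

        doc = "this drug treats anxiety but can also cause anxiety in rare cases".split()
        print(proximity_hits(doc, "drug", "anxiety"))   # 2: proximity can't tell treat from cause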

    • Are you saying that pre-AI you could trust google? Or are you saying AI's fluency in English makes it seem more trustworthy than an old-style google search?

      • I think he's saying: it's very hard for the previous generation, who didn't grow up with computers, to navigate the online world today: Up is down, left is right, and intelligence is slop.
      • You can still to this day trust that Google's regular search results will contain your tokens.

        You can not trust that Google's AI search results will contain your tokens. Further, it presents "citation" links which not only do not contain them, they do not support the statements they are connected to.

        Both of those things are fundamental failures, far worse than the problems with even the current non-AI Google search results, let alone the older ones, where they did less thinking for you and less deciding what you wanted to see.

    • To be fair, it’s extremely common for medications to cause the effect they are trying to stop as a side effect. However, the rate is small enough that the general effect is a positive one and only the edge cases don’t benefit. This is also hard to explain to people, especially when they don’t seem to understand what statistically likely means.
  • I use Startpage to avoid all the AI slop.

  • I have experienced this. They need to implement some secondary AI "fact checking" (how, I have no idea) to cut this BS out. I have seen the most absurd explanations, obviously concocted. Extrapolate from that, and I can see how some very dangerous explanations can happen.

    • They need to implement some secondary AI "fact checking" (how, I have no idea) to cut this BS out.

      There is only one way, and it is human verification. A human can think and an AI can't.

      They could improve the results by having it check itself, but it would not fix the problem. Same with using another LLM, which might fix some problems, but cause others.

      Of course a human would also make mistakes, so no matter what, you can't fix it 100%.

      You can obviously find incorrect information with a normal search, but AI can give you incorrect answers both for that reason and that it cannot think.
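
      To be concrete, a self-check loop would look something like this (a sketch assuming a hypothetical call_llm client), and it inherits exactly the weaknesses above:

        def checked_summary(query, sources, call_llm, max_rounds=2):
            # `call_llm` is a hypothetical model client supplied by the caller.
            draft = call_llm(f"Summarize the sources to answer {query!r}:\n{sources}")
            for _ in range(max_rounds):
                verdict = call_llm(
                    "Does the summary make any claim the sources do not support? "
                    f"Reply SUPPORTED or UNSUPPORTED.\nSummary: {draft}\nSources: {sources}"
                )
                if verdict.strip().upper().startswith("SUPPORTED"):
                    return draft
                draft = call_llm(f"Rewrite the summary using only claims found in:\n{sources}")
            return draft   # the checker is itself an LLM, so wrong answers can still slip through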

  • But does the AI know that "Bubba's Bait Shop and Kolege of Medical Knowledge" and "The Mayo Clinic" have very different reliability? Claiming to be just as accurate as the former isn't a particularly strong statement.

  • Taking important advice from an AI or a web search, or even from a doctor, without cross-checking is dangerous. I've had licensed doctors make dangerously incorrect diagnoses, later corrected by other doctors. AI isn't magic; it can be a good source of information, but no one claims it's flawless.

  • If you are stupid enough to ask ChatGPT for health advice you fail, and get what you deserve. This could be an overall positive for the Earth. Fewer morons...

  • by MpVpRb ( 1423381 ) on Saturday January 03, 2026 @01:55AM (#65898419)

    ...AI answers without independent verification deserves what they get

  • "We don't have to." -- Ernestine (Lily Tomlin)

    I'm pretty sure Google knows an unacceptably large number of its AI summaries are wrong, and they don't care. Just like they don't care their search engine has turned to crap. They have no reason to care.

  • The question is not whether the answers align with those of medical professionals.

    Much more important, from the business and legal responsibility standpoint, is whether they properly align with pseudoscience championed by Kennedy and his merry band of quacks.

    The Guardian is asking the wrong questions.
