Google's 'AI Overview' Wrongly Accused a Musician of Being a Sex Offender (www.cbc.ca) 78

An anonymous reader shared this report from the CBC: Cape Breton fiddler Ashley MacIsaac says he may have been defamed by Google after it recently produced an AI-generated summary falsely identifying him as a sex offender. The Juno Award-winning musician said he learned of the online misinformation last week after a First Nation north of Halifax confronted him with the summary and cancelled a concert planned for Dec. 19. "You are being put into a less secure situation because of a media company — that's what defamation is," MacIsaac said in a telephone interview with The Canadian Press, adding he was worried about what might have happened had the erroneous content surfaced while he was trying to cross an international border...

The 50-year-old virtuoso fiddler said he later learned the inaccurate claims were taken from online articles regarding a man in Atlantic Canada with the same last name... [W]hen CBC News reached him by phone on Christmas Eve, he said he'd already received queries from law firms across the country interested in taking it on pro bono.

Comments Filter:
  • Lies are the new truth, and it's not even on purpose. Evil companies pump out AI-generated pages by the galaxy-load to attract clicks for ads, so whatever they scrape, they repeat. Google shoves to the top whatever it sees the most. Eventually we will need private, paid-for, human-curated internets.

    • How they do it does not change accountability for doing it. I hope he wins bank!
      • by PPH ( 736903 )

        It's not how. It's why. If they can't prove intent, what sort of judgement do you think he'll get?

        • by bsolar ( 1176767 )

          It's not how. It's why. If they can't prove intent, what sort of judgement do you think he'll get?

          In Canada "lack of intent" is not a defense against defamation. The possible defenses are:

          • Truth: does not apply as the statement in question is agreed to be false.
          • Fair comment: does not apply as the statement is clearly not presented as opinion but fact.
          • Privilege: does not apply as this defense mainly covers public proceedings and this is not one.
          • Responsible communication: this applies if the matter is considered of public interest, but it requires the defendant having exercised responsible diligence, which
          • by PPH ( 736903 )

In Canada a defamation case against Google would be pretty solid.

Good point. We (here in the USA) have a different right to freedom of speech than the rest of the world does. Google (and other services) may have to consider this even though they are located in the USA. The services they offer are available worldwide.

    • Turning lies into "truths" is not a new problem. It goes back centuries.

      I'm reminded of the book Nexus by Yuval Noah Harari, which discusses the consequences of (mis)information handled by the currently-emerging AI technologies. He gives an overview of the history of information-handling in human society, leading up to the present. In short, humans have struggled to find a balance between more central control of information (which helps to maintain order) and more distributed systems (which support self-cor

    • Don't be so melodramatic! Google's AI probably just trained on the latest Epstein files, and "generalized" (I'm generalizing)
      • Implausible. If that were the case, it would have accused the guy of being a ___ ___, and not a sex offender.

  • A: The cost of the lessons!

...Google AI Overview without verification.
It has a long history of errors.

To be fair, the error made here is one that a human would reasonably make as well.

So long as it's an honest mistake, I doubt there's much of a case to be had here. They'll probably have to settle because that's how this usually goes, and there's novelty to the case in that the error was made by AI rather than a human. But the error of "two people share the same name, and one of them is X" is indeed a reasonable one to make.

    • I think that's easy to fix. Just set the parameter "has_an_army_of_expensive_lawyers" to true for anybody, not just Disney, Musk, Trump, Zuckerberg, Bezos et al, and the AI will start being a bit more careful in what it spouts for people.
    • by gurps_npc ( 621217 ) on Sunday December 28, 2025 @02:43PM (#65886585) Homepage

      No it is not. That error is fine for an individual blogger to make.

      But information services are supposed to have this thing called fact checking.

      The fact their business model forgoes it does not remove their responsibility. If you design a new vehicle that does not have brakes, you do not get a pass on crashes.

      • by Luckyo ( 1726890 )

        Nonsense. Search doesn't "fact check". Search delivers a list of things that fit the search criteria. That's it.

In the past, search delivered a list of links and excerpts from web sites related to the user's query and contained no unique material generated by the search provider. However, an AI-generated summary contains information gathered by the search provider's algorithms and is published as a statement of fact by the search provider. It may be that they have a claim in the ToS disclaiming liability for the veracity of that information, but simply making a claim doesn't mean it
          • by Luckyo ( 1726890 )

            >is published as a statement of fact by the search provider.

            This is a self-evident lie. Every google AI embedded summary includes the footnote which clearly states the following:

            >AI responses may include mistakes.

            Followed by a link to this support article.

            Your entire argument hinges on an obvious falsehood.

          • by Luckyo ( 1726890 )

            And formatting fail. This is the support article in question:

            https://support.google.com/web... [google.com]

From the article, here is Google's "oopsie" statement:

          Google Canada spokesperson Wendy Manton issued a statement saying Google's "AI overviews" are frequently changing to show what she described as the most "helpful" information. "When issues arise — like if our features misinterpret web content or miss some context — we use those examples to improve our systems, and may take action under our policies."

          So they are saying they might evolve the results.

          This guy is a musician whose public profi

          • by Luckyo ( 1726890 )

            It's a corpospeak copypasta saying "we admit no fault, we improve all the time".

If you're gleaning any deep meaning from a corpospeak copypasta, you have a problem.

        • by gweihir ( 88907 )

          This is not search though. This is a summary and hence falls under the fact-checking requirement. As Google is about to find out.

          • by Luckyo ( 1726890 )

As noted above, whatever "requirement" you believe exists is unlikely to exceed a "would a human make a similar error" standard.

And in this case, it's self-evident that a human would (and humans have, many times in the past) make the same error. Even in journalism, where there's an editorial standard, etc.

            • To be clear, they do *not* have the same name. It seems extremely unlikely that a human would've made that mistake. Conflating two people with different names because part of one person's name was mentioned in the vicinity of part of another person's name is not a mistake humans tend to make.

              If they did, it would clearly be gross negligence and very likely result in consequences. As it should here, because "the algorithm did it" has never been, and will never be, a loophole to escape liability.
              • by Luckyo ( 1726890 )

                Are you accusing the victim of lying? From the OP:

                >The 50-year-old virtuoso fiddler said he later learned the inaccurate claims were taken from online articles regarding a man in Atlantic Canada with the same last name

                • by dryeo ( 100693 )

And a different first name is implied. Lots of people have my last name, and even my wife's rare maiden name has at least half a dozen instances in N. America.

                  • by Luckyo ( 1726890 )

I can understand the confusion with "last names are generally about a very narrow group of people, so it's generally safe to assume they're the same person" if you're African or Polynesian.

                    Mediterranean aristocratic style family name arrangement is used for basically everyone else across the world, and has been for at least a century.

What's interesting here is that as a professional musician, this guy is a public figure and the "actual malice" standard for defamation applies, a standard that was designed when defamation could only be done by a human being.

      This requires the defendant to make a defamatory statement either (1) knowing it is untrue or (2) with reckless disregard for the truth.

      Neither condition applies to the LLM itself; it has no conception of truth, only linguistic probability. But the LLM isn

      • by gweihir ( 88907 )

        Google knowingly and recklessly put a mechanism in place that can make statements that are untrue or disregard the truth. I think that qualifies nicely, even if a bit indirectly. If you pay somebody to defame somebody else for you, you are in hot water as well. Same principle.

      • as a professional musician, this guy is a public figure and the actual malice standard for defamation applies

The standard set by the Supreme Court (Gertz v. Welch) for a private citizen plaintiff to be treated as a public figure in a defamation trial was "pervasive fame or notoriety." It's not clear that this guy meets that standard (though I imagine his relative Jack White might). I hope we can at least agree that most professional musicians are not considered public figures in this sense. They'd have a hell of a time if they were.

    • Re:To be fair (Score:5, Insightful)

      by ArchieBunker ( 132337 ) on Sunday December 28, 2025 @02:45PM (#65886593)

Notice how AI slop never calls Sergey Brin or Larry Page a pedophile.

    • Re:To be fair (Score:5, Insightful)

      by dskoll ( 99328 ) on Sunday December 28, 2025 @03:07PM (#65886621) Homepage

To be fair, the error made here is one that a human would reasonably make as well.

      Certainly not. In Canada, at any rate, Ashley MacIsaac is pretty well known and nobody would have made that mistake. Also, any human putting out a statement that "$SOMEONE is a sex offender" had better make sure of their facts first; a modicum of Internet searching on a platform other than Google would have cleared this up.

      • by Luckyo ( 1726890 )

        You're assuming that people are into that specific kind of fame.

        Most people don't have any idea about musicians. It's not in their field of interest, just like most people on slashdot have no idea who the famous paleontologist from their nation is.

    • Maybe. But a human would hopefully double-check the information before making public accusations of this magnitude.

      • by Luckyo ( 1726890 )

First time on the internet, I see. Welcome.

        • True, and yet a human would likely still be liable for making the false statement and a defense of "I heard it somewhere" will likely not get the person very far in court.
    • by Tablizer ( 95088 )

      If a human goes around accusing one of being a sex offender, they can be sued for defamation. (There are certain exceptions in the US for accusations against celebrities and politicians.)

Is it possible the LLM was more likely to hallucinate that accusation because the word for the musician's instrument is also used in a slang term for the accusation?
    • Serious answer: No?

      Longer answer: I think even an LLM ought to know the difference between the slang name for a musical instrument (or the act of playing it) and a sexually-deviant activity. Oh, and TFA explains that the actual sex-offender had a name similar to the fiddler.

      • by gweihir ( 88907 )

        I think even an LLM ought to know

Let's break that down: First, LLMs do not "know" things. They are not knowledge engines. And, second, you clearly do not "think" here.

        • I am pretty sure you are a human. I have seen many of your posts here, and often find myself agreeing with you. And I'm also pretty sure we both know how to think.

          And that's why I'm confident in assuming you know what it means to speak figuratively. Which is what I was doing.

Is it possible the LLM was more likely to hallucinate that accusation because the word for the musician's instrument is also used in a slang term for the accusation?

      The LLM didn't hallucinate the accusation. It was a case of mistaken identity, per the summary. The accusation was real, just for a different person of the same name.

    • by allo ( 1728082 )

In principle, yes. There are two main points: word embeddings group similar words together, so that cat, dog, bird sit closer than cat, chair, winter in a text embedding, which makes it possible to calculate similarity by meaning (instead of, e.g., alphabetical order). The second point is (self-)attention, which weights the existing words in the context against the words to be generated.
In practice this is unlikely, and even smaller LLMs won't get confused by that. Many also understand these things well enough to co
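The embedding-similarity point above can be sketched with toy vectors. The 3-dimensional numbers here are made up purely for illustration; real embeddings have hundreds of dimensions and are learned from text, not hand-assigned:

```python
import math

# Hypothetical toy embeddings (dimensions might loosely encode
# "animal-ness", "furniture-ness", "season-ness"). Invented values.
embeddings = {
    "cat":    [0.9, 0.1, 0.0],
    "dog":    [0.8, 0.2, 0.0],
    "bird":   [0.7, 0.1, 0.1],
    "chair":  [0.1, 0.9, 0.0],
    "winter": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity: near 1.0 for near-parallel vectors, near 0.0 for orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# "cat" is measurably closer to "dog" than to "chair" or "winter".
print(cosine(embeddings["cat"], embeddings["dog"]))     # high
print(cosine(embeddings["cat"], embeddings["chair"]))   # low
```

With these toy numbers, cat/dog scores about 0.99 while cat/chair scores about 0.22, which is the "closer by meaning" grouping the comment describes.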

  • ...proving malicious intent.

  • Consequences? LOL! (Score:4, Insightful)

    by dskoll ( 99328 ) on Sunday December 28, 2025 @03:04PM (#65886611) Homepage

    Of course there will be no meaningful consequences for Google. Doing something that would ruin the average shmo is just swatted away as an annoyance by the oligarchs.

    • by Tablizer ( 95088 )

      I hope this guy can successfully sue the royal pants off Google. Google won't change unless they get a big juicy metal boot to their wallet.

An LLM cannot defame someone. And unless you can prove that Google set up their LLM specifically to defame the artist, there isn't a case to be had. If anyone should be sued, it's the venue, for canceling based on faulty and unproven information.

    • An LLM cannot defame someone.

      That depends on whether you grant the LLM any agency. It can certainly output something that is defamatory. Which in fact happened in this case.

      And unless you can prove that Google setup their LLM specifically to defame the artist, there isn't a case to be had.

      I don't think the bar is that high. If you can prove that Google failed to take adequate precautions against their LLM doing something like this, then it seems to me that you can base your suit on negligence.

      If anyone should be sued it's the venue for canceling based off of faulty and unproven information.

      The venue (the Sipekne'katik First Nation) already apologized to MacIsaac quite sincerely, and it appears MacIsaac accepted the apology, because he'd still like

      • by Bahbus ( 1180627 )

        That depends on whether you grant the LLM any agency. It can certainly output something that is defamatory. Which in fact happened in this case.

Eh, it can output material that can be used to defame someone, but it falls short. You need four things:
• False statement, which they have.
• Published/communicated, which, yeah.
• Fault, this is where it's unlikely. Depending on whether he is counted as a "public figure" or not, they need either proof of malice or proof of negligence. As much as people would like to believe this is Google being negligent, I'm more than positive they've worked really hard to try and prevent cases like this from happening. And there

        • You clipped the part of my post where I address potential negligence on Google's part, and then you appear to raise the issue as though I hadn't. Here's a reminder of what I posted:

          If you can prove that Google failed to take adequate precautions against their LLM doing something like this, then it seems to me that you can base your suit on negligence.

          You are confident that Google "worked really hard" to keep this from happening. And yet it did. So, it appears to me that Google's precautions were inadequate.

Even if Google made a good-faith effort to keep their LLM from making defamatory statements, the courts could still find them negligent. Because their efforts weren't enough.

          • by Bahbus ( 1180627 )

I was acknowledging your address of it with my questions. I just didn't feel like breaking the continuity of my post to quote different portions of yours.

            And yet it did. So, it appears to me that Google's precautions were inadequate.

And probably will again, despite whatever fixes I'm sure they already implemented immediately after finding out about this. It's not whether or not the precautions were adequate to prevent the occurrence. It's about whether they were reasonable precautions. And, like I said, given this hasn't happened more often or with bigger, actually well-known celebrities

    • by gweihir ( 88907 )

      If you set up and run a machine that defames somebody, you are 100% liable. Seriously. This does not even need computers involved.

      • by Bahbus ( 1180627 )

        If you set up and run a machine that defames somebody, you are 100% liable.

Sure, if I set it up negligently or to purposely defame someone. Can you confidently say Google took no precautions against this sort of thing? I can't. If they didn't take any precautions, this would happen way more often and with bigger, real celebrities. And, if the defamed person rises to "public figure" status, then you'd need to prove that I did it maliciously. I'm not sure if MacIsaac counts as a "public figure", but if he does... do you honestly think Google maliciously targeted a random minor Canadian musician?

        • by gweihir ( 88907 )

This is a professional product, so the standard for simple negligence applies. Google is quite guilty of that, because their AI tools have done it before. All it needs is that their thing did it (it clearly did) and that they did not have adequate safeguards in place (they clearly did not). And three years into this AI hype they cannot even argue that this was surprising behavior, which is basically the only defense they have left.

          • by Bahbus ( 1180627 )

This is a professional product, so the standard for simple negligence applies.

            Which doesn't matter if MacIsaac counts as a "public figure", because they'll need proof of malice instead.

            Google is quite guilty of that, because their AI tools have done it before.

            Prior guilt does not prove current guilt.

            did not have adequate safeguards in place

Adequate doesn't matter. It's whether or not reasonable safeguards are in place, about which you have zero factual information because you haven't seen the source code.

            And 3 years in this AI hype they cannot even argue that this was surprising behavior, which is basically the only defense they have left.

            There are probably plenty of other defenses beyond arguing fault. Could easily argue that the output of Google's AI Overview counts as neither a form of publication nor communication. If that doesn't f

            • by gweihir ( 88907 )

              Google is quite guilty of that, because their AI tools have done it before.

              Prior guilt does not prove current guilt.

              What is it with you mindless, insightless idiots? Obviously I was pointing out that Google _knows_ their machine can do it because it has done it before. Are you completely dumb or just a massive Stockholm syndrome sufferer? At least make a MINIMAL attempt.

              • by Bahbus ( 1180627 )

                Obviously I was pointing out that Google _knows_ their machine can do it because it has done it before.

                This statement is useless in a court of law and doesn't mean anything.

                just a massive Stockholm syndrome sufferer?

                Stockholm Syndrome isn't a real thing, retard.

  • ... please form an orderly line on the right.

Google has already been sued for this very thing. [bloomberglaw.com].

  • You can stop speculating how the AI failed. The key quote of the article is: "The 50-year-old virtuoso fiddler said he later learned the inaccurate claims were taken from online articles regarding a man in Atlantic Canada with the same last name."

Looks like most information was right (or as right as the claims against his double are), but were linked with the wrong person with that name. I think such things happened with Google's knowledge graph way before LLMs were a thing. I also read some time ago an interesting article about someone sharing name and birth day (I don't remember, but possibly even birth city) with another person and getting confused over that all the time even by authorities. If that happens to you, you can only hope your double won't be a criminal.

    • by MikeKD ( 549924 )

      You can stop speculating how the AI failed. The key quote of the article is: "The 50-year-old virtuoso fiddler said he later learned the inaccurate claims were taken from online articles regarding a man in Atlantic Canada with the same last name."

      Looks like most information was right (or as right as the claims against his double are), but were linked with the wrong person with that name. I think such things happened with Google's knowledge graph way before LLMs were a thing. I also read some time ago an interesting article about someone sharing name and birth day (I don't remember, but possibly even birth city) with another person and getting confused over that all the time even by authorities. If that happens to you, you can only hope your double won't be a criminal.

      Your apologia for the defaming multi-billion dollar company is wrong: they share only a last name.

      • by allo ( 1728082 )

I am the last one to defend Google, but I still like reporting that doesn't hide the key issue, in particular when the original article states it that clearly. Let's be fair, even to the unsympathetic tech giants. And if you read what I quoted (and what you quoted from my post), it mentions that they share the last name. You don't even need to read the full article; it's already in the caption of the image at the top, really hard to miss.
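The conflation failure discussed in this subthread (facts keyed on too little of a name getting attached to the wrong person) can be sketched in a few lines. The records, names, and matching rules below are invented for illustration; this is a toy model of the failure mode, not a claim about how Google's knowledge graph or AI Overview actually works:

```python
# Toy "entity resolver": keying records on last name alone merges
# facts about two different people; keying on the full name does not.
# All names and facts below are fictional, chosen only to mirror the
# "same last name, different person" error from the story.
articles = [
    {"subject": "John Example", "fact": "convicted in Atlantic Canada"},
    {"subject": "Jane Example", "fact": "award-winning musician"},
]

def naive_merge(articles):
    """Group facts by last name only -- too coarse a key."""
    profiles = {}
    for a in articles:
        last_name = a["subject"].split()[-1]
        profiles.setdefault(last_name, []).append(a["fact"])
    return profiles

def safer_merge(articles):
    """Group facts by the full name -- keeps the two people apart."""
    profiles = {}
    for a in articles:
        profiles.setdefault(a["subject"], []).append(a["fact"])
    return profiles

# The naive key collapses both people into one "Example" profile,
# attaching the conviction to the musician; the safer key does not.
print(naive_merge(articles))
print(safer_merge(articles))
```

The point of the sketch is that the choice of merge key, not any single bad source, is what produces the defamatory combination.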

  • wrong kind of fiddler
That the Indians made a pretty dumb move by not checking the facts themselves.
