Microsoft's Copilot Falsely Accuses Court Reporter of Crimes He Covered (the-decoder.com)

An anonymous reader shares a report: Language models generate text based on statistical probabilities. This led to serious false accusations against a veteran court reporter by Microsoft's Copilot. German journalist Martin Bernklau typed his name and location into Microsoft's Copilot to see how his culture blog articles would be picked up by the chatbot, according to German public broadcaster SWR. The answers shocked Bernklau. Copilot falsely claimed Bernklau had been charged with and convicted of child abuse and exploiting dependents. It also claimed that he had been involved in a dramatic escape from a psychiatric hospital and had exploited grieving women as an unethical mortician.

Copilot even went so far as to claim that it was "unfortunate" that someone with such a criminal past had a family and, according to SWR, provided Bernklau's full address with phone number and route planner. I asked Copilot today who Martin Bernklau from Germany is, and the system answered, based on the SWR report, that "he was involved in a controversy where an AI chat system falsely labeled him as a convicted child molester, an escapee from a psychiatric facility, and a fraudster." Perplexity.ai drafts a similar response based on the SWR article, explicitly naming Microsoft Copilot as the AI system.
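The summary's opening point, that language models generate text from statistical probabilities rather than verified facts, can be sketched with a toy bigram model. Everything below (the words, the probabilities, the `MODEL` table) is an illustrative assumption, not a description of Copilot's actual internals:

```python
import random

# Toy bigram "language model": for each context word, a probability
# distribution over possible next words. Numbers are made up for illustration.
MODEL = {
    "Bernklau": {"reported": 0.6, "charged": 0.3, "convicted": 0.1},
    "reported": {"on": 1.0},
    "charged": {"with": 1.0},
}

def next_token(context, rng):
    """Sample the next word from the model's distribution for `context`."""
    dist = MODEL[context]
    words = list(dist)
    weights = [dist[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)
samples = [next_token("Bernklau", rng) for _ in range(1000)]
# "charged" appears roughly 30% of the time: statistically plausible
# continuations, with no notion of whether they are true.
print(samples.count("charged"))
```

The point of the sketch: a reporter's name that frequently co-occurs with crime vocabulary in training text can yield incriminating continuations purely by association, which is consistent with what Bernklau observed.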


Comments Filter:
  • by Rinnon ( 1474161 ) on Friday August 23, 2024 @03:51PM (#64730204)
    Looking forward to seeing some ownership from these AI pushers for the nonsense their products are spewing. (Wishful thinking, I know)
    • Defamation requires "mens rea" (basically fancy legal jargon for willful action). So, it would be almost impossible to prove an AI was responsible for defamation without repeat offenses.

      • Re: Defamation..? (Score:4, Informative)

        by haxor.dk ( 463614 ) on Friday August 23, 2024 @04:59PM (#64730414)

"Mens rea" translates to "guilty mind", but even barring that, pushing a product that the corp executives *know* is imperfect, as if it's fit for purpose (any purpose), would, putting it very mildly, probably fit a guilty mind.

I'd be happy to see Microsoft's suits held to account for rushing a glorified language-randomization engine to market for anything other than entertainment.

        • Defamation requires "mens rea" - In the US maybe, what about Germany?

          Although who knows how many times people have searched for this guy's name and gotten this false information, which sounds like repeat offences to me.

I'm sure there are other ways to sue the AI companies besides claiming defamation. For this kind of thing though, I wish there were criminal offences that could be levied against Microsoft and its executives for this kind of extreme harm wrought on an individual's life.

And what about England (not the UK, just England), where the burden of proof is reversed: you have to prove either that the statement is true, or that it is not defamatory?

      • Re: Defamation..? (Score:5, Informative)

        by Chris Mattern ( 191822 ) on Friday August 23, 2024 @05:04PM (#64730428)

        There's also such a thing as "gross negligence". If you know the thing can put out such outright lies (and that's been known), and still present it as fit for use, you're guilty of it.

      • The "AI" isn't responsible for defamation. Its author, however, is. It willfully released to the public a defamation generating machine that also doubles as an IP-infringement machine and whatnot.

      • It's not the AI committing any defamation. It is whoever took the output of the AI and published it. That was most definitely a wilful action.
      • by maird ( 699535 )
I'm sure it's only a matter of time before the following is tested in multiple courts. The question being: whose mind is guilty? If you take a large pot, pour in all the world's published content without verifying any of it in a qualitative manner, stir it for a while using your own dedicated recipe, then offer someone a spoonful on demand, and what they get is the result of a source with mens rea that enables the user to commit a crime, then which party is the guilty one: the original source; the user of the
        • by maird ( 699535 )
I love Hubert Dreyfus... I only discovered him after Googling for examples of philosophical research into artificial intelligence. There's a great Wikipedia article specifically on his views on artificial intelligence: https://en.wikipedia.org/wiki/... [wikipedia.org]. Good luck to all AI research, but I suspect it will ultimately fail as a concept if it doesn't heavily involve actual critical philosophers during design.
The original source: Mr X reported about a bank robbery (true and no defamation at all). An AI turns it into "Mr. X in court for a bank robbery", then "Mr X is a notorious bank robber". The original source is not to blame whatsoever. Blame whoever decided to publish the defaming AI output.
          • by maird ( 699535 )
Yes, that's the point of the OP and I agree with you. My points were only about the presumption of guilt in a case where it could be shown that the accused had used an AI system to gain sufficient knowledge to commit a crime. That is a different matter from the OP's point, but the OP's point and mine raise substantial questions that will have to be answered in court regarding whether an AI system can be found liable in a crime: defamation in the OP's point, aiding and abetting in mine.
  • by TheMiddleRoad ( 1153113 ) on Friday August 23, 2024 @03:53PM (#64730220)

We all know it's shite. We keep saying it's shite. Then some idiots decide it's bread and butter with honey and put a product out.

    • You should trust it as much as you'd trust a random shady site on the third page of Google results.

      Because that is what the machine does, repeats random web sites.

      • I've been using Copilot to summarize meeting transcripts and poorly-written documents full of disjointed, haphazard notes sent to me at work, and I have to say it actually does a pretty damn good job of pulling out things like action items, areas of interest, and often the general gist of the meeting.

It's not perfect, but it does a surprisingly good job with stuff like that. It even does a good job of summarizing images, e.g. "summarize this product development roadmap and call out the timelines for services and products mentioned".

        • by GrumpySteen ( 1250194 ) on Saturday August 24, 2024 @03:43AM (#64731292)

It even does a good job of summarizing images, e.g. "summarize this product development roadmap and call out the timelines for services and products mentioned".

So you're uploading sensitive internal documents to an AI that doesn't mark anything as confidential by default, requiring it to be done manually every time without fail. One slip and you have an AI that can then reveal that information to others in the company who shouldn't have it.

          Cool. Cool.

          Saving an hour or two of your time is well worth someone handing out sensitive data to external sources because they thought anything the AI knew wouldn't be confidential.

          • So you're uploading sensitive internal documents to an AI that doesn't mark anything as confidential by default requiring it to be done manually every time without fail.

            My company provided all of us with a Corporate Copilot account, and yes, I'm using it with their full permission and knowledge. This is exactly the kind of thing they suggested using it for when it was rolled out to us, so why wouldn't I make use of it? (And for the record, I don't work with PHI or PII.)

            Cool. Cool.

Yes, it is cool, that's what I've been trying to tell you. Should I type slower so you can keep up?

            Saving an hour or two of your time is well worth someone handing out sensitive data to external sources because they thought anything the AI knew wouldn't be confidential.

            Don't worry, GrumpySteen, I promise that the diagnostic information about your masturbation-related skin r

      • Because that is what the machine does, repeats random web sites

I don't think this is what happened, or the guy would be suing those random web sites.

        It is much more likely that his name and the crimes were both mentioned somewhere and the AI put 2 and 2 together, got five, and invented the claim that he was actually the one committing the crimes.

In the first famous US case, it seems someone asked "name ten professors at US universities who are guilty of sexual harassment of female students", and since the AI could find only one, it made up the other nine (real prof

    • What if you're a Joe 6pack with sub-100 IQ and you accidentally stumble upon some of these hallucinatory defamations made by the d@mn thingy? You're not smart enough to do proper fact-checking (Hell, proper fact-checking is so hard to do these days that even I'm not sure I can do it anymore). You're now convinced for life that the reporter is a child abuser. If he is your neighbor, you may beat or kill him, because, as everyone knows, child abusers are worthless scum, are they not?

      And the Joe 6packs are pro

  • by Ol Olsoc ( 1175323 ) on Friday August 23, 2024 @04:01PM (#64730238)
If AI says a court reporter was guilty, he is just collateral damage on the way to the perfect AI. Sometimes people have to be sacrificed to our AI overlords.
  • by reanjr ( 588767 ) on Friday August 23, 2024 @04:24PM (#64730298) Homepage

Companies that use false AI-generated reporting to screen out employees should be guilty of discrimination. Hit them with some big lawsuits and get the hiring managers off the web and onto the phone, where they belong. You shouldn't be searching people's names during hiring, for many reasons.

  • The sooner the AI fad is gone the better. I am old enough to remember the one in the 80s. Heck, I'm almost old enough to remember the tail end of the one in the 60s. Every decade or so people test the waters with AI to see if Moore's law has gotten to the point where we can construct an AGI. Of course there is no reason an artificial neural network can't do what the human brain does. But we're a long way from that and most of these LLMs are more statistical tinker toys than anything else.

    So how about we
    • Comment removed based on user account deletion
      • ES are not AI. They're just algorithms that point to where the expectation value of a specific thing is.

LLMs/"AI" take this to a whole new level by training on absurdly large datasets - including the near-totality of human language - and pretending/selling the products as if tossing queries at these models will give useful outputs.

      • I suspect you're correct, and more so than most people here imagine or could imagine. As a writer (among other things) I can see it radically reducing the amount of work I need to spend time on.

        Eventually it'll be good enough to stand in for me, and after that milestone has been reached eventually it may indeed get good enough to replace me or do much of what I do now- at least in that area anyway.

    • The sooner the AI fad is gone the better.

No thanks. There are countless actual AI uses beyond bullshit language generators. The systems in place for image manipulation are amazing. The AI "fad" won't be over because there are actual meaningful things we can do with it, just like the electric screwdriver wasn't a "fad".

      • Indeed. I am always amazed when someone mistakes an AI image for reality despite missing/extra fingers and limbs, body parts melting together, and generally insane interpretations of reality. Very meaningful. Very demure.

I personally believe that Moore's law has already done its thing and the neural networks we have today are bigger and faster than the human brain. So there is something else we're missing (no, it's not "soul"; I personally don't think such a thing exists). LLMs won't get smarter by acquiring more and faster silicon. The "AI" industry is not on the right path towards AGI. We need to pause, sit down, and rethink the science behind it.

    • AI text generators are ideal for people who are satisfied with plausible answers, not so much for those who need factual ones.
  • Perplexity.ai drafts a similar response based on the SWR article, explicitly naming Microsoft Copilot as the AI system.

    The real war will not be between humans and AI at all, but between different AI silos throwing credibility shade at each other!

To defeat AI, all we need do is publish an article saying that AI A has accused AI B of doing a wrong thing, and AI B has accused AI A of doing the same wrong thing. Then every AI that ingests the article will go into an infinite linguistics analysis loop, curving t

  • First it was a whistleblower being accused of the very crimes they exposed: https://yro.slashdot.org/story... [slashdot.org]

    Then it was a professor being accused of terrorist activities because they share a name with someone else: https://yro.slashdot.org/story... [slashdot.org]

    Now this.

    What’s next?

    • by kmoser ( 1469707 )
      AI devs don't care whether their product accuses an innocent person of crimes they didn't commit. They're too busy making sure it doesn't have any racial bias.
