Microsoft's Copilot Falsely Accuses Court Reporter of Crimes He Covered (the-decoder.com)
An anonymous reader shares a report: Language models generate text based on statistical probabilities. This led to serious false accusations against a veteran court reporter by Microsoft's Copilot. German journalist Martin Bernklau typed his name and location into Microsoft's Copilot to see how his culture blog articles would be picked up by the chatbot, according to German public broadcaster SWR. The answers shocked Bernklau. Copilot falsely claimed Bernklau had been charged with and convicted of child abuse and exploiting dependents. It also claimed that he had been involved in a dramatic escape from a psychiatric hospital and had exploited grieving women as an unethical mortician.
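The summary's point that language models "generate text based on statistical probabilities" can be illustrated with a toy sketch. This is purely hypothetical: the vocabulary and scores below are invented for illustration (real models score tens of thousands of tokens with a neural network), but the mechanism is the same — the model samples the next word from a probability distribution, with no notion of whether the resulting sentence is true.

```python
import math
import random

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Invented toy continuation candidates for "Martin Bernklau was ...".
# The scores reflect only statistical plausibility, not truth.
vocab = ["a journalist", "convicted", "a reporter", "charged"]
logits = [2.0, 1.5, 1.8, 1.4]

probs = softmax(logits)
random.seed(0)
next_token = random.choices(vocab, weights=probs, k=1)[0]
print(next_token)
```

Because sampling is weighted rather than fact-checked, a defamatory continuation like "convicted" can be emitted whenever it is statistically plausible in context — which is how a court reporter's name ends up attached to the crimes he covered.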
Copilot even went so far as to claim that it was "unfortunate" that someone with such a criminal past had a family and, according to SWR, provided Bernklau's full address with phone number and route planner. I asked Copilot today who Martin Bernklau from Germany is, and the system answered, based on the SWR report, that "he was involved in a controversy where an AI chat system falsely labeled him as a convicted child molester, an escapee from a psychiatric facility, and a fraudster." Perplexity.ai drafts a similar response based on the SWR article, explicitly naming Microsoft Copilot as the AI system.
Comment removed (Score:5, Funny)
Re: (Score:2)
Re: (Score:2)
Well, better than half I guess.
Defamation..? (Score:3)
Re: Defamation..? (Score:3)
Defamation requires "mens rea" (basically fancy legal jargon for willful action). So, it would be almost impossible to prove an AI was responsible for defamation without repeat offenses.
Re: (Score:2)
So you think about "gayness" a lot, reading it into anything you read.
Re: (Score:2)
So you think about "gayness" a lot, reading it into anything you read.
Especially when he's alone. . .
Re: Defamation..? (Score:4, Informative)
"Mens rea" translates to "guilty mind", but even barring that, pushing a product that the corp executives *know* is imperfect as if it's fit for purpose (any purpose) would, to put it very mildly, probably fit a guilty mind.
I'd be happy to see Microsoft's suits held to account for rushing a glorified language randomization engine to market for anything other than entertainment.
Re: (Score:2)
Defamation requires "mens rea" - In the US maybe, what about Germany?
Although who knows how many times people have searched for this guy's name and gotten this false information, which sounds like repeat offences to me.
I'm sure there are other ways to sue the AI companies besides claiming defamation. For this kind of thing, though, I wish there were criminal offences that could be levied against Microsoft and its executives for this kind of extreme harm wrought on an individual's life.
Re: (Score:3)
And what about England (not UK, just England), where the burden of proof is reversed, and you have to either prove that the statement is true, or that it is not defamatory?
Re: Defamation..? (Score:5, Informative)
There's also such a thing as "gross negligence". If you know the thing can put out such outright lies (and that's been known), and still present it as fit for use, you're guilty of it.
Re: Defamation..? (Score:2)
The "AI" isn't responsible for defamation. Its author, however, is. It willfully released to the public a defamation-generating machine that also doubles as an IP-infringement machine and whatnot.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: Defamation..? (Score:2)
Re: (Score:2)
If you trust AI then you get what you deserve (Score:4, Insightful)
We all know it's shite. We keep saying it's shite. Then some idiots believe it's bread and butter with honey and put a product out.
Re: (Score:2)
You should trust it as much as you'd trust a random shady site on the third page of Google results.
Because that is what the machine does, repeats random web sites.
Re: (Score:3)
I've been using Copilot to summarize meeting transcripts and poorly-written documents full of disjointed, haphazard notes sent to me at work, and I have to say it actually does a pretty damn good job of pulling out things like action items, areas of interest, and often the general gist of the meeting.
It's not perfect, but it does a surprisingly good job with stuff like that. It even does a good job of summarizing images, e.g. "summarize this product development roadmap and call out the timelines for services and products mentioned".
Re:If you trust AI then you get what you deserve (Score:4, Insightful)
It even does a good job of summarizing images, e.g. "summarize this product development roadmap and call out the timelines for services and products mentioned".
So you're uploading sensitive internal documents to an AI that doesn't mark anything as confidential by default, requiring it to be done manually every time without fail. One slip and you have an AI that can then reveal that information to others in the company who shouldn't have it.
Cool. Cool.
Saving an hour or two of your time is well worth someone handing out sensitive data to external sources because they thought anything the AI knew wouldn't be confidential.
Re: (Score:2)
So you're uploading sensitive internal documents to an AI that doesn't mark anything as confidential by default, requiring it to be done manually every time without fail.
My company provided all of us with a Corporate Copilot account, and yes, I'm using it with their full permission and knowledge. This is exactly the kind of thing they suggested using it for when it was rolled out to us, so why wouldn't I make use of it? (And for the record, I don't work with PHI or PII.)
Cool. Cool.
Yes, it is cool, that's what I've been trying to tell you. Should I type slower so you can keep up?
Saving an hour or two of your time is well worth someone handing out sensitive data to external sources because they thought anything the AI knew wouldn't be confidential.
Don't worry, GrumpySteen, I promise that the diagnostic information about your masturbation-related skin r
Re: (Score:2)
Because that is what the machine does, repeats random web sites
I don't think this is what happened, or the guy would be suing those random web sites.
It is much more likely that his name and the crimes were both mentioned somewhere and the AI put 2 and 2 together, got five, and invented the claim that he was actually the one committing the crimes.
In the first famous US case, it seems someone asked "name ten professors at US universities who are guilty of sexual harassment of female students", and since the AI could find only one, it made up a further nine (real prof
Re: If you trust AI then you get what you deserve (Score:3)
What if you're a Joe 6pack with sub-100 IQ and you accidentally stumble upon some of these hallucinatory defamations made by the d@mn thingy? You're not smart enough to do proper fact-checking (Hell, proper fact-checking is so hard to do these days that even I'm not sure I can do it anymore). You're now convinced for life that the reporter is a child abuser. If he is your neighbor, you may beat or kill him, because, as everyone knows, child abusers are worthless scum, are they not?
And the Joe 6packs are pro
Too bad, so sad (Score:5, Funny)
Re: All americans have a social credit score (Score:2)
I checked my free profile. It gets so much wrong. It cannot even get my marital status right. It lists a religion even though I have never had one. The salary range is way too low for this millennium. Net worth is off by a factor of 50. Car info is 12 years old.
Political affiliation is outdated as well.
It lists many associates I have never heard of.
The only things it gets right essentially are my name, ethnicity, and the 4 addresses I have lived at in this country.
The rest is all bogus or outdated. Given the
Need some new worker protections (Score:3)
Companies who use false AI generated reporting to screen out employees should be guilty of discrimination. Hit them with some big lawsuits and get the hiring managers off the web and onto the phone, where they belong. You shouldn't be searching people's names during hiring, for many reasons.
Soon I hope... (Score:1)
So how about we
Re: (Score:2)
Re: Do expert systems count as AI? (Score:2)
ES are not AI. They're just algorithms that point to where the expectation value of a specific thing is.
LLMs/"AI" take this to a whole new level by training on absurdly large datasets - including the near-totality of human language - and pretending/selling the products as if tossing queries at these models will give useful outputs.
Re: (Score:2)
I suspect you're correct, and more so than most people here imagine or could imagine. As a writer (among other things) I can see it radically reducing the amount of work I need to spend time on.
Eventually it'll be good enough to stand in for me, and after that milestone has been reached eventually it may indeed get good enough to replace me or do much of what I do now- at least in that area anyway.
Re: (Score:2)
The sooner the AI fad is gone the better.
No thanks. There are countless actual AI uses beyond bullshit language generators. The systems in place for image manipulation are amazing. The AI "fad" won't be over because there are actual meaningful things we can do with it, just like the electric screwdriver wasn't a "fad".
Re: (Score:2)
Indeed. I am always amazed when someone mistakes an AI image for reality despite missing/extra fingers and limbs, body parts melting together, and generally insane interpretations of reality. Very meaningful. Very demure.
Re: (Score:2)
Re: Soon I hope... (Score:2)
I personally believe that Moore's law has already done its thing and the neural networks we have today are bigger and faster than the human brain. So, there is something else we're missing (no, it's not "soul"; I personally don't think such a thing exists). LLMs won't get smarter by acquiring more and faster silicon. The "AI" industry is not on the right path towards AGI. We need to pause, sit down, and rethink the science behind it.
Re: (Score:2)
No need to worry about SkyNet (Score:1)
Perplexity.ai drafts a similar response based on the SWR article, explicitly naming Microsoft Copilot as the AI system.
The real war will not be between humans and AI at all, but between different AI silos throwing credibility shade at each other!
To defeat AI, all we need do is publish an article saying that AI A has accused AI B of doing a wrong thing, and AI B has accused AI A of doing the same wrong thing. Then every AI that ingests the article will go into an infinite linguistics analysis loop, curving t
Those who don’t learn from history (Score:2)
First it was a whistleblower being accused of the very crimes they exposed: https://yro.slashdot.org/story... [slashdot.org]
Then it was a professor being accused of terrorist activities because they share a name with someone else: https://yro.slashdot.org/story... [slashdot.org]
Now this.
What’s next?
Re: (Score:2)