
UK Police Blame Microsoft Copilot for Intelligence Mistake (theverge.com)

The chief constable of one of Britain's largest police forces has admitted that Microsoft's Copilot AI assistant made a mistake in a football (soccer) intelligence report. From a report: The report, which led to Israeli football fans being banned from a match last year, included a nonexistent match between West Ham and Maccabi Tel Aviv.

Copilot hallucinated the game and West Midlands Police included the error in its intelligence report without fact checking it. "On Friday afternoon I became aware that the erroneous result concerning the West Ham v Maccabi Tel Aviv match arose as result of a use of Microsoft Co Pilot [sic]," says Craig Guildford, chief constable of West Midlands Police, in a letter to the Home Affairs Committee earlier this week. Guildford previously denied in December that the West Midlands Police had used AI to prepare the report, blaming "social media scraping" for the error.



  • by Rosco P. Coltrane ( 209368 ) on Wednesday January 14, 2026 @11:22AM (#65923776)

    AI told the fuzz BS, the fuzz swallowed it and didn't double-check it.

    I call that a police mistake.

    • It is a police mistake. All forensic databases and techniques are fallible, and the police are responsible for using them responsibly.

The Verge's headline says the police are "blaming" Microsoft Copilot, but I don't see anything in the article to support that. I would say the police are "admitting" that, despite earlier statements to the contrary, they used Copilot and failed to verify its output.

      • Re: (Score:1, Flamebait)

        by Viol8 ( 599362 )

This was no mistake. WMP came under pressure from Muslim groups in the area to ban the Israelis, so it went looking for a reason to do so by fabricating data. Copilot did NOT fabricate the supposed internal reports from the Dutch police; that was all WMP's own in-house work.

    • by Anonymous Coward
      You call police error a mistake, I call it standard operating procedure.
      • What I find annoying is that ignorance of the law is considered a valid excuse for mistakes made by police officers... but not for the people they are charging with crimes.
    • What, just because they delegated responsibility to an unreliable tool and didn't verify the results?

      Well, ok, that sounds about right.

    • by gweihir ( 88907 )

The full responsibility rests with the person not fact-checking, or not doing it competently. We may need some court decisions to make that clear, though.

Yeah. I discovered Copilot's Amazing capabilities this week when I tried to get it to convert a .pdf into a .png. Microsoft's illustrious digital assistant didn't have a clue. Ultimately, it was Gemini that helped me out.

I had no idea that Copilot could do things beyond finding folders and files on my computer, something that Windows Explorer's search could already do.

    • Exactly. AI is fine to reduce workload, but one has to proof-read and validate everything it outputs, because it DOES hallucinate!
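The validation the comment calls for can be mechanical, not just proof-reading. A minimal sketch (the fixture list, team names, and `verify_fixture` helper are illustrative assumptions, not anything from the article): check an LLM-claimed fact against an authoritative source before it goes anywhere near a report.

```python
# Hypothetical sketch: never let an LLM-asserted "fact" into a report
# without checking it against an authoritative data source first.
# The fixture data below is made up for illustration.

def verify_fixture(claimed: tuple[str, str], fixtures: set[tuple[str, str]]) -> bool:
    """Return True only if the claimed (home, away) pairing appears
    in the authoritative fixture list."""
    return claimed in fixtures

# Authoritative data (in practice: the league's official fixture feed).
official_fixtures = {
    ("West Ham", "Arsenal"),
    ("Chelsea", "Everton"),
}

# What the LLM asserted.
llm_claim = ("West Ham", "Maccabi Tel Aviv")

if not verify_fixture(llm_claim, official_fixtures):
    print("REJECT: no such fixture in the official list")
```

The point is the shape of the workflow: the model's output is treated as an unverified claim, and only claims that survive a lookup against ground truth are used.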
  • by bugs2squash ( 1132591 ) on Wednesday January 14, 2026 @11:26AM (#65923784)
    I miss the good old days when the police would fabricate evidence by hand
    • by Sloppy ( 14984 ) on Wednesday January 14, 2026 @11:49AM (#65923850) Homepage Journal

Why waste taxpayer money on manual fabrication, when a machine can do it much more cheaply? Your crooked cops belong in the dustbin of history, alongside all the buggy whips.

      BTW, wait until you see the next Robocop movie! People liked his use of firepower in the first 3 movies, but his real talent will be in filling out dozens of plausible-sounding arrest reports per second.

Why deny using AI when our management tells us we have to either use it for everything or become obsolete? Everyone should admit to using LLMs; they told us to. Here's the result: we get the wrong information and can't tell the difference sometimes.

Same old story: law enforcement gets frothed up and overreacts. At least folks weren't locked up or murdered at their hands. And who's going to be held accountable? Who's going to step down, or at the very least be banned from something they love? Nope. None of that will happen. And worse, they'll just do it again, because why not?

    ACAB

  • Quit it (Score:4, Insightful)

    by stealth_finger ( 1809752 ) on Wednesday January 14, 2026 @11:35AM (#65923808)
    Quit offloading your work to a fucking glorified chatbot. Jesus fucking Christ people.
    • Re:Quit it (Score:5, Informative)

      by nightflameauto ( 6607976 ) on Wednesday January 14, 2026 @03:16PM (#65924414)

      The mantra from the management is, "Use AI every day or you will be run over by it. Or outright replaced by it." This shit is pervasive, and perverse. Especially since it's so completely idiotic with its responses such a large percentage of the time. But there's big money involved, and by the gods of greed we *MUST* obey the big money. Because if they weren't meant to be gods, they wouldn't have the big money. Right?

      We're gonna stupid our whole species into oblivion over this shit. And the last few humans will be, "But we did what we were supposed to. The AI told us so!"

  • by Sloppy ( 14984 ) on Wednesday January 14, 2026 @11:40AM (#65923822) Homepage Journal

    Copilot hallucinated the game

No, Copilot correctly figured out a reasonably believable completion of its auto-completion prompt. This is a success story.

    That some people thought Copilot was stating a fact, is evidence in favor of Copilot having done exactly the right thing -- but also shows that those people don't know what LLMs do and what they are for.

    The big question is: why are the police putting the output of these super-cool toys into intelligence reports? Intelligence reports should be based on real things, not interpolations of whatever random internet text some LLM happened to be trained on.

    In other words: cops, you're holding it wrong. You shouldn't be having Copilot write or modify your reports, unless you're writing those reports for the novel, video game, etc that you're releasing once you retire from the police force. That is how to use this tech.

"Intelligence reports should be based on real things" - No, they don't have to be. However, they should be based on "trust but verify" from at least two independent sources.
  • by RogueWarrior65 ( 678876 ) on Wednesday January 14, 2026 @11:43AM (#65923834)

    The story says that the AI hallucinated a game involving Israel. But the question isn't about the game nor that nobody fact-checked it but rather why it was such a big deal. What were they worried about?

    • Re: (Score:3, Interesting)

      by radarskiy ( 2874255 )

a. The report was used to ban people from attending another soccer game, despite the fact that it included things that were not real.
      b. The chief constable then lied about how the report was made.
      c. The affected people are Israeli fans of an Israeli team.

      The thing is, there's plenty of real evidence of fans of Maccabi Tel Aviv engaging in violent behavior and then crying anti-semitism when someone has the temerity to enforce the actual law against them. No one had to make things up.

    • by AmiMoJo ( 196126 )

Those "fans" have a history of rioting, including not that long ago in Europe. Their modus operandi is to start a fight, usually with Muslims, of whom there are many in Birmingham, and then claim to be the victims. You know, the standard Israeli tactic.

  • by Anonymous Coward

    I blame the UK police for using unproven alpha test technology in production.

It was quickly obvious it would be completely useless. Gemini kept hallucinating the final results of the game (including the final score, along with all the plays that led up to it) halfway through the third quarter. When I called it out on it, it would apologize profusely, promise to 'sanity check' future results against current live data... and then do the same thing again, over and over. Which was fine, I understood what was happening, but the thing is that when it would hallucinate the end
    • You asked about things you knew and immediately fact checked the answer. If more people did this, the painful limitations of AI would be more widely known.
...FifeGPT? We want old-fashioned human foul-ups, like God intended!

  • . . . then that’s pretty strong evidence the conversation has deeply drifted into high-entropy RLHF (reinforcement learning from human feedback) mode. The LLM’s goal has shifted from objectivity to humoring the human.

    In that state, also expect heavy both-siding, cherry-picking, strawmanning, overgeneralizing, over-constraining, demonizing, etc: whatever it takes to cling to RLHF’s “mainstream” orthodoxy while dodging causal empirical reality.

    In this case, it seems likely someon

    • Re: (Score:1, Informative)

      This is a backfilled excuse. The head of the police in that jurisdiction was hand-picked by local extremists and the police had overwhelming hard evidence of credible plans for violence if Jewish fans weren't banned. We're talking about the same country that dropped the charges against members of a parade of cars who drove around screaming for Jews to be raped and murdered [i24news.tv] but will arrest thousands of UK citizens for tweets and thought crimes. They're inventing excuses to try and make this look like incompe

As far as I can tell, you're essentially echoing my point without necessarily agreeing with it. In this case, the LLM "backfilled" to align with the interlocutor's illogic. I am not intending to "excuse" the LLM, nor the police in this case.

        • Not the LLM, but the police themselves. Think of it like parallel construction. More and more evidence is coming out that the police acted out of pure malice and deliberately lied from the start. Rather than get caught with that they're now trying to retroactively cook up an excuse to claim that it wasn't deliberate malice and lies but rather idiocy and incompetence. They used the LLM to retroactively invent a cover story that makes them look stupid rather than evil.

  • by rsilvergun ( 571051 ) on Wednesday January 14, 2026 @12:33PM (#65923952)
    Cops have it, they (usually) buy a 3rd party service so they don't need a warrant. Here's a video on the topic: https://www.youtube.com/watch?... [youtube.com]

    It's classic broken windows policing only applied at an individual level thanks to computers and social media/phone tracking. Exactly like what China does with their social credit score but right now limited to the police.
  • by PPH ( 736903 ) on Wednesday January 14, 2026 @12:49PM (#65923982)

    ... would have been to see who lost money on the match in the department football pool.

  • by Bruce66423 ( 1678196 ) on Wednesday January 14, 2026 @02:16PM (#65924220)

    'Was AI involved in the preparation of this case?'

If yes, then a short diatribe about how unreliable AI is should sow significant doubt.

If no, drill down into what software the force uses; demonstrate that there is an AI element in the software and ask for dismissal because the witness has lied.

If 'I don't know', then enjoy casting doubt on the data provided.

  • UK cops being a literal meme....again? I can't even blame MS on this one.
  • by VampireByte ( 447578 ) on Wednesday January 14, 2026 @03:05PM (#65924390)

The soccer site in The IT Crowd was better than Copilot: https://www.youtube.com/watch?... [youtube.com]

Do not get distracted; keep in mind that the same people who play victim and call you antisemitic to hide their multi-decade violations of international law are perfectly comfortable with the "folklore" that created them [royanews.tv].
  • Always good to see lazy morons outed for being idiots.

  • "A good carpenter never blames his tools."

    The officer's failure to review/check whatever their AI tool produced is the issue. AI didn't sign the report, an officer did.

This morning a colleague who is always on the edge of new tech (and clearly ADHD and Asperger's, with an unhealthy dose of paranoia) came to me in a panic that, as of yesterday, Russian ships were dropping nuclear torpedoes in the Atlantic/Arctic or some such. He was using Gemini verbally, and asked it all kinds of pointed questions, every time asking for confirmation links, sat images and more. It all sounded *very* convincing. But I'm the skeptical kind and started asking how often the satellite/sonar data was updated (and more), and eventually whether it *really* had access to that data. And after 30 minutes of probing it finally admitted that it was "all made up, sorry, got carried away".

    That this 'thing' is considered ready for prime time and release to the general public is absolutely insane. AI is gonna trigger WWIII, not by gaining awareness, but by driving humans insane.

Why would they allow Copilot to be enabled on police computer equipment?
Surely this has the potential to compromise confidential and sensitive investigations.
They are literally allowing private and sensitive information to go overseas. Are they using a cloud network in another country as well?

    If they want to use an AI they need to host it internally, so they can control and vet it.
    They need to log all requests and answers. As well as limiting Internet access via AI to specific requests which also need to b
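The "log all requests and answers" idea above can be sketched in a few lines. This is a hypothetical illustration only: the `query_internal_model` function is a stand-in for a call to a self-hosted model endpoint, and the in-memory list stands in for what would really be an append-only audit store.

```python
# Hypothetical sketch of an audit-logged wrapper around an internally
# hosted model: every prompt and answer is recorded for later review.
import datetime

AUDIT_LOG = []  # in practice: an append-only store, not a Python list

def query_internal_model(prompt: str) -> str:
    # Stand-in for a request to a self-hosted model endpoint.
    answer = f"[model answer to: {prompt}]"
    # Record the full exchange with a UTC timestamp before returning.
    AUDIT_LOG.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt": prompt,
        "answer": answer,
    })
    return answer

query_internal_model("List fixtures involving Maccabi Tel Aviv")
print(f"{len(AUDIT_LOG)} exchange(s) logged")
```

With every exchange captured this way, a later inquiry (like the one described in the article) could establish exactly what was asked and what the model answered, instead of relying on after-the-fact recollection.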

