AI News

AI Assistants Misrepresent News Content 45% of the Time (bbc.co.uk) 112

An anonymous reader quotes a report from the BBC: New research coordinated by the European Broadcasting Union (EBU) and led by the BBC has found that AI assistants -- already a daily information gateway for millions of people -- routinely misrepresent news content no matter which language, territory, or AI platform is tested. The intensive international study of unprecedented scope and scale was launched at the EBU News Assembly, in Naples. Involving 22 public service media (PSM) organizations in 18 countries working in 14 languages, it identified multiple systemic issues across four leading AI tools. Professional journalists from participating PSM evaluated more than 3,000 responses from ChatGPT, Copilot, Gemini, and Perplexity against key criteria, including accuracy, sourcing, distinguishing opinion from fact, and providing context.

Key findings:
- 45% of all AI answers had at least one significant issue.
- 31% of responses showed serious sourcing problems - missing, misleading, or incorrect attributions.
- 20% contained major accuracy issues, including hallucinated details and outdated information.
- Gemini performed worst, with significant issues in 76% of responses, more than double the rate of the other assistants, largely due to its poor sourcing performance.
- A comparison between the BBC's results from earlier this year and this study shows some improvements, but error levels remain high.
The team has released a News Integrity in AI Assistants Toolkit to help develop solutions to these problems and boost users' media literacy. They're also urging regulators to enforce laws on information integrity and continue independent monitoring of AI assistants.

Comments Filter:
  • The average human fouls it up 46%.

    • My subjective experience from a fairly long sample timescale (I'm pretty old) is that 80% of people are either dumb, or don't use their brains and are therefore effectively wrong 80% of the time.

      Thinking is actually hard work!

    • If we're being honest, looking into each statement we make with fine granularity, how accurate are we really? I think I might be 15% completely accurate. But then it's on to whether the correct answer is "half empty" or "half full", which depends upon how you look at it (bias). I asked some LLM all sorts of questions about poison pill updates versus compatibility management (I think, maybe it was some other weasel-word phrase that means they'll yank functionality on a whim to force you to new products), and the AI se
    • by ukoda ( 537183 )
      Bogus number aside, on an individual basis that may actually be somewhat true, but reputable news outlets have other humans checking stuff, and if they are doing their jobs right the foul-ups are fairly rare.

      The big problem with current AI is that there are no guard rails or checks, as there is no magic fix for its limitations. With the money to be made from AI slop, there's no fix in sight.
  • by parityshrimp ( 6342140 ) on Wednesday October 22, 2025 @07:03PM (#65744376)

    AI: It's a dangerous way of not thinking.

    • To be fair, we basically outsource everything, so why not the news and truth? 1984 (the book) was so prescient.
  • by quenda ( 644621 ) on Wednesday October 22, 2025 @07:24PM (#65744396)

    They should get AI to write the Slashdot summaries.

    It seems like every criticism I hear about AI could also be applied to humans. Sometimes more so.
    AI confidently gives an answer when it doesn't know? check!
    Lack of transparency for the process of coming to a conclusion? check!
    Rationalisation - explaining the reasoning for a conclusion retrospectively. check!

    AI output is only bad if you go in expecting it to be perfect and don't check the results.
    These are amazing tools when used correctly. Complaining about AI errors is like if someone showed you a talking dog, and you found fault with its grammar.

    • I was thinking the same. This sounds exactly like a person. Honestly, I'm not sure why anyone is surprised. It's pretty clear at this point that the flaws, aka hallucinations, in the outputs are not mistakes but the normal functioning of the model.
    • I like to say that AI is like a friend who doesn't know anything about the issues, who just happens to have read a set of random news articles over the last six months and tries his best to regurgitate what he heard in an authoritative tone. They don't understand context, timelines, etc. Just that it sounds kind of right.
      • by quenda ( 644621 )

        I like to say ... They don't understand context, timelines, etc. Just that it sounds kind of right.

        I feel you have not given much thought to what "understanding" is.
        People often say such things as you have, based on an intuitive understanding that does not stand up to scrutiny. "It just feels kind of right".

        I could ask an AI what "understanding" is, and get a better answer than from 98% of people, but of course that is the type of answer that can come from regurgitated reading. The real proof is when you go into the details with more complex iterative queries, and see whether the AI understands your question.

    • by ukoda ( 537183 )

      AI output is only bad if you go in expecting it to be perfect and don't check the results.

      And there lies the core problem with current AI. Not that it has limitations, but that the bulk of users simply trust it. "Checking the results" would require effort and thinking, exactly the things AI claims to save you from having to do.

  • by account_deleted ( 4530225 ) on Wednesday October 22, 2025 @07:58PM (#65744452)
    Comment removed based on user account deletion
    • Comment removed based on user account deletion
      • Re:Not just AI (Score:5, Insightful)

        by gweihir ( 88907 ) on Wednesday October 22, 2025 @08:05PM (#65744470)

        The difference is that news media skew intentionally, while "AI" is simply dumb.

      • Re: (Score:3, Insightful)

        by ndsurvivor ( 891239 )
        I prefer the news that is broadcast and regulated by the pre-Trumpian FCC. If they told provable, blatant lies, they would be fined large amounts of money. Unfortunately, outlets like FOX can tell provable, blatant lies on cable, which has become popular with some groups. If you compare broadcast FOX news to cable FOX news, there is a large difference. I think that if CNN were broadcast, there wouldn't be any fines.
        • by Anonymous Coward

          Unfortunately, outlets like FOX can tell provable, blatant lies on cable, which has become popular with some groups.

          so the lies from CNN / NBC / ABC / MSNBC are OK... but how dare FOX lie...

        • Congratulations, you just discovered that local news broadcasts are less biased than national news. Local NBC broadcasts are more accurate and less biased than MSNBC. Local CBS and NBC news broadcasts are more accurate and less biased than their national nightly news shows.
      • by ukoda ( 537183 )
        I think you have a quality vs. quantity issue at play. The problem is there are only a small number of quality news media outlets, such as the BBC, but in this era of social media the reach of the BBC is the same as that of the guy in the office with weird theories whom you avoid talking to for sanity reasons.
  • Totally usable and helpful, right?

  • The details matter.

    Were they asking:
    - What is today's most important news?
    - What is news from my country?

    Or were the questions more specific, like:
    - What caused the AWS outage Monday?
    - Whatever happened to the couple caught on the jumbotron at the Coldplay concert?

    I would expect AI to do much better with the latter than the former.

    • by allo ( 1728082 )

      It all depends on a reasonable system supporting the AI. Feed the content of a few RSS feeds into the context, give it a tool to fetch the pages, and it will provide a good overview of the daily news. For the specific questions you would most likely want a more agentic system. If the RSS feed has some article about AWS, that's fine, but if the article doesn't go into detail and is not written for experts, you get the best result if the AI is able to "google" for more technical information.
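
      A minimal sketch of that kind of setup in Python, using the real feedparser and requests libraries; the feed URL is hypothetical and ask_llm() stands in for whatever model API you use:

          # Sketch: ground the model on recent RSS items, with a page-fetch tool
          # for follow-up detail. ask_llm() is a hypothetical model call.
          import feedparser
          import requests

          def build_news_context(feed_urls, max_items=10):
              """Collect recent headlines and summaries from a few RSS feeds."""
              items = []
              for url in feed_urls:
                  feed = feedparser.parse(url)
                  for entry in feed.entries[:max_items]:
                      items.append("- %s: %s" % (entry.title, entry.get("summary", "")[:300]))
              return "\n".join(items)

          def fetch_page(url):
              """Tool the model can call when an RSS summary lacks detail."""
              return requests.get(url, timeout=10).text

          context = build_news_context(["https://example.org/news.rss"])  # hypothetical feed
          prompt = ("Using ONLY the articles below, summarize today's news and "
                    "cite the headline each claim came from.\n\n" + context)
          # answer = ask_llm(prompt)  # hypothetical model call, fetch_page as its tool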

  • So of the 45% that had problems:
    - 31% had attribution errors. Yeah, we know, AI is terrible at attributions.
    - 20% had accuracy issues, including outdated information and hallucinated details. The proportion of these two types of errors is important. "Outdated information" is everywhere on the internet, AI or not. I wouldn't blame AI for that problem. Hallucinated details are a lot worse. What portion of the 20% was hallucinated? I'd say that something less than 20% having hallucinated details isn't as bad a

    • The AI vendors choose how to train their AI; they don't have to simply let it loose on the Internet to learn without guidance. It is absolutely on them if the information it learns from is out of date. I always look for the date on a news story when it matters; there is no reason why AI cannot do the same.
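
      Filtering by date is cheap to do before anything reaches the model. A sketch, assuming feedparser-style entries that carry a published_parsed timestamp (the feed URL is hypothetical):

          # Sketch: drop stale articles before they ever reach the model's context.
          import time
          import feedparser

          MAX_AGE_DAYS = 7

          def is_fresh(entry, now=None):
              """Keep only articles published within the last MAX_AGE_DAYS."""
              now = now or time.time()
              published = entry.get("published_parsed")
              if not published:
                  return False  # no date: treat as unverifiable, not as current
              age_days = (now - time.mktime(published)) / 86400
              return age_days <= MAX_AGE_DAYS

          feed = feedparser.parse("https://example.org/news.rss")  # hypothetical feed
          fresh_entries = [e for e in feed.entries if is_fresh(e)]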
  • This is pretty amazing.

    How does this compare to human readers?

    Are the problems with AI or are the source articles the issue?
  • by dohzer ( 867770 ) on Wednesday October 22, 2025 @11:37PM (#65744694)

    That's why I used Ground News to determine which news sources are unbiased. LOL. JK.

    • And now a word from our sponsor!

      That's why I used Ground News to determine which news sources are unbiased. LOL. JK.

      *returns to footage of a 16-year-old YouTube influencer attempting the "light your own face on fire with a blowtorch" challenge*

      Don't forget to like and subscribe!

  • Over here in the Netherlands we have parliamentary elections next week.
    Because of the mess politics made during the past two governments, these are very significant elections.
    Whatever the outcome, the next government will need a coalition of four to five parties for a majority in parliament.
    A lot of people (the dumb half) are not sure who to vote for, so they ask ChatGPT. The sad thing is this system seems to know only two parties, while there are twenty-five on the ballot!
  • "Then toss a coin to see how accurate it will be."

    This is why I will not touch this stuff. At all.

  • Only 45%? Journlolists are obsolete, then.

  • I came to a similar conclusion about a year ago. I have an app that, among other things, lists news headlines for local communities. Some news sources provide a short summary of the article as well, but many do not. If no summary is provided then I'm relegated to using the first sentence or so from an article.

    I'd hoped to use AI to generate that summary when given the body of the article, but no matter how I prompted, it would fabricate "facts" into the summary far too often for me to actually feel comfortable.
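
    One way to make that safer, as a sketch: ask the model for an extractive summary (verbatim sentences only), then check that each returned sentence actually appears in the article, so fabricated details get dropped. ask_llm() is a hypothetical stand-in for whatever model API the app uses:

        # Sketch: extractive summarization with a verbatim check, so the model
        # cannot smuggle fabricated "facts" into the summary.
        def extractive_summary(article_text, ask_llm, max_sentences=2):
            prompt = ("Copy, word for word, the %d sentences from this article "
                      "that best summarize it. Do not paraphrase or add "
                      "anything.\n\n%s" % (max_sentences, article_text))
            candidate = ask_llm(prompt)  # hypothetical model call
            # Reject anything not literally present in the source text.
            kept = [s.strip() for s in candidate.split(". ")
                    if s.strip() and s.strip() in article_text]
            # Fall back to the article's first sentence if nothing survives.
            return ". ".join(kept) if kept else article_text.split(". ")[0]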

  • LLMs are trained on word associations from past events. When given new information, they intermingle past word chains with current word chains. That is the expected behavior.

  • by argStyopa ( 232550 ) on Thursday October 23, 2025 @11:05AM (#65745604) Journal

    In my experience, in fields I know quite well, human reporters and news agencies get it all or partially wrong probably 80% of the time.

    Don't get me wrong, I'm not defending AI in the slightest, just that in 2025 pretty much all news sources are mostly shit.

  • 25% of news was misrepresented by repeating Democrat talking points.
    25% of news was misrepresented by repeating Republican talking points.
