
AI Is Intensifying a 'Collapse' of Trust Online, Experts Say (nbcnews.com)

Experts interviewed by NBC News warn that the rapid spread of AI-generated images and videos is accelerating an online trust breakdown, especially during fast-moving news events where context is scarce. From the report: President Donald Trump's Venezuela operation almost immediately spurred the spread of AI-generated images, old videos and altered photos across social media. On Wednesday, after an Immigration and Customs Enforcement officer fatally shot a woman in her car, many online circulated a fake, most likely AI-edited image of the scene that appears to be based on real video. Others used AI in attempts to digitally remove the mask of the ICE officer who shot her.

The confusion around AI content comes as many social media platforms, which pay creators for engagement, have given users incentives to recycle old photos and videos to ramp up emotion around viral news moments. The amalgam of misinformation, experts say, is creating a heightened erosion of trust online -- especially when it mixes with authentic evidence. "As we start to worry about AI, it will likely, at least in the short term, undermine our trust default -- that is, that we believe communication until we have some reason to disbelieve," said Jeff Hancock, founding director of the Stanford Social Media Lab. "That's going to be the big challenge, is that for a while people are really going to not trust things they see in digital spaces."

Though AI is the latest technology to spark concern about surging misinformation, similar trust breakdowns have cycled through history, from election misinformation in 2016 to the mass production of propaganda after the printing press was invented in the 1400s. Before AI, there was Photoshop, and before Photoshop, there were analog image manipulation techniques. Fast-moving news events are where manipulated media have the biggest effect, because they fill in for the broad lack of information, Hancock said.
"In terms of just looking at an image or a video, it will essentially become impossible to detect if it's fake. I think that we're getting close to that point, if we're not already there," said Hancock. "The old sort of AI literacy ideas of 'let's just look at the number of fingers' and things like that are likely to go away."

Renee Hobbs, a professor of communication studies at the University of Rhode Island, added: "If constant doubt and anxiety about what to trust is the norm, then actually, disengagement is a logical response. It's a coping mechanism. And then when people stop caring about whether something's true or not, then the danger is not just deception, but actually it's worse than that. It's the whole collapse of even being motivated to seek truth."
This discussion has been archived. No new comments can be posted.

  • Country of Origin (Score:5, Interesting)

    by NaCh0 ( 6124 ) on Friday January 09, 2026 @10:09PM (#65913990) Homepage

    Other social media platforms should follow X's lead and show account country of origin for users.

    Many of the accounts stoking engagement aren't from the country where the event took place. Viewers of the post should know this.

undermine our trust default -- that is, that we believe communication until we have some reason to disbelieve," said Jeff Hancock, founding director of the Stanford Social Media Lab. "That's going to be the big challenge, is that for a while people are really going to not trust things they see in digital spaces.

    I wish!
    • Somebody's reading my blog :)
      I wrote this three weeks ago:

      "AI is probably accelerating the "Law of Demeter" effect on the Internet and information"

      https://www.scry.llc/2025/12/2... [scry.llc]

      Distance + diversity + time = declining trust

      The majority model now is devolution back to physical validation. It's too easy to spoof anything online, which I first predicted at DEFCON in 2006 regarding click fraud.

  • Good. (Score:5, Insightful)

    by msauve ( 701917 ) on Friday January 09, 2026 @10:25PM (#65914012)
    There should be an erosion of trust online, because it shouldn't have been trusted in the first place. Trust is earned, not "I saw it on the Internet." And generally, skepticism and critical thinking skills have been pretty much absent for a long time.
    • Thank you, I was thinking just this as I was scrolling. People should be suspicious unless they know the reputation of the source.
    • There should be an erosion of trust online

      You know what, before I might have gone with this but more recently, I'm not so sure. I mean, you could be some bot promoting distrust online!
      Therefore, I reject your mandate of distrust and will now trust all online advice!
      Hence, I trust you and will be distrusting of people online.
      As a result, you can't be trusted.
      Thusly, I trust you,
      Ergo,
      Kernel panic – not syncing: Attempted to kill init!

    • by mjwx ( 966435 )

      There should be an erosion of trust online, because it shouldn't have been trusted in the first place. Trust is earned, it's not "I saw it on the Internet." And generally, skepticism and critical thinking skills have been pretty much absent for a long time.

      The problem we're encountering now is that lies have become so common and so repeated that those who've been indoctrinated by them will now ignore the evidence of their own eyes and ears. This started before the internet became the standard for communication, in particular with channels like Fox News. You've now got people who've spent 30 years living in a bubble of their own reality, constantly rejecting any form of information they don't agree with and reinforced by the likes of Fox to create a near impen

  • Oh please (Score:5, Insightful)

    by quintessencesluglord ( 652360 ) on Friday January 09, 2026 @10:29PM (#65914016)

    Distrust has been growing exponentially prior to the rise of LLMs, and video is little more than an added piece of propaganda people must wade through.

    This is just a symptom of a larger problem of the decay of institutions, news in service of money, and people too willing to believe anything that fits their biases.

    Algorithms certainly don't help, but I'll be damned if anyone really wants to fix anything beyond if it helps their side.

    Fuck 'em.

    • by Bert64 ( 520050 )

      Fake/modified video is also not something new. AI has just made it all more accessible.
      It's always been possible to create fake videos if you have sufficient resources - what do you think movies are?

      • Prime example: the moon landing.

        /jk (oh what has the world come to that I feel the need to include that disclaimer...!)

    • ... and video is little more than an added piece of propaganda people must wade through. ....

      Yeah. In general, if it's not in text, I presume it's bullshit and don't bother following links/recommendations to video. The only exception is when a trusted source posts a link to a video on a topic I would actually like to see rather than just read. Otherwise, if you can't be bothered writing it down, I can't be bothered watching. YMMV, of course. A lot of people would rather watch videos than read.

  • It won't just be trust AI collapses...
  • by MpVpRb ( 1423381 ) on Friday January 09, 2026 @11:23PM (#65914084)

    Trusting social media posts without verification is bad
    Ideally, as people get skeptical, they will use multiple, trusted sources and common sense
    Unfortunately, what often happens is that people reject everything except for the most crazy and fringe

    • by martin-boundary ( 547041 ) on Friday January 09, 2026 @11:42PM (#65914116)
      We do this in the real world: we trust our close friends, and we trust people who are trusted by our close friends less. We also adjust trust up and down on the people we trust, over time, when we can verify their claims.

      The idea of trusting a *blog post* or a document intrinsically just because it exists, purely by looking at the contents and not the author, is not generally how people will operate. It's purely a scientific ideal, to treat arguments and ideas without regard to the person who brings them.

      So this is what will happen more and more: members-only, closed communication among people who know each other in real life, or who have been introduced to each other by mutual friends. Summary rejection of people, letters, news, and photographs that don't have a direct physical connection to one's reality. But a photograph actually taken by a friend, who shows it to you on their phone in person, will be accepted.

      It will be a change from the 20th century mass communication revolution and the early 21st century online communication revolution, back to the earlier human principle of village and clan oriented world-views and direct physical communication in a room.
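      The mechanism this comment describes — high trust for direct friends, less for friends-of-friends, adjusted up or down as claims are verified — can be sketched as a tiny web-of-trust model. This is purely an illustration of the commenter's idea, not any real system; all names, numbers, and decay factors below are made up.

      ```python
      # Toy web-of-trust sketch: trust falls off with social distance from
      # "me", and verifying a person's claims nudges their score up or down.
      from collections import deque

      def trust_by_distance(edges, me, base=0.9, decay=0.5):
          """Breadth-first walk from `me`: direct friends get `base` trust,
          and each further hop multiplies trust by `decay`."""
          trust = {me: 1.0}
          queue = deque([me])
          while queue:
              person = queue.popleft()
              for friend in edges.get(person, []):
                  if friend not in trust:
                      factor = base if person == me else decay
                      trust[friend] = trust[person] * factor
                      queue.append(friend)
          return trust

      def adjust(trust, person, claim_was_true, step=0.1):
          """Nudge trust after verifying one of `person`'s claims,
          clamped to the range [0, 1]."""
          t = trust.get(person, 0.0)
          t = min(1.0, t + step) if claim_was_true else max(0.0, t - step)
          trust[person] = t
          return t

      # Hypothetical graph: I know alice and bob; alice knows carol.
      edges = {"me": ["alice", "bob"], "alice": ["carol"]}
      trust = trust_by_distance(edges, "me")
      # carol, two hops away, starts with less trust than alice or bob;
      # a verified claim from her raises her score slightly.
      adjust(trust, "carol", claim_was_true=True)
      ```

      The same "distance discounts trust, verification updates it" shape underlies older systems like the PGP web of trust, which is roughly what a retreat to friend-of-a-friend validation would recreate.
      
      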

  • I do wonder whether this might actually be by design.

    Perhaps the research and development of all of this was partly driven so that absolutely anything at all can be disregarded as fake, in the end.

  • Good. (Score:4, Insightful)

    by Bert64 ( 520050 ) <`bert' `at' `slashdot.firenzee.com'> on Saturday January 10, 2026 @02:12AM (#65914234) Homepage

    People should not blindly trust what they read online (or anywhere else for that matter). They need to learn to question what they read, and check via multiple sources.

    An educated and questioning population is the only real solution to any of the problems caused by misinformation and propaganda.

  • Who in the god damned hell trusted random shit they saw online in the first place? I could be a Bolivian pig trained to type out internet comments for food, you don't know.
  • by Bobknobber ( 10314401 ) on Saturday January 10, 2026 @03:03AM (#65914278)

    The reality is that there are very few incentives for fixing these issues, especially since those who could were the ones who created these problems.

    Social media is not in the business of cracking down on misinformation any more than your average MSM outlet is. If anything, there is more money to be had spreading fake news, since it attracts more engagement and plays into whatever political power games are going on in the shadows.

    Ultimately, the Information Age was effectively doomed to fail from the get-go. There are very few scenarios that do not involve either mass censorship or regressing back to pre-mass media forms of information and communication.

  • There was never real trust — at least not for people under 30.
    Maybe we older adults fell for it occasionally, but my kids never assumed something was true just because it appeared online (or on TV, or similar).

    • My experience of people under 30 is that they know little beyond what they are told, and that the internet age has exacerbated the problem as they are constantly told bullshit. This includes graduates from top universities I've spoken with. Those universities, like everywhere else, are just echo chambers these days.

    • Your experience of under-30s is fundamentally different from mine. If anything, what I see is that they generally trust almost any shit.
  • by bloodhawk ( 813939 ) on Saturday January 10, 2026 @05:05AM (#65914410)
    Sounds extremely positive. The shit people trust online is appalling; if AI is making them think and not automatically trust, that is a huge win for society.
  • by NotEmmanuelGoldstein ( 6423622 ) on Saturday January 10, 2026 @05:42AM (#65914440)
    "Don't believe everything you read" has been advice for centuries: Monks in monasteries frequently wrote about the world and nature they had never seen.

    Faked photographs are almost as old as photography. The same goes for moving pictures. Until recently, the tools to make fake video [youtube.com] required PhD-level skill in manipulating images. Now that video can be faked by anyone, the problem is that the majority of faked videos are propaganda and misinformation.

  • Paywalls on news websites eroded my trust way more than AI slop did.

    • Paywalls are the problem? I remember when I had to pay for my newspapers and magazines.

      The real issue is that journalism is dead. The internet killed it.

        More specifically, clickbait killed journalism. I once had dinner, a decade or so back, with the editor of a major media org; after a few drinks I put it to him the damage that clickbait and poor journalism were doing. His response was that he hates it, but they are a business and hence have to serve up what the audience demands, otherwise he has to sack people due to the decline in readers. Clickbait and poor-quality articles exist precisely because that is all that gets eyes on articles and hence pays for pe
        • News has always been about getting people to read it, even when there were laws about public responsibility. Laws the Repubs killed.

          Yes, however the way the internet works changed the way they publish: they intentionally remove details from summaries and use clickbait headlines to ensure people cannot fully understand what the story is about before reading. This moved the editing focus from the quality of the story to ensuring those first few paragraphs and the headline are built for clicks; the rest of the story no longer matters to the editors.
    • I'm also guessing you despise advertising...

    • Why?

      Serious question: paywalls are the only form of funding a news organization can get that makes them beholden to their readers, not corporate or political interests.

      Why are you against them?

  • Our current "collapse of trust" in online and other forms of media is only tangentially intensified by AI. If I had to pick out one dominant factor, it would be the charged political and ideological environment that has emerged over the last three decades but has exponentially grown over the last ten years. Newspapers, TV, online video and articles, books, social media, and even face-to-face interactions have truth-collapsed well before AI. AI is just going along for the ride.
