AI Technology

Reddit Mods Are Fighting To Keep AI Slop Off Subreddits (arstechnica.com) 66

Reddit moderators are struggling to police AI-generated content on the platform, according to Ars Technica, with many expecting the challenge to intensify as the technology becomes more sophisticated. Several popular Reddit communities have implemented outright bans on AI-generated posts, citing concerns over content quality and authenticity.

The moderators of r/AskHistorians, a forum known for expert historical discussion, said that AI content "wastes our time" and could compromise the subreddit's reputation for accurate information. Moderators are currently using third-party AI detection tools, which they describe as unreliable. Many are calling on Reddit to develop its own detection system, the report said.


Comments:
  • They can mop up with AI agents designed to prune shitty content.
    • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Monday February 17, 2025 @01:11PM (#65173527) Homepage Journal

      AI agents can't recognize what's real any more than they can recognize what's false when they're writing it.

      The best you can do is cross-check with multiple agents, all of which might give wrong answers. And then you will still need a human to break ties at best, if not to determine if ANY of them are giving sensible answers.

      • by jhoegl ( 638955 )
        Perhaps they should introduce a rating system, one that allows people to upvote content they agree with and downvote content they disagree with.

        This way, even if people who disagree are wrong, those that agree should outweigh them, allowing the true articles and data to rise to the top.

        This is why you sort by controversial instead, because then you get to see the dumb AI bots' input, and hilarity ensues.

        botlaff
        • As if the current slashdot mod system works so well and isn't just an "I like/hate what you're saying" system instead of a truth/quality rating system. Adding that to Reddit won't improve anything.

          Long term AI is going to fuck the entire net on any sites that allow unverified posters and there's nothing to be done about it.

          • Long term AI is going to fuck the entire net on any sites that allow unverified posters and there's nothing to be done about it.

            How long will it be before AIs can become "verified posters"?

            • by G00F ( 241765 )

              AI can apparently pass "are you human" tests better than me....

            • I was thinking in terms of verified by credit card or other ID.

              • Thanks for the clarification. I know I'm risking some silliness here, but do you think companies might find it profitable to take out credit cards for LLM 'entities'? It's getting to the point where the sheep have less and less fleece left to shear, and creating faux people to spam sites like Reddit might be the final frontier.

                I'm not sure about the law around doing that, or how practical it would be. But it seems to me that between market saturation and the growing wealth divide, corporations are facing th

        • by mysidia ( 191772 )

          Perhaps they should introduce a rating system. one that allows people to upvote if they agree

          The AI slop often gets upvoted. Often they will post text that "looks good" on the surface, as far as the average person skimming the thread can tell. The upvotes and downvotes are meaningless unless you have some way of determining who will be more careful and discerning and use the upvote feature responsibly.
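The objection above can be made concrete: if each vote is weighted by how discerning the voter has proven to be, careless upvotes count for less. A minimal sketch of such a reputation-weighted tally follows; the function name, the reputation scores, and the 0.1 default for unknown voters are all invented for illustration.

```python
# Hypothetical sketch: weight each vote by the voter's track record,
# so that careless upvotes count for less than discerning ones.
def weighted_score(votes, reputation, default=0.1):
    """votes: list of (user, +1 or -1); reputation: user -> weight in [0, 1]."""
    return sum(d * reputation.get(user, default) for user, d in votes)

votes = [("alice", +1), ("bob", +1), ("carol", -1)]
reputation = {"alice": 0.9, "bob": 0.2, "carol": 0.8}  # invented discernment scores
score = weighted_score(votes, reputation)  # 0.9 + 0.2 - 0.8, roughly 0.3
```

Under a naive count the post sits at +1; with discernment weights it barely breaks even, which is the behavior the comment is asking for.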

        • This is why you sort by controversial instead, because then you get to see the dumb bot AIs input, and hilarity ensues.

          It's why I put positive weights on troll and flamebait on this site. It's not because those are the posts I want to see (most of the trolls and flamebaiting that I see aren't moderated at all) but because many of the good comments catch negative mods. This feature almost but does not quite make up for the karma kap or the fact that the same people qualified to comment in a discussion are the ones who are qualified to moderate it, but they can only do one thing or the other.

    • No AI in this battle deserves to be rewarded, just keep the junk out please. The signal to noise would just be one^H^H^Hmillions of bots making dumb posts in an attempt to thwart the police bot that has banned everyone including humans that looked like AI spam.

      We are in the internet trash dump era. Good luck getting anything to stay clean.
    • They can mop up with AI agents designed to prune shitty content.

      Define “shitty”.

      It’ll help with figuring out where the dial sits between AI and human.

  • by doomday ( 948793 ) on Monday February 17, 2025 @01:07PM (#65173513)
    As an AI language model, I for one think we should welcome our AI generated text overlords.
  • So - User Generated Slop is OK?

    • by Retired Chemist ( 5039029 ) on Monday February 17, 2025 @01:10PM (#65173523)
      There is probably less of it, since it actually requires work to create it, rather than sticking a few random prompts into an AI model.
    • Yes, "slop" is such a subjective criterion in and of itself that it should be exclusive to human brains. We are more than capable of generating our own inane thoughts; we've been doing it for centuries.

      Same with art. I would take 100 shitty artists pumping out what they can in MS Paint over another piece of prompted AI image generation. The former is actual art; the latter is just meaningless pixels.

      • I would take 100 shitty artists pumping out what they can in MS Paint than another piece of prompted AI image generation.

        Feel free to pay those 100 artists.

        No? You won't and in fact can't do that? Then your opinion is irrelevant and your comment was pure posturing. It has as much value as the memes asking whether people would give up the internet for a year and live in a remote castle for a million dollars. Unless they've got the million dollars and the castle, they need to shut the fuck up. And speaking of which, unless you've got the money for those 100 shitty artists...

        • Is this a real argument? Did I bring up the concept of how the arts get funded? Are we bringing dollars to the philosophical argument over the value of a piece of paper with ink on it or digital pixels on a screen?

          If the only way to fund *the literal expression of the human condition, a thing that does not exist in nature, only in our own evolved human brains* is to build systems that (in my opinion) push against and destroy those human pursuits, well, that's pretty bleak and makes my point. Hell your own co

      • ... I would take 100 shitty artists pumping out what they can in MS Paint than another piece of prompted AI image generation. The former is actual art, the latter is just meaningless pixels

        My knee-jerk response is to agree wholeheartedly with you. Sadly, that thought was followed closely by "beauty is in the eye of the beholder". Hot on its heels came the question that if the people experiencing the art are unable to tell which images are AI-generated, is there any meaningful difference between human art and AI art?

        On a purely emotional level, I hate considering that last question. Intellectual honesty demands that I ask it anyway; not asking it paves the way for magical thinking, which often

        • by allo ( 1728082 )

          Ask the question: Does it matter?

          Also, not all AI art is low effort. People use the term "slop" until it loses all meaning, but there is plenty of AI art that actually deserves to be called art.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      So - User Generated Slop is OK?

      To this current group of mods, yes, both OK and in demand.

      It was the last group of mods that tried their best to get people behind convincing Reddit's CEO not to sell off and monetize users' content.
      Not enough people cared, and those mods got banned and replaced.

      The replacement mods are 100% pro-selling off their work.
      They also only exist because "what reddit wants, reddit gets"
      Reddit is pushing its own "AI Answers" to re-collect data to sell to OpenAI and Google's AI thing.

      The new mods won't stand for any lef

    • Yes. That's what Reddit is all about, isn't it?
      It was already slop before AI.

      But for the sake of argument, take what you said or implied and extend it to a logical conclusion. Would you want to interact with 100% machine generated posts?
      • I think a lot of people prefer to interact with machines that agree with them than humans that don't.

        • Ahh yes, the "participation trophy" of human discourse.

          Just like when I play Counterstrike against easy-mode bots instead of other people I'm the best player in the world, 100% win rate!

    • by Calydor ( 739835 )

      People are limited by time. AI is limited by how much computing power you can throw at it.

    • -99 dB SNR (Score:5, Insightful)

      by OrangeTide ( 124937 ) on Monday February 17, 2025 @05:49PM (#65174231) Homepage Journal

      While user-generated slop seems endless, there are upper limits on what users can produce on their own.

      With only the smallest amount of direction, AI can generate noise that will swamp a text-based forum, to the degree that actual users find essentially zero value in using it. That chilling effect on the function of a service should be considered an existential threat to any business that depends on long-term user engagement.

  • Too Late (Score:4, Insightful)

    by Conchobair ( 1648793 ) on Monday February 17, 2025 @01:19PM (#65173545)
    I would say most content on Reddit is repost bots already, with bots repeating comments from the last time whatever it is was posted. There is already a lot of slop going around on Reddit. Combine that with astroturfing and coordinated agenda posting, and it's hardly usable anymore outside of a few small subreddits.
  • Profitability (Score:5, Interesting)

    by Baron_Yam ( 643147 ) on Monday February 17, 2025 @01:39PM (#65173587)

    This is where we find out social media is too expensive to be ad-supported. The minimum moderation requirements are going up exponentially as automated enshittification gains a beachhead. And that's on top of the human-based enshittification that was already well underway and is now AI-enhanced.

    Usable centralized social media is going to require a paid sign-up linked to your credit card. Decentralized social media will vastly decrease in potential as it will be limited in reach to 'web of trust' systems.

    • Re:Profitability (Score:4, Insightful)

      by Dracos ( 107777 ) on Monday February 17, 2025 @02:29PM (#65173671)

      Social media platforms have zero user-facing incentive to limit any kind of content.

      Content => engagement => ad views => revenue.

      They only limit content when it might affect ad buys.

      Only the users care about the user experience.

      • Yes, those AI bots are famously generous spenders. /s

        This should also seriously concern advertisers, whose ad views are being wasted but still paid for.

    • Re:Profitability (Score:5, Insightful)

      by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Monday February 17, 2025 @03:25PM (#65173827) Homepage Journal

      This is where we find out social media is too expensive to be ad-supported. The minimum moderation requirements are going up exponentially as automated enshittification gains a beachhead.

      Reddit had effective moderation for free. Then they enshittified in ways that drove away moderators, like their infamously charging for the API access needed to feasibly perform that moderation which we discussed numerous times here on Slashdot. They made this change as a means of making themselves more attractive/valuable, and it made them less so. Part of the change was also selling themselves as a source of AI training data; driving away the moderators has made them unviable for that, because now there is nobody to kill the garbage.

      So much for the golden goose.

      • It wasn't all that effective back then. Reddit has turned into a grift to rip off its idiotic users by becoming a meme stock. It's doing a post turdification victory lap around the bowl.

    • The users of social media are not the customers; they are the product, in the form of their data. How else can you efficiently build profiles of consumers? AI and bots consume nothing, but perhaps their purpose is to fuel the outrage machine that keeps users coming back for more of the game, thus handing over more data. The only move is to refuse to play.

  • Every social media has a bunch of AI-generated crap on it. Youtube has videos written, edited, and voiced by AI on it. Stock photo sites have AI-generated photos for sale, and only some of them mark them as such.

    This is the new normal. It's only going to get harder to detect from here. People have already had real photos rejected from photo contests due to AI accusations. Trying to stop it feels nearly impossible.

  • ...a cryptographically secure method of identifying AI generated stuff
    It can be good or bad, but it needs to be accurately labelled

    • womp womp

      The best you can do is signed sources of non-AI generated stuff. Even then there are potential issues, mostly around defeating the protection of the key.
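As a toy illustration of the signed-sources idea, the sketch below uses an HMAC over the post body so a forum can check that content came from a holder of the key (say, a verified human account). This is a simplification: HMAC is symmetric, so the verifier shares the secret; a real scheme would use asymmetric signatures (e.g. Ed25519) tied to verified identities, precisely so the key-protection problem mentioned above falls on one party. The key material and post text are invented.

```python
# Illustration only: HMAC-based "signed post" check. Real deployments
# would use asymmetric signatures so the verifier never holds the secret.
import hashlib
import hmac

SECRET = b"per-account signing key"  # hypothetical key material

def sign(post: bytes) -> str:
    """Return a hex tag binding the post body to the signing key."""
    return hmac.new(SECRET, post, hashlib.sha256).hexdigest()

def verify(post: bytes, tag: str) -> bool:
    """Constant-time check that the tag matches this post body."""
    return hmac.compare_digest(sign(post), tag)

tag = sign(b"genuinely human-written post")
assert verify(b"genuinely human-written post", tag)
assert not verify(b"tampered post", tag)  # any edit invalidates the tag
```

Note this only proves *who* (or what) held the key, not that the content was human-written, which is exactly the weakness the parent comment points at.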

  • by fuzzyfuzzyfungus ( 1223518 ) on Monday February 17, 2025 @02:45PM (#65173713) Journal
    This all seems a little curious given Reddit management's attempt to pivot to selling themselves as a source of training data.

    From the papers on 'model collapse' it appears that going full inhuman centipede can be downright harmful and, at best, there's no reason for someone who already has a bot that can churn out bot slop to pay a 3rd party to scrape the bot slop off their site; so a failure to keep the bot slop to a minimum seems like it would both reduce their value to potential customers directly and upset their remaining human users and discourage their engagement, further reducing their value indirectly.
    • This all seems a little curious given Reddit management's attempt to pivot to selling themselves as a source of training data.

      It is not curious, it is hilarious. Company wants to jump on the AI hype train not expecting leopards to eat their face LOL! They were kind of a douchey company to begin with so I don't feel at all bad about laughing. Hahaha!

    • This all seems a little curious given Reddit management's attempt to pivot to selling themselves as a source of training data.

      No, that makes it completely logical. What people developing AI want for training data is not a bunch of AI-generated data.

  • Why bother? (Score:4, Insightful)

    by djp2204 ( 713741 ) on Monday February 17, 2025 @03:09PM (#65173793)

    Let Reddit turn into a mess of bots replying to bots. Time is the most valuable thing we have. Why waste it volunteering to police that mess?

    • by Rujiel ( 1632063 )

      Bots on bots is what it already has been for years. I'm pretty sure this was part of OpenAI's arrangement with reddit, to begin with, that they'd be not only scraping reddit but botting on it too (which makes the premise of this headline questionable). Take a look at the ukraine subreddit for some examples.

  • And it makes it impossible to train new AIs with the polluted content... We should maybe go back to actual curated content and pay for it. Maybe the "free information" on the net has peaked?
  • Hey - let's just return to (peer-reviewed) books. The internet is probably lost now as it is.
  • It's not like it's going to devalue the current posts.
  • by PubJeezy ( 10299395 ) on Monday February 17, 2025 @07:16PM (#65174419)
    The problem with Reddit isn't the "how", it's the "why". All of the popular subs are entirely dominated by engagement baiters building up useful metrics on their swarms of accounts. Go to any of the popular story subs (AITA, Anti-Work, etc...) and if you read 5 stories, you'll start to feel it. They're just not real. Maybe some of it's AI slop, but all of it's spam. Why is this happening? There are two possibilities:

    1. Account Mills. You can hop on Google right now and find many websites selling large numbers of Reddit accounts. An account with a long history of positive engagement is more valuable to spammers than a blank or negative one. So the mods work with engagement baiters in order to grow their subs. Now the engagement baiters have a force multiplier in the form of chatbots, and they're producing so many of these fake stories on a massive number of brand-new accounts that it just doesn't look plausible.

    2. Reddit is doing Reddit things. Maybe the swarms are coming from inside the building. Reddit's current CEO, Steve Huffman, has said in interviews that employees filled the website with fake accounts and fake comments in order to make it seem more popular to investors. (https://arstechnica.com/information-technology/2012/06/reddit-founders-made-hundreds-of-fake-profiles-so-site-looked-popular/)

    Fake accounts and spambots have been a part of the Reddit story from the very beginning. My analysis is somewhat reductive; the reality is probably a hybrid of the two. I always think about the game theory of what would happen if a platform took its marketing budget and spent it buying up accounts from account mills. That would encourage account sellers to generate more and more accounts to meet the demand. If you game that out, it creates a situation much like the one we're in today, where spammers are building engagement on swarms of accounts and no one has any explanation for why anyone would spend their time and resources on such behavior. Account mills and internally-driven metric inflation are pretty solid answers to the question of why folks are filling Reddit with spam. The "how" isn't the real problem. They need to solve the "why".
  • They went about their micro-fascist clique ways and this is what they get - AI slop instead of good user-generated content. Let them wallow in the shit pile they created.

  • In my experience, many Reddit mods are some of the dumbest people to sit between a keyboard and chair, and I've worked a tech support line.

    The flat out wrong interpretations of what you said leading to a ban is beyond ridiculous. No, "neutralize" doesn't mean "kill" and is not a threat or an encouragement to violence, you pig ignorant dumbasses.

  • That would be hilarious! Part of the EULA of Reddit could be that you could get sued for soiling it with AI trash.
  • If you are using reddit as a reliable source of information then you really don't need to worry about AI.
