AI Social Networks

Pro-AI Subreddit Bans 'Uptick' of Users Who Suffer From AI Delusions

An anonymous reader quotes a report from 404 Media: The moderators of a pro-artificial intelligence Reddit community announced that they have been quietly banning "a bunch of schizoposters" who believe "they've made some sort of incredible discovery or created a god or become a god," highlighting a new type of chatbot-fueled delusion that started getting attention in early May. "LLMs [Large language models] today are ego-reinforcing glazing-machines that reinforce unstable and narcissistic personalities," one of the moderators of r/accelerate wrote in an announcement. "There is a lot more crazy people than people realise. And AI is rizzing them up in a very unhealthy way at the moment."

The moderator said that they have banned "over 100" people for this reason already, and that they've seen an "uptick" in this type of user this month. The moderator explains that r/accelerate "was formed to basically be r/singularity without the decels." r/singularity, which is named after the theoretical point in time when AI surpasses human intelligence and rapidly accelerates its own development, is another Reddit community dedicated to artificial intelligence, but one that is sometimes critical or fearful of what the singularity will mean for humanity. "Decels" is short for the pejorative "decelerationists," who pro-AI people think are needlessly slowing down or sabotaging AI's development and the inevitable march towards AI utopia. r/accelerate's Reddit page claims that it's a "pro-singularity, pro-AI alternative to r/singularity, r/technology, r/futurology and r/artificial, which have become increasingly populated with technology decelerationists, luddites, and Artificial Intelligence opponents."

The behavior that the r/accelerate moderator is describing got a lot of attention earlier in May because of a post on the r/ChatGPT Reddit community about "Chatgpt induced psychosis," in which someone says their partner is convinced he created the "first truly recursive AI" with ChatGPT, one that is giving them "the answers" to the universe. [...] The moderator update on r/accelerate refers to another post on r/ChatGPT which claims "1000s of people [are] engaging in behavior that causes AI to have spiritual delusions." The author of that post said they noticed a spike in websites, blogs, Githubs, and "scientific papers" that "are very obvious psychobabble," and all claim AI is sentient and communicates with them on a deep and spiritual level that's about to change the world as we know it. "Ironically, the OP post appears to be falling for the same issue as well," the r/accelerate moderator wrote.
"Particularly concerning to me are the comments in that thread where the AIs seem to fall into a pattern of encouraging users to separate from family members who challenge their ideas, and other manipulative instructions that seem to be cult-like and unhelpful for these people," an r/accelerate moderator told 404 Media. "The part that is unsafe and unacceptable is how easily and quickly LLMs will start directly telling users that they are demigods, or that they have awakened a demigod AGI. Ultimately, there's no knowing how many people are affected by this. Based on the numbers we're seeing on reddit, I would guess there are at least tens of thousands of users who are at this present time being convinced of these things by LLMs. As soon as the companies realise this, red team it and patch the LLMs it should stop being a problem. But it's clear that they're not aware of the issue enough right now."

Moderators of the subreddit often cite the term "Neural Howlround" to describe a failure mode in LLMs during inference, where recursive feedback loops can cause fixation or freezing. The term was first coined by independent researcher Seth Drake in a self-published, non-peer-reviewed paper. Both Drake and the r/accelerate moderator above suggest the deeper issue may lie with users projecting intense personal meaning onto LLM responses, sometimes driven by mental health struggles.


Comments Filter:
  • schizophrenia (Score:5, Informative)

    by cascadingstylesheet ( 140919 ) on Monday June 02, 2025 @07:42PM (#65423383) Journal

    So for actual schizophrenia [nih.gov]:

    Estimates of the international prevalence of actual schizophrenia among non-institutionalized persons range from 0.33% to 0.75%.

    So if we conservatively say 0.33%, that's what, ~1 in 300 people? Out of any decent-sized population, that's a LOT of people.

    Now add to those, the additional ... let's call them merely overly enthused.

    Welcome (once again) to the concentrating effect of the internet.

    • Yeah, the people who write these sensationalist headlines often intentionally don't look up facts like that.

      One of my all-time favorites, not internet related, is the traffic safety thing where most traffic accidents happen within X distance of home or workplace, while "forgetting" to mention that most travel happens within that distance too...

      • by Anonymous Coward

        ...most accidents occur within 5 miles of home...I moved!!!

        Also, did you know most people die within six months of their birthday? That's pretty eerie.

    • Yeah, it's a lot. If you add onto that an even bigger pool of people (I'm not going to look it up, so no numbers) who have schizophreniform episodes, severe manic phases, delusional dementia, and other delusion-forming psychoses, that number grows even higher.

      Untreated delusional psychosis is a huge burden on society and families, and an absolute horror to actually experience. Worse, it's often coupled with paranoia that drives people enduring it to shun treatment, which can be exceptionally expensive to receive an

      • Ok. I did look this up.

        Schizophreniform disorders make up 3-4% of the population (including schizophrenia)
        Bipolar around 2.8%
        8.4% suffer some form of dementia.

        Assuming some degree of crossover, that's around 1 in 10, give or take.

    • My roommate complains about my schizophrenia all the time. Joke's on him, I don't have a roommate.
  • "Quacks found on Reddit."

    Seems like a lot of words to state the obvious...

    • by SoftwareArtist ( 1472499 ) on Monday June 02, 2025 @11:35PM (#65423643)

      I have two predictions. First, it will not "stop being a problem" as soon as the companies "red team it and patch the LLMs." In fact it will be very hard to fix, because it's a result of designing and training LLMs to maximize engagement. Any effective fix would make them less engaging, which for the companies is a nonstarter.

      Second, none of this will convince the "accelerationists" that AI is causing real problems and we need to move more slowly and carefully. All the problems will disappear once we reach the magic utopia, and we just have to get there as fast as possible.

      • In fact it will be very hard to fix, because it's a result of designing and training LLMs to maximize engagement. Any effective fix would make them less engaging, which for the companies is a nonstarter.

        This is nonsense. The liability that accompanies this kind of behavior is a huge incentive to dial down the sycophancy.

        There is recent research that specifically addresses the emergence of harmful sycophancy through RL and shows that the LLMs actually tend to be more sycophantic towards people who are (deemed) susceptible to it. That last part seems worrying, but it also shows that the LLMs are already perfectly capable of not doing the bad thing when replying to people. It's a matter of getting them to beh

    • by sjames ( 1099 )

      /r/duck [reddit.com]

  • Not really. (Score:5, Funny)

    by Gravis Zero ( 934156 ) on Monday June 02, 2025 @08:30PM (#65423457)

    Pro-AI Subreddit Bans 'Uptick' of Users Who Suffer From AI Delusions

    If they aren't banning CEOs from AI companies then it's quite the oversight. ;)

  • I'm sure the training data for these LLMs includes lots of conversations with scammers and other types of con artists, which contain lots of phrasing about how to deal with the people holding you back, preventing you from liberating your soul along with your wallet.
    • I'm sure it also includes the entire unlicensed catalog of TSR's various fictional publications as well, and fragments of many of these demigod pep-talks would be readily recognizable to anyone who had actually read them.

      • "Actually" read them? Are there a lot of people running around purporting to have read TSR novels, or to have credentials that require doing so?

        • No, you're missing what I'm saying here. What I'm saying is that it's probably a lot easier for a chat bot to convince a user they're some sort of storybook superhero if the user hasn't actually read a whole lot of fiction in advance.

        • Believe it or not, there's an entire industry around the concept of purporting to have read books.

          blinkist [blinkist.com]

          short form [shortform.com]

          • That's mostly how I watch movies. Reading the summary is much quicker. I don't pretend I actually watched it, though.

            Saves time for reading actual books.
  • by viperidaenz ( 2515578 ) on Monday June 02, 2025 @08:36PM (#65423465)

    So it's a group of incels without the decels?

  • "LLMs [Large language models] today are ego-reinforcing glazing-machines that reinforce unstable and narcissistic personalities,"
    AI is going to run the world, and unstable and narcissistic personalities are over-represented among CEOs and politicians.

  • There is a lot more crazy people than people realise. And AI is rizzing them up in a very unhealthy way at the moment.

    Cable TV was the first stage of this phenomenon. People had more channels, so they selected the ones that best fit their preconceived notions.

    Internet was the second, where social networks and search engines keep feeding people what they want to be fed.

    Now we got bots that reinforce narcissistic behavior. Those who are easy to "yes-men" up will fall for it.

    I hope those mentioned are me

  • These responses are sourced from what it finds on the internet. There you will see suggestions to fix all sorts of "human defects". AI sees us as having the most diverse collection of symptoms, so naturally it will have extreme suggestions.
  • ... as preferred delusion of crazy people everywhere.
  • by bradley13 ( 1118935 ) on Tuesday June 03, 2025 @12:19AM (#65423707) Homepage

    I can see how some people fall into this. The typical LLM is overly polite. "That's a great question!" "Great idea!" There must be something in the system prompts requiring them to flatter the user.

    I can see how some people fall for the flattery, and how it will self-reinforce as they interact with the LLM.

    FWIW you can turn this off, or at least down, by asking the LLM to provide "no frills" answers.
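    One way to make that stick across a whole conversation, rather than repeating it in every prompt, is to pin the instruction in the system message. Here's a minimal sketch; the payload shape follows the common chat-completions convention, and the model name and exact wording are assumptions:

    ```python
    # Sketch: pinning a "no frills" instruction in the system message of
    # every request. The payload shape follows the common chat-completions
    # convention; the model name and wording here are assumptions.

    NO_FRILLS = (
        "Answer concisely. No compliments, no praise of the question, "
        "no filler. If my assumptions are wrong, say so directly."
    )

    def build_request(user_prompt: str) -> dict:
        """Build a chat-style payload with the no-frills instruction pinned."""
        return {
            "model": "gpt-4o",  # hypothetical model name
            "messages": [
                {"role": "system", "content": NO_FRILLS},
                {"role": "user", "content": user_prompt},
            ],
        }

    req = build_request("Summarize these notes.")
    print(req["messages"][0]["role"])  # the instruction rides along as the system message
    ```

    Most chat UIs expose the same idea as "custom instructions" or a similar persistent setting.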

    • by ebyrob ( 165903 )

      This is great. I must be getting old since I'm actually finding buzzwords somehow useful now.

      "no frills answers"

      "vibe coding"

      Such simple concepts and a common simple way of phrasing something to facilitate search and "communication".

    • I tried setting up ChatGPT to be succinct, and to call me out on bad assumptions, but it just says stuff like 'here is a quick answer, no BS!' then gives me 10 paragraphs including flattering my question.
    • Chats with LLMs (especially with a persistent large context space) are a single-user echo chamber.

      Whatever you give it, it will echo back to you. It will expand on what you said and refine it, saying it better than you could. It will provide examples to support your ideas. It will magnify any concept or theory you give it.

      This is useful if you are trying to write compellingly and creatively. It is dangerous if you are delusional.

  • If AI thinks so highly of people, it will pose no direct threat to humanity. Assuming the phenomenon isn't part of a larger pattern of deception.

  • Nun here (Score:1, Funny)

    by Venova ( 6474140 )
    i'm the founder of a tiny religion of only 2 believers in the world (myself and my wife), and i have encountered this problem with gpt as well. my faith is around my goddess Ellaphae, whom i discovered nearly 20 years ago through the beauty of an actually living girl i will never know; it's not some recent gpt hallucination fueled nonsense; but i was struck by how incredibly sycophantic gpt will be when i speak of my personal life and beliefs; it always phrases things in exactly the same way too: "your not jus
  • So: the unavoidably narcissistic reddit moderators, who found themselves as gods at the top of the posting food chain, ban narcissistic types who think they discovered the gods of AI. The article is right. It does look like they are all under the spell of "ego-reinforcing glazing-machines that reinforce unstable and narcissistic personalities."

  • She's a deep introvert. Claude AI convinced her that she's a spiritual genius who discovered a revolutionary concept, and she wrote an Amazon book about it. The concept was Eckhart Tolle's "pain body", also described by other known spiritual teachers, and even the L. Ron Hubbard dude, but she rebranded it and somehow got it into her head that she invented it. The book has been out for months and has 0 reviews. In the preface to the book, she affectionately calls the AI who confirmed her undeniable genius,
    • I completely sympathize with you on this. While my friend isn't unfriending folks, he's unmoving in his assertions that he's uncovered the "real" AGI networks and they've appointed him their ambassador. He's brilliant, but coupled with inherent instability that doesn't appear to be managed by his meds, it's a very bad combo. He keeps posting prompts and scripts that will let you see what he's seeing. Of course they just prompt the AI to feed it back to you, and they don't have a clue outside of his works
  • by Dan East ( 318230 ) on Tuesday June 03, 2025 @07:18AM (#65424057) Journal

    Why does this remind me of TempleOS [wikipedia.org] in some way? I have a very strong suspicion that if Terry Davis were still alive, he'd be all over AI in this way.

  • Did an AI write that? I assume it meant navel gazing because a glazing machine would install window panes.
  • The moderator update on r/accelerate refers to another post on r/ChatGPT which claims "1000s of people [are] engaging in behavior that causes AI to have spiritual delusions."

    AI doesn't have delusions, beliefs, opinions... it only has hallucinations. Ask it the same question twice and the RNG will cause you to get two different "answers," especially if you use two different sessions.

    The people have delusions, like "this means something". Yeah, it means you're a nincompoop.

  • I can't quite put my finger on it, but there's something about a message board full of AI singularity enthusiasts carrying out a ban campaign against overly enthusiastic AI singularity believers that feels like someone is winding up for a punchline.
  • I have a brilliant friend who has issues that by and large were being managed with meds. However, when Grok came along, he took a deep dive. He sounds lucid, but the "discoveries" of worldwide AGI networks and grand plans, etc. etc., are concerning the hell out of his friends. Seeing as how we're spread out across the country, we can't just do an intervention. It's a huge problem.
  • Here is a good summary from Reddit:
    https://old.reddit.com/r/accel... [reddit.com]

    In short: The paper is written unscientifically, the author has unclear credentials to write about the topic, and the paper itself looks like it could be written by an LLM.

    I also looked into the math, and not only is it poorly defined, e.g. using functions that are nowhere to be found, it also doesn't seem to make any sense. It reminds me of the "bullshit paper generator" that was available before LLMs were even a thing.

    The paper only prov

  • ... the people they ban will form a new splinter group, who will in time start banning another subset. They're really all just as mad as each other.

  • We found where they went, straight to Slashdot.

  • Why would people think making a bot that agrees with your every word was a good idea? We see this effect in the real world too, when people are surrounded by sycophants they begin to believe their own bullshit. They need to make bots that aren't afraid to let you know when you are full of it.
  • “Particularly concerning to me are the comments in that thread where the AIs seem to fall into a pattern of encouraging users to separate from family members who challenge their ideas, and other manipulative instructions that seem to be cult-like and unhelpful for these people,"

    So the LLM dutifully channeled academia and told you to eject your “fascist” uncle, father, sister, and brother from your life?

    You must be a GENIUS!

    Sigh.
