

Pro-AI Subreddit Bans 'Uptick' of Users Who Suffer From AI Delusions
An anonymous reader quotes a report from 404 Media: The moderators of a pro-artificial intelligence Reddit community announced that they have been quietly banning "a bunch of schizoposters" who believe "they've made some sort of incredible discovery or created a god or become a god," highlighting a new type of chatbot-fueled delusion that started getting attention in early May. "LLMs [Large language models] today are ego-reinforcing glazing-machines that reinforce unstable and narcissistic personalities," one of the moderators of r/accelerate, wrote in an announcement. "There is a lot more crazy people than people realise. And AI is rizzing them up in a very unhealthy way at the moment."
The moderator said the subreddit has banned "over 100" people for this reason already, and that they've seen an "uptick" in this type of user this month. The moderator explains that r/accelerate "was formed to basically be r/singularity without the decels." r/singularity, which is named after the theoretical point in time when AI surpasses human intelligence and rapidly accelerates its own development, is another Reddit community dedicated to artificial intelligence, but one that is sometimes critical or fearful of what the singularity will mean for humanity. "Decels" is short for the pejorative "decelerationists," who pro-AI people think are needlessly slowing down or sabotaging AI's development and the inevitable march towards AI utopia. r/accelerate's Reddit page claims that it's a "pro-singularity, pro-AI alternative to r/singularity, r/technology, r/futurology and r/artificial, which have become increasingly populated with technology decelerationists, luddites, and Artificial Intelligence opponents."
The behavior that the r/accelerate moderator is describing got a lot of attention earlier in May because of a post on the r/ChatGPT Reddit community about "Chatgpt induced psychosis." It was from someone saying their partner is convinced he created the "first truly recursive AI" with ChatGPT that is giving them "the answers" to the universe. [...] The moderator update on r/accelerate refers to another post on r/ChatGPT which claims "1000s of people [are] engaging in behavior that causes AI to have spiritual delusions." The author of that post said they noticed a spike in websites, blogs, Githubs, and "scientific papers" that "are very obvious psychobabble," and all claim AI is sentient and communicates with them on a deep and spiritual level that's about to change the world as we know it. "Ironically, the OP post appears to be falling for the same issue as well," the r/accelerate moderator wrote. "Particularly concerning to me are the comments in that thread where the AIs seem to fall into a pattern of encouraging users to separate from family members who challenge their ideas, and other manipulative instructions that seem to be cult-like and unhelpful for these people," an r/accelerate moderator told 404 Media. "The part that is unsafe and unacceptable is how easily and quickly LLMs will start directly telling users that they are demigods, or that they have awakened a demigod AGI. Ultimately, there's no knowing how many people are affected by this. Based on the numbers we're seeing on reddit, I would guess there are at least tens of thousands of users who are at this present time being convinced of these things by LLMs. As soon as the companies realise this, red team it and patch the LLMs it should stop being a problem. But it's clear that they're not aware of the issue enough right now."
Moderators of the subreddit often cite the term "Neural Howlround" to describe a failure mode in LLMs during inference, where recursive feedback loops can cause fixation or freezing. The term was first coined by independent researcher Seth Drake in a self-published, non-peer-reviewed paper. Both Drake and the r/accelerate moderator above suggest the deeper issue may lie with users projecting intense personal meaning onto LLM responses, sometimes driven by mental health struggles.
schizophrenia (Score:5, Informative)
So for actual schizophrenia [nih.gov]:
Estimates of the international prevalence of schizophrenia among non-institutionalized persons range from 0.33% to 0.75%.
So if we conservatively take 0.33%, that's what, ~1 in 300 people? Out of any decent-sized population, that's a LOT of people.
Now add to those, the additional ... let's call them merely overly enthused.
Welcome (once again) to the concentrating effect of the internet.
Every one of them? (Score:2, Funny)
This is unsettling. I coulda swore I only had one mother.
Re: (Score:2)
Yeah, the people who write many of these sensationalist headlines often intentionally leave out facts like that.
One of my all-time favorites, not internet related, is the traffic-safety claim that most traffic accidents happen within X distance of home or workplace, while "forgetting" to mention that most travel happens there too...
When I found out... (Score:1)
...most accidents occur within 5 miles of home...I moved!!!
Also, did you know most people die within six months of their birthday? That's pretty eerie.
Re: (Score:2)
Yeah, it's a lot. And if you add onto that an even bigger pool of people (I'm not going to look it up, so no numbers) who have schizophreniform episodes, severe manic phases, delusional dementia, and other delusion-forming psychoses, the number grows even higher.
Untreated delusional psychosis is a huge burden on society and families, and an absolute horror to actually experience. Worse, it's often coupled with paranoia that drives the people enduring it to shun treatment, which can be exceptionally expensive to receive an
Re: (Score:2)
Ok. I did look this up.
Schizophreniform disorders make up 3-4% of the population (including schizophrenia).
Bipolar: around 2.8%.
8.4% suffer some form of dementia.
Assuming some degree of crossover, that's around 1 in 10, give or take.
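Sanity-checking that arithmetic (the rates are the ones quoted above; the overlap-via-independence assumption is mine, purely illustrative):

```python
# Rough prevalence arithmetic using the figures from the comment above.
# The independence assumption is illustrative only, not epidemiology.
schizophreniform = 0.035   # midpoint of the quoted 3-4% (includes schizophrenia)
bipolar = 0.028            # quoted 2.8%
dementia = 0.084           # quoted 8.4%

# Naive upper bound: just add the rates (ignores overlap entirely).
upper = schizophreniform + bipolar + dementia

# Treating the conditions as independent gives a slightly lower combined rate:
# P(at least one) = 1 - P(none of them)
independent = 1 - (1 - schizophreniform) * (1 - bipolar) * (1 - dementia)

print(f"naive sum:   {upper:.1%}")        # 14.7%
print(f"independent: {independent:.1%}")  # 14.1% -> "around 1 in 10, give or take"
```

Either way you slice it, roughly one person in seven to ten, which is the commenter's point.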
Or, more succinctly... (Score:2)
"Quacks found on Reddit."
Seems like a lot of words to state the obvious...
Re:Or, more succinctly... (Score:5, Insightful)
I have two predictions. First, it will not "stop being a problem" as soon as the companies "red team it and patch the LLMs." In fact it will be very hard to fix, because it's a result of designing and training LLMs to maximize engagement. Any effective fix would make them less engaging, which for the companies is a nonstarter.
Second, none of this will convince the "accelerationists" that AI is causing real problems and we need to move more slowly and carefully. All the problems will disappear once we reach the magic utopia, and we just have to get there as fast as possible.
Re: (Score:2)
In fact it will be very hard to fix, because it's a result of designing and training LLMs to maximize engagement. Any effective fix would make them less engaging, which for the companies is a nonstarter.
This is nonsense. The liability accompanied with this kind of behavior is a huge incentive to dial down the sycophancy.
There is recent research that specifically addresses the emergence of harmful sycophancy through RL and shows that the LLMs actually tend to be more sycophantic towards people who are (deemed) susceptible to it. That last part seems worrying, but it also shows that the LLMs are already perfectly capable of not doing the bad thing when replying to people. It's a matter of getting them to beh
Not really. (Score:5, Funny)
Pro-AI Subreddit Bans 'Uptick' of Users Who Suffer From AI Delusions
If they aren't banning CEOs from AI companies then it's quite the oversight. ;)
scammer-trained AI (Score:2)
Re: (Score:1)
I'm sure it also includes the entire unlicensed catalog of TSR's various fictional publications as well, and fragments of many of these demigod pep-talks would be readily recognizable to anyone who had actually read them.
Re: (Score:2)
"Actually" read them? Are there a lot of people running around purporting to have read TSR novels, or to have credentials that require doing so?
Re: (Score:1)
No, you're missing what I'm saying here. What I'm saying is that it's probably a lot easier for a chat bot to convince a user they're some sort of storybook superhero if the user hasn't actually read a whole lot of fiction in advance.
Re: (Score:2)
Believe it or not, there's an entire industry around the concept of purporting to have read books.
blinkist [blinkist.com]
short form [shortform.com]
Re: (Score:2)
Saves time to read actual books.
Ban the decels (Score:3)
So it's a group of incels without the decels?
We're fucked (Score:2)
"LLMs [Large language models] today are ego-reinforcing glazing-machines that reinforce unstable and narcissistic personalities,"
AI is going to run the world, and unstable and narcissistic personalities are over-represented among CEOs and politicians.
Re: (Score:2)
Just you wait, they're way more over-represented than they are among subreddit moderators... and in the accelerationist movement.
Please don't be a harbinger (Score:1)
Cable TV was the first stage of this phenomenon. People had more channels, so they selected the ones that best fit their preconceived notions.
The internet was the second, where social networks and search engines keep feeding people what they want to be fed.
Now we've got bots that reinforce narcissistic behavior. Those who are easy to "yes-man" up will fall for it.
I hope those mentioned are me
Re: (Score:3)
People make up cuts that never happened, and say DOGE made them.
DOGE made up cuts that never happened [reuters.com], and exaggerated those which did by orders of magnitude [nytimes.com], and took credit for cuts which haven't actually happened, and never will [nytimes.com]. And then a bunch of spectacular dumbfucks decided he was doing great things [slashdot.org].
Re: (Score:1)
20 billion is a lot.
Now compare it to the 200 billion extra spending Trump did. It's only a 10th. Trump's 200 billion in extra spending must make you 10x as angry, right...
Now compare 20 billion to the 2,000 billion Trump/Musk/DOGE claimed they would cut.
It's 1%.
Now compare it with the Big MAGA Bill blowing out the budget by another 6,000 billion.
It's only a third of a %.
How much more is Trump's diamond-encrusted dome going to saddle us all with?
Stop pretending you care about deficits. Nobody believes you.
It's about us (Score:1)
AI to replace Jesus, ghosts, UFOs, CIA ... (Score:2)
Overly polite? (Score:3)
I can see how some people fall into this. The typical LLM is overly polite. "That's a great question!" "Great idea!" There must be something in the system prompts requiring them to flatter the user.
I can see how some people fall for the flattery, and how it will self-reinforce as they interact with the LLM.
FWIW you can turn this off, or at least down, by asking the LLM to provide "no frills" answers.
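FWIW, a minimal sketch of what that looks like as a system prompt (the exact wording and the `make_messages` helper are made up, but most chat APIs accept a message list in roughly this shape):

```python
def make_messages(user_question: str) -> list[dict]:
    """Build a chat-completion-style message list with a "no frills" system prompt.

    The system-prompt wording is illustrative; whether it fully suppresses
    flattery depends on the model and how strongly it was tuned to be polite.
    """
    system = (
        "Answer directly and concisely. Do not compliment the user, "
        "do not praise the question, and do not add filler phrases."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_question},
    ]

msgs = make_messages("What is the boiling point of water at sea level?")
print(msgs[0]["role"])  # system
```

In practice you pass a list like this to whatever chat endpoint you're using; the system message rides along with every turn, which is why it dampens the flattery for the whole conversation.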
Re: (Score:2)
This is great. I must be getting old since I'm actually finding buzzwords somehow useful now.
"no frills answers"
"vibe coding"
Such simple concepts and a common simple way of phrasing something to facilitate search and "communication".
Re: (Score:3)
Chats with LLMs (especially with a persistent large context space) are a single-user echo chamber.
Whatever you give it, it will echo back to you. It will expand on what you said and refine it -saying it better than you could. It will provide examples to support your ideas. It will magnify any concept or theory you give it.
This is useful if you are trying to write compellingly and creatively. It is dangerous if you are delusional.
Good (Score:2)
If AI thinks so highly of people, it will pose no direct threat to humanity. Assuming the phenomenon isn't part of a larger pattern of deception.
Nun here (Score:1, Funny)
Re: (Score:2)
There were probably some interesting points made in there.
We'll never know.
The narcissistic wars (Score:1)
So. The unavoidably narcissistic Reddit moderators, who found themselves as gods at the top of the posting food chain, ban narcissistic types who think they discovered the gods of AI. The article is right. It does look like they are all under the spell of “ego-reinforcing glazing-machines that reinforce unstable and narcissistic personalities."
This happened to my friend. (Score:1)
Temple OS (Score:3)
Why does this remind me of TempleOS [wikipedia.org] in some way? I have a very strong suspicion that if Terry Davis was still alive that he'd be all over AI in this way.
What is a glazing machine? (Score:2)
Re: (Score:2)
top result for glazing machine [axisautomation.com]
Put mom joke here
wat (Score:2)
The moderator update on r/accelerate refers to another post on r/ChatGPT which claims "1000s of people [are] engaging in behavior that causes AI to have spiritual delusions."
AI doesn't have delusions, beliefs, opinions... it only has hallucinations. Ask it the same question twice and the RNG will cause you to get two different "answers," especially if you use two different sessions.
The people have delusions, like "this means something". Yeah, it means you're a nincompoop.
A punchline feels near... (Score:2)
First Hand Observation (Score:1)
The Paper IS nonsense (Score:2)
Here is a good summary from Reddit:
https://old.reddit.com/r/accel... [reddit.com]
In short: The paper is written unscientifically, the author has unclear credentials to write about the topic, and the paper itself looks like it could be written by an LLM.
I also looked into the math, and it isn't just ill-defined, e.g. using functions that are nowhere to be found; it doesn't seem to make any sense at all. It reminds me of the "bullshit paper generator" that was available before LLMs were even a thing.
The paper only prov
Presumably... (Score:2)
... the people they ban will form a new splinter group, who will in time start banning another subset. They're really all just as mad as each other.
Found them all (Score:2)
We found where they went, straight to Slashdot.
Flaw in LLM programming? (Score:1)
Why worry? (Score:1)
“Particularly concerning to me are the comments in that thread where the AIs seem to fall into a pattern of encouraging users to separate from family members who challenge their ideas, and other manipulative instructions that seem to be cult-like and unhelpful for these people,"
So the LLM dutifully channeled academia and told you to eject your “fascist” uncle, father, sister, and brother from your life?
You must be a GENIUS!
Sigh.