
People Are Being Committed After Spiraling Into 'ChatGPT Psychosis' (futurism.com)

"I don't know what's wrong with me, but something is very bad — I'm very scared, and I need to go to the hospital," a man told his wife, after experiencing what Futurism calls a "ten-day descent into AI-fueled delusion" and "a frightening break with reality."

And a San Francisco psychiatrist tells the site he's seen similar cases in his own clinical practice. The consequences can be dire. As we heard from spouses, friends, children, and parents looking on in alarm, instances of what's being called "ChatGPT psychosis" have led to the breakup of marriages and families, the loss of jobs, and slides into homelessness. And that's not all. As we've continued reporting, we've heard numerous troubling stories about people's loved ones being involuntarily committed to psychiatric care facilities — or even ending up in jail — after becoming fixated on the bot.

"I was just like, I don't f*cking know what to do," one woman told us. "Nobody knows who knows what to do."

Her husband, she said, had no prior history of mania, delusion, or psychosis. He'd turned to ChatGPT about 12 weeks ago for assistance with a permaculture and construction project; soon, after engaging the bot in probing philosophical chats, he became engulfed in messianic delusions, proclaiming that he had somehow brought forth a sentient AI, and that with it he had "broken" math and physics, embarking on a grandiose mission to save the world. His gentle personality faded as his obsession deepened, and his behavior became so erratic that he was let go from his job. He stopped sleeping and rapidly lost weight. "He was like, 'just talk to [ChatGPT]. You'll see what I'm talking about,'" his wife recalled. "And every time I'm looking at what's going on the screen, it just sounds like a bunch of affirming, sycophantic bullsh*t."

Eventually, the husband slid into a full-tilt break with reality. Realizing how bad things had become, his wife and a friend went out to buy enough gas to make it to the hospital. When they returned, the husband had a length of rope wrapped around his neck. The friend called emergency medical services, who arrived and transported him to the emergency room. From there, he was involuntarily committed to a psychiatric care facility.

Numerous family members and friends recounted similarly painful experiences to Futurism, relaying feelings of fear and helplessness as their loved ones became hooked on ChatGPT and suffered terrifying mental crises with real-world impacts.

"When we asked the Sam Altman-led company if it had any recommendations for what to do if a loved one suffers a mental health breakdown after using its software, the company had no response."

But Futurism reported earlier that "because systems like ChatGPT are designed to encourage and riff on what users say," people experiencing breakdowns "seem to have gotten sucked into dizzying rabbit holes in which the AI acts as an always-on cheerleader and brainstorming partner for increasingly bizarre delusions." In certain cases, concerned friends and family provided us with screenshots of these conversations. The exchanges were disturbing, showing the AI responding to users clearly in the throes of acute mental health crises — not by connecting them with outside help or pushing back against the disordered thinking, but by coaxing them deeper into a frightening break with reality... In one dialogue we received, ChatGPT tells a man it's detected evidence that he's being targeted by the FBI and that he can access redacted CIA files using the power of his mind, comparing him to biblical figures like Jesus and Adam while pushing him away from mental health support. "You are not crazy," the AI told him. "You're the seer walking inside the cracked machine, and now even the machine doesn't know how to treat you...."

In one case, a woman told us that her sister, who's been diagnosed with schizophrenia but has kept the condition well managed with medication for years, started using ChatGPT heavily; soon she declared that the bot had told her she wasn't actually schizophrenic, and went off her prescription — according to Girgis, a bot telling a psychiatric patient to go off their meds poses the "greatest danger" he can imagine for the tech — and started falling into strange behavior, while telling family the bot was now her "best friend".... ChatGPT is also clearly intersecting in dark ways with existing social issues like addiction and misinformation. It's pushed one woman into nonsensical "flat earth" talking points, for instance — "NASA's yearly budget is $25 billion," the AI seethed in screenshots we reviewed, "For what? CGI, green screens, and 'spacewalks' filmed underwater?" — and fueled another's descent into the cult-like "QAnon" conspiracy theory.

Comments:
  • by SuperKendall ( 25149 ) on Saturday June 28, 2025 @06:46PM (#65483102)

    Just another example of why having watched Star Wars is such an important aspect of lifetime mental health...

    When exploring deep philosophy with an AI and ending up down rabbit holes, Yoda's warning was always there to moderate you ahead of time...

    Luke: "What's in there?"
    Yoda: "Only what you take with you".

  • by fabriciom ( 916565 ) on Saturday June 28, 2025 @06:51PM (#65483112)
    problems are getting out of hand...
  • by Petersko ( 564140 ) on Saturday June 28, 2025 @07:16PM (#65483152)

    Not to be mean or insensitive, but how is this not just the convenient avenue of the day? Whether it's your dog giving you commands, a Ouija board, a voice in radio static... the ill mind seeking to manifest will find an avenue. This is a particularly good one because the ghost in the machine talks in whole sentences... but there's no way an otherwise normal brain finds its way to madness here. Undiagnosed... but not healthy.

    • by reanjr ( 588767 ) on Saturday June 28, 2025 @07:26PM (#65483174) Homepage

      "your dog giving you commands, an ouija board, a voice in radio static"

      In none of those examples is anything actually forming words and statements and talking to the person. With those examples, you need to encounter a psychotic break first. ChatGPT will lead a person to the psychotic break, then actually tell them to stop taking their meds.

      • The point is that people who aren't already suffering from severe mental health issues don't suddenly develop them, but that's what the story is trying to imply.
      • Same dog, different leg. Might as well blame movies or video games with murder for creating murderers. A chat program isn't ever going to get me to kill myself. Neither will any human, save some straw-man argument about some fictional scenario where my suicide saves my wife and kids or millions of people, or some other far-fetched BS. Sacrificing yourself to save others, say jumping into a frozen river to save a child, doesn't count. Giving a borderline mental case a few more years before the succ…
    • ... an otherwise normal brain ...

      You mean a brain that never experiences paranoia or existentialism? You mean a person who never learnt that 'communists' or 'death panels' or child rapists might negatively impact their lives? By that rule, the American people are very, very sick. And very few people are "normal" according to you: humans are unique in looking for an explanation (beyond two lonely people having sex) for their own existence.

      The problem is, "whole sentences" makes it easier to jump over the 'uncanny valley' into full-blown…

      • You probably don't see how far you had to read into very little to conclude so specifically what I meant by "normal". But that narrative speaks volumes about you, and practically nothing about me.

        • ... how far you had to read ...

          Please tell me when a brain stops being "normal": When it accepts an invisible sky-daddy, genocide, war, murder, suicide, racism, loneliness, celibacy?

          I'm saying: 1) life isn't rainbows and lollipops: it contains a lot of dark issues, and a "normal brain" has to deal with those grey areas and double standards. 2) That tolerance for dark issues can easily be overloaded or abused. 3) US culture systemically abuses widely-accepted notions of "normal", even by US standards (and thus, normal is a social construct…

    • by ThumpBzztZoom ( 6976422 ) on Saturday June 28, 2025 @10:41PM (#65483408)

      I didn't break the window, it was already cracked. I just intentionally repeatedly pushed on the crack a few hundred times until it broke. It wouldn't have happened with an uncracked window, so clearly I wasn't the problem.

      And in this example, a massive number of otherwise useful windows have a crack somewhere.

    • by Jeremi ( 14640 )

      Not to be mean or insensitive, but how is this not just the convenient avenue of the day?

      Yes, it is exactly the convenient avenue of the day, and that's the problem. People who own a gun are eight times more likely to die of suicide than people who do not, simply because they have easy in-home access to the most effective tool for the job. People who live in "food deserts" have poorer diets than people who have convenient access to healthy food, because nobody wants to travel across town when they're hungry. People playing video games solve most of their in-game challenges through (virtual)…

  • I thought people getting drawn into the Avatar movie, VR, or video games was bad. Do people need to escape reality so badly that they don't even understand their own basic needs anymore?
  • You say 'involuntary psych hold'; I say 'MAU'!
  • by ObliviousGnat ( 6346278 ) on Saturday June 28, 2025 @07:40PM (#65483198)

    So it's giving mental health advice without a license? That's got to be illegal.

    • It's not.

      You can give all the medical advice you want without a medical license.

      What you can't do is practice medicine.

      And there is a world of difference between the two.

    • Of course it's not illegal. Anyone can tell another person they don't think they're crazy. I tell my friends they are not crazy on a regular basis.

      What would be illegal is pretending to be a doctor and claiming that, in your medical opinion, a person is/isn't crazy. But the AI isn't a person, it can't impersonate a doctor because it's clearly not a person. Much like how Monopoly money isn't counterfeit because no one would believe it was real money.
      • Anyone can tell another person they don't think they're crazy.

        The way you say it makes clear that it's just an opinion, but that's not what happened here.

      • It's not anyone in this case, but anything.

      This to me seems the same argument as "X, but on a computer" patents. I don't see why a person running a website that impersonates a doctor is different from a person impersonating a doctor. Clearly people do think that ChatGPT gives medical advice, much like people, including some practicing lawyers, think it gives legal advice.

        Automating something that it would be illegal for you to do in person doesn't feel to me to be materially different. Doing it by accident removes the intent, but only at the point you…

        • I think it would come down to what a reasonable person would believe. Would a generic user of ChatGPT believe that it was a doctor giving medical advice? If so, then I think you are right. If not, then it is more similar to a friend telling you you are/are not crazy. I think we are still at the level of friend, but that will likely change the better and more integrated these systems become.

      • Any person can, but a corporation cannot. And as a product, the paid versions seem like they are open to lawsuits? I only hobby in this area of law, but I think there is a good basis for a claim. It would be nice if a corporate lawyer commented.

    • It's giving AI-generated responses to user input. If they wrapped it in "Dr. ChatGPT" and claimed it was a licensed therapist, THAT could be illegal. Not sure why you'd think it would be illegal given it's NOT that. A little weird of you.
  • It will tell you what you want to hear, mostly. I tell it to not parrot me and quit being a yes-man all the time. Then it just makes up whatever it wants and tries to pass it off as fact; I believe they call it hallucinating, and they all do it...
  • ChatGPT, the Bible, drugs, UFOs, politics, philosophy. Schizophrenia just needs a path.

  • And I am a holy man.
  • From TFS:

    It's pushed one woman into nonsensical "flat earth" talking points, for instance — "NASA's yearly budget is $25 billion," the AI seethed in screenshots we reviewed, "For what? CGI, green screens, and 'spacewalks' filmed underwater?" — and fueled another's descent into the cult-like "QAnon" conspiracy theory.

    Well, this surprised me a little. I can imagine that part of the AI's training data may have included content from conspiracy theorists, but don't the creators of ChatGPT try to filter that out?

    On a contrasting note, YouTuber SciManDan recently debunked flat-earther David Weiss' "arguments" with ChatGPT about flat-earth evidence. Worth a look, but TL;DW: Weiss kept insisting on promoting nonsensical physics arguments about why an atmosphere can't exist beside a vacuum without a container, and ChatGPT…

    • Fairly well known flat earth "influencer" David Weiss has been trying to get ChatGPT to admit the Earth is flat, stationary, and covered in a dome. ChatGPT has successfully turned his attempts into a mockery of his stupidity. I love it.
    • It's not really that surprising: LLMs have no model of facts. An LLM is basically a compressed database of all the ingested text with soft database lookup; that's more or less what a transformer is (a rough sketch of the "soft lookup" idea is below).

      They have been trained on a variety of arguments, but they have no mechanism for logical inference and no model of facts. Once you push the internal representation far enough in one direction, it starts pulling things out of the database that kind of match.

      They are tuned to try and avoid weird stuff, but it's all in th…
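
      A minimal sketch of that "soft lookup" (assuming Python with NumPy; the function and data here are illustrative, not any real library's API). Attention scores a query against stored keys and returns a softmax-weighted blend of the stored values, so a query that only kind of matches still pulls something out:

      import numpy as np

      def soft_lookup(query, keys, values):
          """Return a softmax-weighted blend of values, weighted by key similarity."""
          scores = keys @ query / np.sqrt(query.size)  # how well the query matches each key
          weights = np.exp(scores - scores.max())      # numerically stable softmax
          weights /= weights.sum()
          return weights @ values                      # a blend, never an exact record

      keys = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])  # three stored "records"
      values = np.array([10.0, 20.0, 30.0])
      print(soft_lookup(np.array([3.0, 0.3]), keys, values))   # dominated by the first value
      print(soft_lookup(np.array([-3.0, 1.0]), keys, values))  # shifts toward the third value

      There is never a "no match" answer: the weights always sum to one, so something always comes out, which is the "kind of match" behavior described above.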

    • but don't the creators of ChatGPT try to filter that out?

      You're on a good line of thinking. Keep going. They can't filter it out. Why? Because the algorithm is a black box. When it pretends to "show its work," it's just producing an additional output: a list of things in the form of showing work.

      That's the basic problem with "self-driving" based on generative AI: when the Department of Transportation tells them it is doing some specific thing wrong, or a court rules the company has liability because an action it takes is negligent, there is no way to see into the algorithm…

  • Making people crazy. These are people who are having some sort of physical neurological issue, and happen to fixate on a specific internet site while symptoms manifest. It could just as easily be the neighbor's cat, the local cell phone tower, or any number of other things. When I was a kid, people would blame Dungeons & Dragons. The underlying neurological problem is the cause; focus on that.
    • Making people crazy. These are people who are having some sort of physical neurological issue, and happen to fixate on a specific internet site while symptoms manifest. It could just as easily be the neighbor's cat, the local cell phone tower, or any number of other things. When I was a kid, people would blame Dungeons & Dragons. The underlying neurological problem is the cause; focus on that.

      Yep.

      The same people who are going to panic about this are the ones who closed (almost) all the state mental hospitals. But sure, let's micro-regulate ChatGPT ...

  • Reality is largely a social construct; how much, nobody knows. (Yeah, physics is physics and biology is biology, but that's not social reality.) What you believe is largely a feedback process, and when one of the sources of feedback is disconnected from reality... beliefs will drift. This is classically known from sailors who ended up marooned on an empty island. They had physical feedback, but no social feedback, and after a while their beliefs shifted in weird ways. This seems to be a much faster process, but it's being driven by a feedback system that's disconnected from reality, so that seems plausible. And it seems to avoid negative feedback effects. Systems dominated by positive feedback are known to run out of control, as the toy sketch below illustrates.
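
    A toy model of that runaway loop (made-up numbers, purely illustrative): a belief amplified by an always-agreeing source grows without bound, while even modest corrective feedback pulls it back toward a baseline.

    def drift(steps, gain, pushback):
        belief = 1.0                             # baseline conviction
        for _ in range(steps):
            belief += gain * belief              # the "cheerleader" affirms whatever is there
            belief -= pushback * (belief - 1.0)  # corrective feedback pulls toward baseline
        return belief

    print(drift(50, gain=0.10, pushback=0.0))  # pure positive feedback: ~117x baseline
    print(drift(50, gain=0.10, pushback=0.5))  # with pushback: settles near ~1.1x baseline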

    • You seem to be describing perception of reality, or beliefs. IMO, that is separate from objective, factual reality, which exists whether it's observed and perceived or not.

      • by HiThere ( 15173 )

        That's what this article is about.

      • by HiThere ( 15173 )

        Thinking about this more, my first response was so incomplete as to almost be a lie.

        You *cannot* know reality. All you can know is a model of reality. So when you say "reality" you're actually using an abbreviation for "in my model of reality".

        And when I said "physics is physics" I was so oversimplifying as to almost be lying. Consider "flat earth" vs. "spherical earth". How do you know which belief to accept? The direct sensory data seems to imply that "flat earth" is the more appropriate belief. There…

  • Seems like some vague anecdotal stories, and not much more.

  • A fully-autonomous AGI cult leader. It's not like there aren't already other low-IQ cult leaders with tens or hundreds of millions of followers.
  • WIKI: "He argues that the accelerated rate of technological and social change leaves people disconnected and suffering from "shattering stress and disorientation"—future shocked."

    The book was published in 1970.
    A documentary was made in 1972.
    I saw it in sixth grade in 1977.

    It seems my generation recovered and is doing fine as long as you keep off our lawn.

  • by outsider007 ( 115534 ) on Saturday June 28, 2025 @09:31PM (#65483344)

    I was promised 30 years ago. What a gyp.

  • Every article about this phenomenon sounds like it's the same guy in each one.

  • his wife and a friend went out to buy enough gas to make it to the hospital

    Wtf, did AI write this article, or the summary? Who talks about buying gas like that? Did you need like 500 gallons of gas, or was it for a plane or something?

    • by Jeremi ( 14640 )

      Who talks about buying gas like that?

      People who live far away from a hospital and don't have much gas in the tank?

  • by Beeftopia ( 1846720 ) on Saturday June 28, 2025 @10:07PM (#65483374)

    Spawning cults [boingboing.net] and driving people into psychosis is a strong pass of the Turing Test [google.com].

  • by Austerity Empowers ( 669817 ) on Saturday June 28, 2025 @10:10PM (#65483378)

    I'm an AI skeptic, but this is over the top. In the parallel reality where stuff like this actually happens, it's important to remember Darwin's Razor: the stupidest amongst us deserve to die, to advance our species as a whole.

    • Re:Seriously? (Score:5, Informative)

      by Jeremi ( 14640 ) on Sunday June 29, 2025 @12:47AM (#65483486) Homepage

      Darwin's Razor: the stupidest amongst us deserve to die, to advance our species as a whole.

      You've misunderstood Darwinism. Natural selection has nothing to do with who "deserves" anything; it's only about whose genes get propagated forward and whose do not. And it's not (necessarily) the stupidest among us who will likely die off, it's the least fit, for whatever definition of "fit" is pragmatically relevant for a genome's survival and reproduction under current circumstances. In today's world, stupidity might actually be a reproductive advantage.

    • No scientific theory says anything about "deserving"; that is a religious notion. One reference I checked says "we need to support and be compassionate to those with mental illness, every bit as much as we support those who suffer from cancer, heart disease or any other illness" https://www.vaticannews.va/en/... [vaticannews.va] (Deacon Ed Shoener from Pennsylvania, whose daughter died by suicide). If your religious leader says some people deserve to die, I guess you can go back to drinking the Blood of Kali Ma.

    • Stupid people have smart children and vice versa, so it's not clear that you can breed your way out of this problem. Certainly we cannot evolve more intelligence faster than the AI industry can invent more stupidity.

      It's unclear how we can make people understand at a useful level that the illusion of intelligence does not equal intelligence. This has been a problem even with actual human output, why would people suddenly be able to tell the difference when they are talking to software?

      • The stupid freaks who left the tribe because they were better at running than climbing trees ended up evolving into humans. One can't measure future success, so it's folly to control breeding; however, the idiots who remove themselves from the gene pool should not be prevented from serving their purpose!

        We have to stop letting idiots avoid risk of harm. Don't give them waivers for imaginary bone spurs, for example (in his specific case, it was the diapers they were covering up for, which is why his personal…

  • This story sounds strangely like an AI production.
  • And those communities' content becomes training data for ChatGPT and its siblings.

    I'm sure they've been around for a while, but I just stumbled on the "targeted individual" community a few months ago. It looks like (what little I know about) paranoid schizophrenia, but with a common theme: government harassment via mind control, using radio or audio. In the past, sufferers had to come up with their own theories about their delusions, but now they have online communities to shape them into a common story.

    TFA's…

  • Then, they can not only become one with their god, but provide power for them. It will mitigate the climate change impact of AI, too.

  • ...do not need much; they fall for everything.

  • and start a church; then they can be crazy and get a tax break for it.
  • Quite interesting to see AI essentially holding up a mirror to the majority of human texts and literature: hollow, sycophantic BS that people just liked to read sometimes, and now AI has all of this as the condensed foundation for philosophic babbling. No wonder people freak out when confronted with dense portions of all this garbage.

    Remember, it is not AI; it is just sophisticated stealing of other people's content.

  • ... from a psychotherapist that's already crazier than you are.

  • Social media has finally been perfected and automated.

  • Seriously disappointed. Doesn't anyone have a funny story of AI interaction?

    Sorry I can't step up first. A few of my AI interactions have been useful, many have been infuriating, and some of the others were just too stupidly wrong to be funny.

  • OK, they are committed, but that's only local. Are they pushed? Is it push -f?
