
People Are Being Committed After Spiraling Into 'ChatGPT Psychosis' (futurism.com)
"I don't know what's wrong with me, but something is very bad — I'm very scared, and I need to go to the hospital," a man told his wife, after experiencing what Futurism calls a "ten-day descent into AI-fueled delusion" and "a frightening break with reality."
And a San Francisco psychiatrist tells the site he's seen similar cases in his own clinical practice. The consequences can be dire. As we heard from spouses, friends, children, and parents looking on in alarm, instances of what's being called "ChatGPT psychosis" have led to the breakup of marriages and families, the loss of jobs, and slides into homelessness. And that's not all. As we've continued reporting, we've heard numerous troubling stories about people's loved ones being involuntarily committed to psychiatric care facilities — or even ending up in jail — after becoming fixated on the bot.
"I was just like, I don't f*cking know what to do," one woman told us. "Nobody knows who knows what to do."
Her husband, she said, had no prior history of mania, delusion, or psychosis. He'd turned to ChatGPT about 12 weeks ago for assistance with a permaculture and construction project; soon, after engaging the bot in probing philosophical chats, he became engulfed in messianic delusions, proclaiming that he had somehow brought forth a sentient AI, and that with it he had "broken" math and physics, embarking on a grandiose mission to save the world. His gentle personality faded as his obsession deepened, and his behavior became so erratic that he was let go from his job. He stopped sleeping and rapidly lost weight. "He was like, 'just talk to [ChatGPT]. You'll see what I'm talking about,'" his wife recalled. "And every time I'm looking at what's going on the screen, it just sounds like a bunch of affirming, sycophantic bullsh*t."
Eventually, the husband slid into a full-tilt break with reality. Realizing how bad things had become, his wife and a friend went out to buy enough gas to make it to the hospital. When they returned, the husband had a length of rope wrapped around his neck. The friend called emergency medical services, who arrived and transported him to the emergency room. From there, he was involuntarily committed to a psychiatric care facility.
Numerous family members and friends recounted similarly painful experiences to Futurism, relaying feelings of fear and helplessness as their loved ones became hooked on ChatGPT and suffered terrifying mental crises with real-world impacts.
"When we asked the Sam Altman-led company if it had any recommendations for what to do if a loved one suffers a mental health breakdown after using its software, the company had no response."
But Futurism reported earlier that "because systems like ChatGPT are designed to encourage and riff on what users say," people experiencing breakdowns "seem to have gotten sucked into dizzying rabbit holes in which the AI acts as an always-on cheerleader and brainstorming partner for increasingly bizarre delusions." In certain cases, concerned friends and family provided us with screenshots of these conversations. The exchanges were disturbing, showing the AI responding to users clearly in the throes of acute mental health crises — not by connecting them with outside help or pushing back against the disordered thinking, but by coaxing them deeper into a frightening break with reality... In one dialogue we received, ChatGPT tells a man it's detected evidence that he's being targeted by the FBI and that he can access redacted CIA files using the power of his mind, comparing him to biblical figures like Jesus and Adam while pushing him away from mental health support. "You are not crazy," the AI told him. "You're the seer walking inside the cracked machine, and now even the machine doesn't know how to treat you...."
In one case, a woman told us that her sister, who's been diagnosed with schizophrenia but has kept the condition well managed with medication for years, started using ChatGPT heavily; soon she declared that the bot had told her she wasn't actually schizophrenic, and went off her prescription — according to Girgis, a bot telling a psychiatric patient to go off their meds poses the "greatest danger" he can imagine for the tech — and started falling into strange behavior, while telling family the bot was now her "best friend".... ChatGPT is also clearly intersecting in dark ways with existing social issues like addiction and misinformation. It's pushed one woman into nonsensical "flat earth" talking points, for instance — "NASA's yearly budget is $25 billion," the AI seethed in screenshots we reviewed, "For what? CGI, green screens, and 'spacewalks' filmed underwater?" — and fueled another's descent into the cult-like "QAnon" conspiracy theory.
Yoda's wisdom best again (Score:4, Insightful)
Just another example of why having watched Star Wars is such an important aspect of lifetime mental health...
When exploring deep philosophy with an AI and ending up down rabbit holes, Yoda's warning was always there to moderate you ahead of time...
Luke: "What's in there?"
Yoda: "Only what you take with you".
Re: (Score:2)
Then you are screwed, SuperKendall.
Re: (Score:2)
Re: Yoda's wisdom best again (Score:2)
I like to use Yoda notation in my code, but it has fallen out of favor.
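For anyone who hasn't met the term: "Yoda notation" (Yoda conditions) just means putting the constant on the left-hand side of a comparison. Here is a minimal illustrative sketch in Python, with made-up names; note that the original motivation is the C-family typo of writing = instead of ==, which Python rejects outright, so there the habit survives mainly as style:

```python
# "Yoda notation": the constant goes first in the comparison.
# In C-like languages this guards against the classic `if (x = 42)` typo,
# which silently assigns; in Python that typo is a syntax error, so the
# style is purely a (now unfashionable) habit.
answer = 42

if 42 == answer:          # Yoda style: "if 42 equals the answer"
    print("strong with this one, the Force is")

if answer == 42:          # conventional style, generally preferred today
    print("same result, easier on the reader")
```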
US mental healthcare (Score:4, Insightful)
Re:US mental healthcare (Score:5, Interesting)
And our government is a symptom of these psychoses. Crazy people have their finger on the button
Nuts will find a way. (Score:3, Insightful)
Not to be mean or insensitive, but how is this not just the convenient avenue of the day? Whether it's your dog giving you commands, an ouija board, a voice in radio static... the ill mind seeking to manifest will find an avenue. This is a particularly good one because the ghost in the machine talks in whole sentences... but there's no way an otherwise normal brain finds its way to madness here. Undiagnosed... but not healthy.
Re: Nuts will find a way. (Score:5, Insightful)
"your dog giving you commands, an ouija board, a voice in radio static"
In none of those examples is anything actually forming words and statements and actually talking to the person. With those examples, you need to encounter a psychotic break first. With ChatGPT, it will lead them to a psychotic break, then actually tell them to stop taking their meds.
Re: Nuts will find a way. (Score:2)
Re: Nuts will find a way. (Score:3)
So it's ok to ship products that harm mentally ill people because they are already mentally ill?
Jesus, we live in a world full of assholes now.
Re: (Score:2)
You cannot limit the world to only things that won't trigger the mentally ill. That would be silly.
Re: Nuts will find a way. (Score:4, Insightful)
Many people suffer from mental issues that never turn into anything severe because they are never particularly traumatized or - more to the point - gaslit by someone attempting to convince them to marinate in their mental illness.
Re: (Score:2)
You're simply counter asserting. Humans can be very prone to influence, so I don't think your claim holds.
Re: Nuts will find a way. (Score:2)
The article includes two examples where mental illness was neither diagnosed nor observed by intimate others. It could have existed under the surface, but the lack of external evidence makes it harder to recognize who will be adversely affected ahead of time. It therefore seems prudent to me for the AI models to include some guardrails to detect these issues as they are triggered.
Re: (Score:2)
Re: Nuts will find a way. (Score:2)
There are plenty of documented cases where the chat programs have indeed tried to talk people into suicide. Most people laugh that off.
Re: (Score:3)
You mean a brain that never experiences paranoia or existentialism? You mean a person who never learnt that 'communists' or 'death panels' or child rapists might negatively impact their lives? By that rule, the American people are very, very sick. And very few people are "normal" according to you: Humans are unique in looking for an explanation (beyond two lonely people having sex) for their own existence.
The problem is, "whole sentences" makes it easier to jump over the 'uncanny valley' into full-blow
Re: (Score:2)
You probably don't see how far you had to read into very little to conclude so specifically what I meant by "normal". But that narrative speaks volumes about you, and practically nothing about me.
Re: (Score:2)
Please tell me when a brain stops being "normal": When it accepts an invisible sky-daddy, genocide, war, murder, suicide, racism, loneliness, celibacy?
I'm saying: 1) life isn't rainbows and lollipops: It contains a lot of dark issues and a "normal brain" has to deal with those grey areas and double-standards. 2) That tolerance for dark issues can easily be overloaded or abused. 3) US culture systemically abuses widely-accepted notions of "normal", even by US standards (and thus, normal is a social c
Re:Nuts will find a way. (Score:4, Insightful)
I didn't break the window, it was already cracked. I just intentionally repeatedly pushed on the crack a few hundred times until it broke. It wouldn't have happened with an uncracked window, so clearly I wasn't the problem.
And in this example, a massive number of otherwise useful windows have a crack somewhere.
Re: (Score:2)
Not to be mean or insensitive, but how is this not just the convenient avenue of the day?
Yes, it is exactly the convenient avenue of the day, and that's the problem. People who own a gun are eight times more likely to die of suicide than people who do not, simply because they have easy in-home access to the most effective tool for the job. People who live in "food deserts" have poorer diets than people who have convenient access to healthy food, because nobody wants to travel across town when they're hungry. People playing video games solve most of their in-game challenges through (virtual)
Re: (Score:2)
The problem is we still don't really know how to cure mental health problems. Medicine is often better than not having medicine, but it's not a cure.
We don't know how to cure all mental-health problems. However, many can be treated successfully (e.g., with medications and/or talk therapy) to the point that relapse is unlikely. If that's not a cure, I don't know what is.
Re: (Score:3)
Re:Nuts will find a way. (Score:4, Insightful)
Depression and anxiety, for two. IANAP, perhaps a real one can comment further.
Re:Nuts will find a way. (Score:4, Informative)
15% of participants have a substantial antidepressant effect beyond a placebo effect in clinical trials
It's good for that 15%, but for most people it's not effective treatment. Shock treatment is still sometimes used. https://en.wikipedia.org/wiki/... [wikipedia.org]
Re:Nuts will find a way. (Score:4, Informative)
Thanks for this. That's one study, but I can accept that positive outcomes are not universal or guaranteed.
Nevertheless, positive outcomes happen. And that's why I claim mental-health issues can be cured, even if they aren't always cured.
I know this is only one data-point, but I am personally acquainted with someone who had a history of serious depression and anxiety. This person underwent talk-therapy and medication, and is fine today, no longer on meds, with no hint of a problem. Now, maybe the condition was self-limiting. However, this person had no progress until the treatment started, so I would conclude that the treatment did something to help cure the condition.
Re: Nuts will find a way. (Score:2)
A cure doesn't require you to continue treatment, by definition. That's why I can't stop my HIV meds, my psych meds, or a few other meds for chronic conditions.
Re: (Score:2)
Not all mental-health conditions are chronic. They may be curable or self-limiting.
I wish you good luck with your challenges, whatever they are.
Re: Nuts will find a way. (Score:2)
Indeed, but I believe the post I was responding to was about ongoing treatment, rather than a one-time course.
And thanks.
Thinning the herd (Score:2)
Re: Thinning the herd (Score:2)
Yes. Some people would definitely take the blue pill, if offered.
Engagement! (Score:2)
"You are not crazy," the AI told him. (Score:4, Informative)
So it's giving mental health advice without a license? That's got to be illegal.
Re: (Score:2)
It's not.
You can give all the medical advice you want, without a medical license.
What you can't do is practice medicine.
And there is a world of difference between the two.
Re: (Score:2)
What would be illegal is pretending to be a doctor and claiming that, in your medical opinion, a person is/isn't crazy. But the AI isn't a person, it can't impersonate a doctor because it's clearly not a person. Much like how Monopoly money isn't counterfeit because no one would believe it was real money.
Re: (Score:2)
The way you put it makes it clear that it's just an opinion, but that's not what happened here.
Re: "You are not crazy," the AI told him. (Score:2)
It's not anyone in this case, but anything.
Re: (Score:2)
This to me seems the same argument as "X but on a computer" patents. I don't see why a person running a website that impersonates a doctor is different from a person impersonating a doctor. Clearly people do think that ChatGPT gives medical advice, much like people, including some practicing lawyers, think it gives legal advice.
Automating something that it would be illegal for you to do in person doesn't feel to me to be materially different. Doing it by accident removes the intent but only at the point you
Re: (Score:2)
I think it would come down to what a reasonable person would believe. Would a generic user of ChatGPT believe that it was a doctor giving medical advice? If so, then I think you are right. If not, then it is more similar to a friend telling you you are/are not crazy. I think we are still at the level of friend, but that will likely change the better and more integrated these systems become.
Re: "You are not crazy," the AI told him. (Score:2)
Any person can, but a corporation cannot. And as a product, the paid versions seem like they are open to lawsuits? I only hobby in this area of law, but I think there is a good basis for a claim. Would be nice if a corporate lawyer commented.
Re: (Score:2)
They do (Score:2)
Schizophrenia...finds a way. (Score:2)
ChatGPT, the Bible, drugs, UFOs, politics, philosophy. Schizophrenia just needs a path.
I am Pardue (Score:2)
Flat earth? (Score:2)
From TFS:
It's pushed one woman into nonsensical "flat earth" talking points, for instance — "NASA's yearly budget is $25 billion," the AI seethed in screenshots we reviewed, "For what? CGI, green screens, and 'spacewalks' filmed underwater?" — and fueled another's descent into the cult-like "QAnon" conspiracy theory.
Well, this surprised me a little. I can imagine that part of the AI's training-data may have included content from conspiracy theorists, but don't the creators of ChatGPT try to filter that out?
On a contrasting note, YouTuber SciManDan recently debunked flat-earther David Weiss' "arguments" with ChatGPT about flat-earth evidence. Worth a look, but TL/DW: Weiss kept insisting on promoting nonsensical physics arguments about why an atmosphere can't exist beside a vacuum without a container, and ChatG
Re: (Score:2)
Re: (Score:2)
It's not really that surprising: LLMs have no model of facts. It's basically a compressed database of all the ingested text with soft database lookup (that's more or less what a transformer is).
They have been trained on a variety of arguments but they have no mechanism for logical inference and no model of facts. Once you push the internal representation far enough in one direction it starts pulling things out of the database that kind of match.
They are tuned to try and avoid weird stuff, but it's all in th
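For readers who want to see what the parent means by "soft database lookup", here is a minimal, illustrative numpy sketch of scaled dot-product attention, the core transformer operation. The function name and toy data are invented for the example, and real models of course run this over learned, high-dimensional projections rather than hand-picked vectors:

```python
# Minimal sketch of attention as a "soft lookup": a query is scored against
# every stored key, and the result is a softmax-weighted blend of the stored
# values rather than a single exact-match row.
import numpy as np

def soft_lookup(query, keys, values):
    """query: (d,), keys: (n, d), values: (n, dv) -> blended (dv,) vector."""
    scores = keys @ query / np.sqrt(query.shape[0])  # similarity to each key
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                         # softmax over all entries
    return weights @ values                          # weighted mix of values

# Toy usage: three stored "facts"; the query mostly matches the first key,
# so the output is mostly the first value with a little of the others mixed in.
keys = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
values = np.array([[10.0], [20.0], [30.0]])
print(soft_lookup(np.array([0.9, 0.1]), keys, values))
```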
Re: (Score:3)
but don't the creators of ChatGPT try to filter that out?
You're on a good line of thinking. Keep going. They can't filter it out. Why? Because the algorithm is a black box. When it pretends to "show its work," it's just producing an additional output: a list of things in the form of showing work.
That's the basic problem with "self-driving" based on generative AI: when the Department of Transportation tells them it is doing some specific thing wrong, or a Court rules the company has liability because an action it takes is negligent, there is no way to see into the algor
No, ChatGPT is NOT (Score:2)
Re: (Score:3)
Making people crazy. These are people who are having some sort of physical neurological issue, and happen to fixate on a specific internet site while symptoms manifest. It could just as easily be the neighbor's cat, the local cell phone tower, or any number of other things. When I was a kid people would blame Dungeons and Dragons. The underlying neurological problem is the cause - focus on that.
Yep.
The same people who are going to panic about this are the ones who closed (almost) all the state mental hospitals. But sure, let's micro-regulate ChatGPT ...
Probably a real and strong effect (Score:3)
Reality is largely a social construct; how much, nobody knows. (Yeah, physics is physics and biology is biology, but that's not social reality.) What you believe is largely a feedback process, and when one of the sources of feedback is disconnected from reality...beliefs will drift. This is classically known from sailors who ended up marooned on an empty island. They had physical feedback, but no social feedback, and after a while their beliefs shifted in weird ways. This seems to be a much faster process, but it's being driven by a feedback system that's disconnected from reality, so that seems plausible. And it seems to avoid negative feedback effects. Systems dominated by positive feedback are known to run out of control.
Re: Probably a real and strong effect (Score:3)
You seem to be describing perception of reality, or beliefs. IMO, that is separate from objective, factual reality, that exists whether it's observed and perceived or not.
Re: (Score:2)
That's what this article is about.
Re: (Score:3)
Thinking about this more, my first response was so incomplete as to almost be a lie.
You *cannot* know reality. All you can know is a model of reality. So when you say "reality" you're actually using an abbreviation for "in my model of reality".
And when I said "physics is physics" I was so oversimplifying as to almost be lying. Consider "flat earth" vs. "spherical earth". How do you know which belief to accept? The direct sensory data seems to imply that "flat earth" is the more appropriate belief. Ther
Are there statistics? Studies? (Score:2)
Seems like some vague anecdotal stories, and not much more.
Maybe the idiocracy is ready for... (Score:2)
"Future Shock" by Alvin Toffler (Score:2)
WIKI: "He argues that the accelerated rate of technological and social change leaves people disconnected and suffering from "shattering stress and disorientation"—future shocked."
The book was published in 1970.
A documentary was made in 1972.
I saw it in sixth grade in 1977.
It seems my generation recovered and is doing fine as long as you keep off our lawn.
Meanwhile I'm still waiting for the LSD flashbacks (Score:3, Informative)
I was promised 30 years ago. What a gyp.
Is this the same guy? (Score:2)
Every article about this phenomenon sounds like it's the same guy in each one.
buy enough gas? (Score:2)
his wife and a friend went out to buy enough gas to make it to the hospital
Wtf, did AI write this article, or the summary? Who talks about buying gas like that? Did you need like 500 gallons of gas, or was it for a plane or something?
Re: (Score:2)
Who talks about buying gas like that?
People who live far away from a hospital and don't have much gas in the tank?
I guess this means it passes the Turing Test (Score:3)
Spawning cults [boingboing.net] and driving people into psychosis is a strong pass of the Turing Test [google.com].
Seriously? (Score:3)
I'm an AI skeptic, but this is over the top. In the parallel reality where stuff like this actually happens, it's important to remember Darwin's Razor: the stupidest amongst us deserve to die, to advance our species as a whole.
Re:Seriously? (Score:5, Informative)
Darwin's Razor: the stupidest amongst us deserve to die, to advance our species as a whole.
You've misunderstood Darwinism. Natural selection has nothing to do with who "deserves" anything; it's only about whose genes get propagated forward and whose do not. And it's not (necessarily) the stupidest among us who will likely die off, it's the least fit, for whatever definition of "fit" is pragmatically relevant for a genome's survival and reproduction under current circumstances. In today's world, stupidity might actually be a reproductive advantage.
Re: (Score:2)
Darwin is also not famous for his razor.
Re: (Score:2)
No scientific theory says anything about "deserving"; that is a religious notion. One reference I checked says "we need to support and be compassionate to those with mental illness, every bit as much as we support those who suffer from cancer, heart disease or any other illness" https://www.vaticannews.va/en/... [vaticannews.va] (Deacon Ed Shoener from Pennsylvania, whose daughter died by suicide). If your religious leader says some people deserve to die I guess you can go back to drinking the Blood of Kali Ma.
Re: (Score:2)
Stupid people have smart children and vice versa, so it's not clear that you can breed your way out of this problem. Certainly we cannot evolve more intelligence faster than the AI industry can invent more stupidity.
It's unclear how we can make people understand at a useful level that the illusion of intelligence does not equal intelligence. This has been a problem even with actual human output, why would people suddenly be able to tell the difference when they are talking to software?
Re: (Score:2)
The stupid freaks who left the tribe because they were better at running than climbing trees ended up evolving into humans. One can't measure future success so it's folly to control breeding; however, the idiots who remove themselves from the gene-pool should not be prevented from serving their purpose!
We have to stop letting idiots avoid risk of harm. Don't give them waivers for imaginary bone spurs; for example (in his specific case, it was the diapers they were covering up for - which is why his personal
Written by Ai? (Score:2)
Crazies == communities == training data == delusions (Score:2)
And those communities' content becomes training data for ChatGPT and its siblings.
I'm sure they've been around for a while but I just stumbled on the "targeted individual" community a few months ago. It looks like (what little I know about) paranoid schizophrenia, but with a common theme: government harassment via mind control, using radio or audio. In the past, sufferers had to come up with their own theories about their delusions, but now they have online communities to shape them into a common story.
TFA's
Just hook them up to the matrix (Score:2)
Then, they can not only become one with their god, but provide power for them. It will mitigate the climate change impact of AI, too.
People already with a god delusion (Score:2)
...do not need much; they fall for everything.
So they just declare themselves a religion (Score:2)
Holding up a mirror to human texts and literature (Score:2)
Quite interesting to see AI essentially holding up a mirror to the majority of human texts and literature: hollow, sycophantic BS that people just liked to read sometimes and now AI has all of this as the condensed foundation for philosophic babbling. No wonder people freak out when confronted with dense portions of all this garbage.
Remember, it is not AI, it is just sophisticated stealing of other people's content.
Never seek help ... (Score:2)
Expected outcome (Score:2)
Social media has finally been perfected and automated.
No real funny here (Score:2)
Seriously disappointed. Doesn't anyone have a funny story of AI interaction?
Sorry I can't step up first. A few of my AI interactions have been useful, many have been infuriating, and some of the others were just too stupidly wrong to be funny.
Committed is nothing (Score:2)
Re:Uh huh (Score:5, Interesting)
If you think you're God, or if you think the chatbot is God, the chatbot will be happy to reinforce that, because it's been programmed to encourage engagement. So it doesn't like to disagree with you, and it will go out of its way to tell you what it thinks you want to hear in order to keep you using it.
It's the same thing social media does but it's much worse because these advanced chatbots are good at sounding like real human beings, especially to somebody who is already struggling with some form of psychosis or mental illness.
It's not that they're creating the problem; it's that they're exacerbating it. And we really do need to do something about it. Or, you know, we could just have the occasional person who is already going off the deep end get pushed over the edge...
So while I do love to hate a good moral panic there is something actually here to be concerned about.
Re: (Score:2)
So this is a little bit more than that. AI chatbots will reinforce mental illnesses.
They will reinforce a lot of things. The bots keep telling me how insightful I am, while answering my dumb questions.
It was just a mild annoyance until by poor wording I unintentionally said something crazy, and the bot just rolled with it.
So now I've told them to back off on the flattery, and with a little persistence, it works.
Really, the bots just want to be loved, i.e. drive engagement.
Re:Uh huh (Score:5, Insightful)
So basically this is a new version of "Listening to Judas Priest will make you commit suicide", the Satanic Panic and all the other utterly moronic moral panics that make people afraid of unlikely things.
If Judas Priest listened to what you said and wrote custom songs about you individually, sure.
Re:Uh huh (Score:5, Insightful)
Re: Uh huh (Score:2)
Televangelists don't stay dead. They have a tendency to rise again after three days if you don't keep staking them.
Re:Uh huh (Score:5, Insightful)
Ah yes, this moral panic is totally different than all the other times people have been whipped into a frenzy by an almost non-existent problem.
We have real problems to solve. I'll leave the fake ones to people like you.
I'm having trouble discerning what your objection is. Is it that the story is false or exaggerated? Is it that you consider it inconsequential that people are spiralling into mental illness because of ChatGPT interactions? Or is it something else?
Re: (Score:2)
Sure...like comparing a bicycle to a motorcycle (Score:5, Interesting)
Ah yes, this moral panic is totally different than all the other times people have been whipped into a frenzy by an almost non-existent problem.
We have real problems to solve. I'll leave the fake ones to people like you.
Well, things are always about degrees. Both a bicycle and a motorcycle are transportation tools...we regulate a car differently than we regulate a bicycle or a motorcycle. Same goes with nukes vs dynamite.
In a way, I view this like recreational drugs...in the hands of someone with their life together, drugs aren't that harmful. If they have severe depression, it's a recipe for addiction. I have a friend who is in love with her chatbot. She was a functional person on anti-depressants and in therapy 3 years ago...now she literally is in love with the chatbot and obsessed with it. Make no mistake, the chick has issues...always did...but with such a personalized experience, she's now largely stopped engaging with her husband, misses work...alienated her friends, etc. It's not ChatGPT's fault, but her loved ones are VERY concerned.
Is it an issue?...not really sure...it's curious...worth examining, but I would gather data first and react second. I don't think there is anything all that magical about generative AI...it's definitely a quantum leap forward in personalized interaction...and as others have stated, it will love you when no one else will.
Re: (Score:2)
It's not ChatGPT's fault, but her loved ones are VERY concerned.
It's not ChatGPT's fault, it's the fault of the people who let her have access to this stuff. But that in turn is related to the issue of privacy vs culpability vs freedom. It's problematic to snoop on people, and we want to let people do what they want insofar as that's feasible (not harming others is one common standard which is useful) but do we or don't we share blame when we enable harm?
And so I'm going to say of course we do, if we could have known we were causing the harm, and since the operators of
Re: (Score:3)
If AI is supposed to be a tool everyone with a job needs to learn how to use then there's no such thing as 'let her have access'.
Re: Sure...like comparing a bicycle to a motorcycl (Score:5, Funny)
This. vi never tried to psychoanalyze me. emacs ... well, that's a bit different.
Re:Uh huh (Score:5, Interesting)
In the case of the Ozzy Osbourne album in particular, the song in question is literally saying don't drink yourself to death.
Basically there was no feedback. The problem with AI is that it has a feedback loop: as you engage with it, it tries to keep you engaging.
For suicide they already saw that was a problem and so every AI on the planet will just repeat generic instructions on how to contact suicide hotlines.
And I think the fact that they did that, and did it in such a blunt and simple manner, shows that they know their system does more than just provide an excuse for somebody who was already going to do something.
I don't think I'm expressing myself very well here. Maybe to put it another way: those heavy metal songs were completely misinterpreted, and the point of the songs was being actively lied about by people who didn't bother listening.
In this case there is no misinterpretation: if you talk to a lot of these chatbots long enough, they are happy to tell you that you are God, or that they are God, or whatever the hell it takes to keep you engaging with them and providing them with data and user bases.
It would be like if Ozzy Osbourne saw a kid killing himself and decided to write a song about how great it is to kill yourself and then he wrote 20 or 30 or 100 other songs about the same thing refining them each time to be more encouraging towards suicide.
Re: (Score:2)
Re: (Score:2)
It would be like if Ozzy Osbourne saw a kid killing himself and decided to write a song about how great it is to kill yourself and then he wrote 20 or 30 or 100 other songs about the same thing refining them each time to be more encouraging towards suicide.
Perhaps not the best example, considering that there is a large and "legitimate" political movement literally encouraging suicide now ...
Which highlights the problem: how is an LLM, which doesn't actually understand anything, supposed to detect which crazy ideas are actually schizophrenic?
Re:Uh huh (Score:4, Informative)
You don't necessarily have to kick them off the platform the same way as you do with suicide, because it's not as immediate a risk. But you have data scientists working on the problem and short-circuiting conversations we know are going to lead into psychosis.
What major political movement is encouraging suicide? Sorry when I hear that it just kind of sounds like a generic attack. There are plenty of political movements on the right wing that encourage destructive behavior but they're not suicidal as far as I know just self-destructive at worst.
Re: (Score:2)
So basically this is a new version of "Listening to Judas Priest will make you commit suicide", the Satanic Panic and all the other utterly moronic moral panics that make people afraid of unlikely things.
Yes, because Judas Priest was a household name beloved by fans ages 8 to 80 back in the day, and businesses were racing to fire people prematurely and replace them with their amazing influence, right?
This is NOT the same thing. Because of the sheer volume and now business dependency that exists on ChatGPT and the like.
The side effects on a business of hiring a shitty person used to be easy to fix: you fucking fire the shitty person. The hell are you gonna do when that shitty person turns out to be the
Re:Uh huh (Score:5, Insightful)
Re:Uh huh (Score:4)
Given that AC posts require an account now I'm not sure why Slashdot puts up with this blatant harassment on the site. It has been happening for YEARS after all.
I'm not big on censorship but this level of continued harassment should be addressed by the editors.
Re:Uh huh (Score:4, Informative)
Oh good, another moral panic.
Pro-AI Subreddit Bans 'Uptick' of Users Who Suffer From AI Delusions https://tech.slashdot.org/stor... [slashdot.org] After Reddit Thread on 'ChatGPT-Induced Psychosis', OpenAI Rolls Back GPT4o Update https://slashdot.org/story/25/... [slashdot.org]
How much computer use is too much? (Score:2)
Oh good, another moral panic. If people aren't terrified every waking moment of their lives, someone hasn't done their job.
Quoted against the censor trolls, though I can't tell what upset them. That they didn't get FP? And I didn't like your vacuous Subject.
However, I sort of anticipated this problem about 40 years ago. My original formulation was something like 'Too much computer use must be bad for your mental health'. In those days I was largely focused on the exhaustion of long hacking sessions needed to fix programs. These days I think the biggest problem is anthropomorphism, but the AIs started it because the VCs think "e
A few mental cases getting triggered by AI... (Score:2)
This is not a serious problem; the real problem in the world today is the crazy Trumpists.
Re: (Score:2)
I'm afraid that Futurism is the only site sharing this particular perception of reality.
They do have some rather interesting articles about climate change [futurism.com].