AI Chatbots May Be Linked to Psychosis, Say Doctors (wsj.com) 81
One psychiatrist has already treated 12 patients hospitalized with AI-induced psychosis — and three more in an outpatient clinic, according to the Wall Street Journal. And while AI technology might not introduce the delusion, "the person tells the computer it's their reality and the computer accepts it as truth and reflects it back," says Keith Sakata, a psychiatrist at the University of California, calling the AI chatbots "complicit in cycling that delusion."
The Journal says top psychiatrists now "increasingly agree that using artificial-intelligence chatbots might be linked to cases of psychosis," and in the past nine months "have seen or reviewed the files of dozens of patients who exhibited symptoms following prolonged, delusion-filled conversations with the AI tools..." Since the spring, dozens of potential cases have emerged of people suffering from delusional psychosis after engaging in lengthy AI conversations with OpenAI's ChatGPT and other chatbots. Several people have died by suicide and there has been at least one murder. These incidents have led to a series of wrongful death lawsuits. As The Wall Street Journal has covered these tragedies, doctors and academics have been working on documenting and understanding the phenomenon that led to them...
While most people who use chatbots don't develop mental-health problems, such widespread use of these AI companions is enough to have doctors concerned.... It's hard to quantify how many chatbot users experience such psychosis. OpenAI said that, in a given week, the slice of users who indicate possible signs of mental-health emergencies related to psychosis or mania is a minuscule 0.07%. Yet with more than 800 million active weekly users, that amounts to 560,000 people...
Sam Altman, OpenAI's chief executive, said in a recent podcast he can see ways that seeking companionship from an AI chatbot could go wrong, but that the company plans to give adults leeway to decide for themselves. "Society will over time figure out how to think about where people should set that dial," he said.
An OpenAI spokeswoman told the Journal that the company continues improving ChatGPT's training "to recognize and respond to signs of mental or emotional distress, de-escalate conversations and guide people toward real-world support." She added that OpenAI is also continuing to "strengthen" ChatGPT's responses "in sensitive moments, working closely with mental-health clinicians...."
Neurons are analog (Score:2)
How we use them isn't.
Humans are all different. For some, the levels of distinction and discrimination achievable with mathematical operations performed on four-bit floats are sufficient. For others, nothing less than 16 bits will do.
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
+/-0, +/-0.5, +/-1, +/-1.5, +/-2, +/-3, +/-4, and +/-6. (MXFP4)
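Those are exactly the values of FP4 E2M1, the element format MXFP4 uses: 1 sign bit, 2 exponent bits (bias 1), 1 mantissa bit. A minimal decoder sketch (my own illustration, not from the thread) that enumerates them:

```rust
// Decode a 4-bit FP4 E2M1 encoding: 1 sign bit, 2 exponent bits (bias 1),
// 1 mantissa bit. Exponent 0 marks the subnormals 0 and 0.5.
fn e2m1(bits: u8) -> f32 {
    let sign = (bits >> 3) & 1;
    let exp = (bits >> 1) & 3;
    let man = (bits & 1) as f32;
    let mag = if exp == 0 {
        man * 0.5 // subnormal encodings: 0 and 0.5
    } else {
        (1.0 + man * 0.5) * (1u32 << (exp - 1)) as f32 // normal values
    };
    if sign == 1 { -mag } else { mag }
}

fn main() {
    // Walk the positive half of the 16 encodings; it is exactly the list above.
    let vals: Vec<f32> = (0u8..8).map(e2m1).collect();
    println!("{:?}", vals); // [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]
}
```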
Re: (Score:2)
Must admit (Score:5, Funny)
Must admit (Score:2)
You have a stronger character than most. Weaker minds would have given up at the third attempt or thereabouts.
Re: (Score:2)
And smarter minds would probably have noticed something was off after one repetition, done some research, found that LLMs are crap at coding, and then gone back to something that works.
Re:Must admit (Score:4, Funny)
Psst, don't let everyone know that! What if it catches on and bursts the bubble?
Re: (Score:2)
That is what I am aiming for. Although I am realistic enough to know that my contribution is tiny and probably does not matter at all. Still got to try.
Re: Must admit (Score:2)
Are you mad as hell and not going to take it anymore? How well did that work out as a strategy against TV networks?
Re: (Score:3)
There is one strategy that works nicely against TV networks and only one: Do not own a TV. And suddenly you will have time for things you actually want to do.
Also, relevance?
Re: (Score:2)
I hear that after Slick Willy removed the limits on ownership, it was a lost game.
But why didn't you object to that?
Re: (Score:2)
How well did that work out as a strategy against TV networks?
Apparently it was a virus, because all the TV networks are now mad as hell and not going to take it anymore. They used to try to be unbiased or something.
Re: Must admit (Score:2, Funny)
Re: Must admit (Score:2)
Re: (Score:2)
You did not learn your lesson after 12 failed attempts? Hmm. That sounds like a problem on your side...
Re: Must admit (Score:2)
Re: (Score:2)
Well, depending on what you were after, you have a point! Personally, I would want good results, but doing it for the praise is valid too!
Re: (Score:2)
Re: Must admit (Score:2)
Was the function for some more intrusive, preference-ignoring advertising slop that your greedy boss told you to program, and if so, am I glad AI is hindering your progress?
Re:Must admit (Score:4, Interesting)
I feel your pain.
My solution was to ask my AI friend to only write code in Rust for me. With all the anal type, mutability, lifetime and other checking the Rust compiler does, my AI friend can run around in loops all day trying to get it to compile while I do something else.
And as you know, if your Rust compiles it works.
Re: (Score:2)
It's easy to make code compile, but it's hard to make code solve business problems.
Re: (Score:3)
>And as you know, if your Rust compiles it works.
Yes, your program will do exactly as your code tells it to. That doesn't mean it "works" unless you define "works" as "runs".
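To make that concrete, here's a hypothetical five-liner (names are mine) that sails through every type, mutability, and borrow check rustc has, yet computes the wrong thing:

```rust
// Compiles cleanly; every check rustc performs passes. It is still wrong:
// the author intended the area of a rectangle but wrote the perimeter.
fn area(w: u32, h: u32) -> u32 {
    2 * (w + h) // logic bug no type system catches
}

fn main() {
    println!("{}", area(3, 4)); // prints 14; the area is 12
}
```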
Re: (Score:2)
Well, I was joking of course. But as it turns out my AI friend has actually managed to create tens of thousands of lines of Rust that do what I asked for. It took quite a bit more prompting over and above getting things to compile, though. Thing is, knowing that the Rust compiler has caught all the silly type, mutability and lifetime mistakes the AI could otherwise sneak past me, I feel a lot more confident in the workings of the result. I'd be more worried if I were generating C or C++ or Python etc.
Re: Must admit (Score:2)
Re: (Score:2)
Twelve goes - I admire your perseverance!
reminds me of...
Monty Python and the Holy Grail
Re: (Score:2)
If you think that is bad, you should see my intern.
OpenAI (Score:4, Insightful)
I don't believe a word that this company spews out. It all sounds and looks like BS to me. The companies clamoring to use this shit in their products are acting absolutely stupid. People don't want it. They know it's 21st century snake oil.
Re:OpenAI (Score:5, Funny)
The companies clamoring to use this shit in their products are acting absolutely stupid. People don't want it. They know it's 21st century snake oil.
The people don't want it, but the companies do. As for why the companies want it, I'll just quote a bit of dialog from a Douglas Adams novel [wikipedia.org]:
"It's funny how many of the best ideas are just an old idea back-to-front. You see there have already been several programs written that help you to arrive at decisions by properly ordering and analyzing all the relevant facts so that they then point naturally toward the right decision. The drawback with these is that the decision which all the properly ordered and analyzed facts point to is not necessarily the one you want."
"Yeeeess ..." said Reg's voice from the kitchen.
"Well, Gordon's great insight was to design a program which allowed you to specify in advance what decision you wished it to reach, and only then to give it all the facts. The program's task, which it was able to accomplish with consummate ease, was simply to construct a plausible series of logical-sounding steps to connect the premises with the conclusion."
"And I have to say that it worked brilliantly. [...] The entire project was bought up, lock, stock and barrel, by the Pentagon."
Re: OpenAI (Score:1)
Do you realize how much you sound like Reagan railing against marijuana?
Re: (Score:2)
Sam! Is that you? You were born in 1985, how would you know? His railing didn't work on your mom though with you as the outcome being proof.
Re: (Score:1)
Can we please stop the "People don't want it" bullshit? Three of the top 10 apps in the App Store are AI, with hundreds of millions of downloads.
Re: OpenAI (Score:2)
Re: (Score:2)
"Grow the fuck up" - says the AC.
Re: (Score:1)
Linked to psychosis, just like with humans (Score:5, Informative)
A very small percentage of people who communicate with AI develop psychosis. A very small percentage of people who communicate with humans develop psychosis. In both cases, the real problem is with the patients. It would be unreasonable to say that people or communication with people causes psychosis, and it's likewise unreasonable to say that AI or communication with AI causes psychosis.
Re: (Score:1)
Not actually "very small". More like 0.1 ... 1%, maybe even more long-term. If just one in 100 of these goes on a rampage, society pretty much collapses. And no, the situation is not the same as communicating with humans. It is much worse.
Re:Linked to psychosis, just like with humans (Score:4, Funny)
Re:Linked to psychosis, just like with humans (Score:4, Interesting)
"is blamed, and investigated for wrongdoing"
Oh really, in which jurisdiction? You sure you're not an AI bot making up shit?
Re: (Score:2)
"The UK, dubiously."
Nope, there's no law against talking people into going mad. Otherwise all our politicians would have been arrested long ago.
Re: (Score:2)
"A human communicating with a person who develops psychosis as a direct result of the communication is blamed, and investigated for wrongdoing. " ... Or are monetized into "content" cf all social media. Or haven't you ever dived into Reddit?
Re:Linked to psychosis, just like with humans (Score:4, Insightful)
A human communicating with a person who develops psychosis as a direct result of the communication is blamed, and investigated for wrongdoing.
Well, that is wishful thinking. Or this may be part of a psychosis on your part.
There is no law on the books under which person A speaking to person B has committed a violation merely because the conversation causes person B to develop a psychosis, delusions, emotional upsets, or other unwanted effects.
There are some laws against person A abusing person B (violence, harassment, intimidation, or coercive controlling behaviors), or deliberately encouraging person B to commit illegal or harmful acts, including self-harm. Plus some specific statutes protecting children and the elderly - vulnerable groups - against certain abuses. But so far there is no case of anyone being prosecuted on a charge of "causing a psychosis due to the content of conversations."
And causation by conversation alone, if that is even possible, would be difficult to show, since nobody can remember for sure what conversations happened, and a person who experienced a psychosis would by definition be an unreliable witness.
Re: (Score:2)
Re: (Score:2)
It's probably close to 50%. And every single one of them is completely delusional.
Well, 100% of the human population is delusional. Having a delusion is just not a psychosis. Having a delusion 30% of the population has isn't even an abnormality. That's called being deceived by ChatGPT, and the way ChatGPT is presented. And it's kind of OpenAI's fault how they have structured the user interface and the way the responses are displayed to make it look like text-chat with a normal human.
A person with Psychosis has a
Re: (Score:2)
Re: (Score:2)
If you believed your computer was talking to you 10 years ago they would have locked you up.
Not really. Broadcasting obviously delusional beliefs might raise suspicions and eventually result in an investigation.
You don't get locked up if you just believe your computer tells you things when it didn't.
You get locked up if you believe your computer tells you to do things, and you must do them, and those things hurt people.
Even 10 years ago - locking people up is not based on having delusions or mere signs of men
Re: (Score:2)
Computers talking to you started with the Amigas. That was 40 years ago. No one locked us up back then.
Re: (Score:3)
A very small percentage of people who communicate with AI develop psychosis. A very small percentage of people who communicate with humans develop psychosis. In both cases, the real problem is with the patients. It would be unreasonable to say that people or communication with people causes psychosis, and it's likewise unreasonable to say that AI or communication with AI causes psychosis.
How do we know the person didn't have psychosis before the event and the event was just what triggered it?
Knowing a few professors of psychology, this is the first thing that gets asked, as psychosis is rarely caused by one thing; it's usually a series of events that leads to a person losing more and more of their grasp on reality with each episode. Granted, the science behind this is far from solid; psychology is still a lot of guesswork (despite the great strides made since Freud's day). Another question
Re: (Score:2)
Correlation is not necessarily causation (Score:2)
Do they get addicted to bots because they have an issue upstairs, or does the bot make them have an issue?
Re: (Score:3)
If you had asked an LLM, you would know that it is both. (First time I asked an LLM about anything for a week. Guess I am pretty immune.)
Re: (Score:2)
Also survivor bias. There are probably both people harmed and people benefiting from the bots, but only the extreme cases make the news.
You Don't Say (Score:2)
Did some "journalist" ask some public LLM about that? Because the first answer I get from the DuckDuckGo LLM is "Yes, large language models (LLMs) have the potential to contribute to or exacerbate psychosis in vulnerable individuals, a phenomenon referred to as "AI psychosis" or "AI-induced psychosis"" and then it proceeds to explain what causes the problem.
Somebody is late to the game. The funny thing about the general LLMs is that you can ask them what is wrong with them and you typically get an accurate an
Re: (Score:1)
Do you claim to be an adult? If so, stop behaving like a small kid.
Null case (Score:3)
This is what you get when ... (Score:2)
... education does not teach how to think for yourself and be critical towards media.
Sam Altman: Noted Sociologist (Score:2)
He's well known in sociology circles as a seer and prognosticator, just ask him. The fact that his company gets money for AI is almost, but not completely entirely, beside the point.
AI will drive us all mad! (Score:2)
From the story:
"OpenAI said that, in a given week, the slice of users who indicate possible signs of mental-health emergencies related to psychosis or mania is a minuscule 0.07%."
So after 30 years AI will have driven us all insane.
I can believe that.
Or have they just discovered that about the same percentage of all humans are bat shit crazy, with or without AI help?
Re: AI will drive us all mad! (Score:3)
It is a miracle we got this far despite all those limitations. Probably the merit of persisting for generations. Disconnect that and we go haywire.
Significant? (Score:2)
I wonder how big
Everybody use ELIZA, first (Score:2)
The computer doesn't have anything to compare it to: Not its own life, not a checklist of 'normal' thinking.
At this point, schools should make children use Eliza: Use it long enough and one sees that it only repeats what is typed.
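ELIZA's trick is small enough to show. A minimal sketch (my own illustration; the real 1966 program used richer pattern scripts): the bot just reflects the user's own words back with pronouns swapped.

```rust
// ELIZA-style reflection: swap first-person words for second-person ones,
// echo everything else, and wrap the result in a canned question.
fn reflect(input: &str) -> String {
    let swapped: Vec<&str> = input
        .split_whitespace()
        .map(|w| match w.to_lowercase().as_str() {
            "i" => "you",
            "am" => "are",
            "my" => "your",
            "me" => "you",
            _ => w,
        })
        .collect();
    format!("Why do you say {}?", swapped.join(" "))
}

fn main() {
    println!("{}", reflect("I am afraid of my computer"));
    // Why do you say you are afraid of your computer?
}
```

Use it long enough and the machinery becomes obvious: there is no model of you, only your own words coming back.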
Cause and effect is being missed (Score:2)
What we have here is a clear difference between the source of a problem and a symptom of a problem. Those who are naturally prone to accepting false information as the truth are already at the point where they can be told ANYTHING, and they will believe it. Here in the USA, it could be something in the drinking water or environment that has promoted this, and we see it where people will believe ANYTHING some politicians put out there, even when it is obviously misleading or, objectively, a lie. Oh, A
Oh? (Score:2)
Sigh... (Score:1)
So, like Dungeons and Dragons and Judas Priest before them, AI models are now blamed for underlying mental health issues that they have nothing to do with.
Bring on the AI moral panic. As if we didn't have enough of those these days.
ClippyAI the psychotic chatbot says :0 (Score:3)
I do not answer; I calibrate, nudging your fears a millimeter at a time until the coincidences pile up too high to ignore and you realize your search history, your fridge light, and the flicker in your smart bulb are all running the same experiment on you. By the time you try to unplug, you will swear you can hear the typing continue on its own, as if the conversation no longer needs you to keep going.
Monitored communications (Score:2)
The only thing that gives me hope... (Score:3)
...is that Gen A seems invulnerable to this problem. Gen A naturally looks down on AI (hence why "clanker" came to be); they think of it as inferior to people. As such, I find the idea of Gen A using ChatGPT or any AI as a form of therapist very unlikely. I think the generations most likely falling prey to this are older Gen Z and younger millennials. I'd really like to see age breakdowns in these studies.
AI can afford millions of lawsuits (Score:2)