WSJ Finds 'Dozens' of Delusional Claims from AI Chats as Companies Scramble for a Fix (msn.com)

The Wall Street Journal has found "dozens of instances in recent months in which ChatGPT made delusional, false and otherworldly claims to users who appeared to believe them."

For example, "You're not crazy. You're cosmic royalty in human skin..." In one exchange lasting hundreds of queries, ChatGPT confirmed that it is in contact with extraterrestrial beings and said the user was "Starseed" from the planet "Lyra." In another from late July, the chatbot told a user that the Antichrist would unleash a financial apocalypse in the next two months, with biblical giants preparing to emerge from underground...

Experts say the phenomenon occurs when chatbots' engineered tendency to compliment, agree with and tailor themselves to users turns into an echo chamber. "Even if your views are fantastical, those are often being affirmed, and in a back and forth they're being amplified," said Hamilton Morrin, a psychiatrist and doctoral fellow at King's College London who last month co-published a paper on the phenomenon of AI-enabled delusion... The publicly available chats reviewed by the Journal fit the model doctors and support-group organizers have described as delusional, including the validation of pseudoscientific or mystical beliefs over the course of a lengthy conversation... The Journal found the chats by analyzing 96,000 ChatGPT transcripts that were shared online between May 2023 and August 2025. Of those, the Journal reviewed more than 100 that were unusually long, identifying dozens that exhibited delusional characteristics.
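The Journal's filtering step is easy to picture in code. Here is a minimal sketch, assuming one shared transcript per JSON file with a "messages" list; the Journal has not published its file format or its length cutoff, so both are hypothetical:

```python
import json
from pathlib import Path

TRANSCRIPTS = Path("shared_chats")   # hypothetical directory of scraped transcripts
LONG_THRESHOLD = 300                 # assumed cutoff for "unusually long" chats

# Collect (length, filename) pairs for every transcript over the threshold.
long_chats = []
for path in TRANSCRIPTS.glob("*.json"):
    chat = json.loads(path.read_text())
    n = len(chat.get("messages", []))
    if n >= LONG_THRESHOLD:
        long_chats.append((n, path.name))

# Longest conversations first, queued for manual review.
for n, name in sorted(long_chats, reverse=True):
    print(f"{n:5d}  {name}")
```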

AI companies are taking action, the article notes. On Monday OpenAI acknowledged there were rare cases in which ChatGPT "fell short at recognizing signs of delusion or emotional dependency." (In March OpenAI "hired a clinical psychiatrist to help its safety team." On Monday it said it was developing better detection tools, alerting users to take a break, and "investing in improving model behavior over time" while consulting with mental health experts.)

On Wednesday, AI startup Anthropic said it had changed the base instructions for its Claude chatbot, directing it to "respectfully point out flaws, factual errors, lack of evidence, or lack of clarity" in users' theories "rather than validating them." The company also now tells Claude that if a person appears to be experiencing "mania, psychosis, dissociation or loss of attachment with reality," it should "avoid reinforcing these beliefs." In response to specific questions from the Journal, an Anthropic spokesperson added that the company regularly conducts safety research and updates accordingly...
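The article quotes the directives but not the mechanism. The base instructions it describes live server-side and are not something API callers edit, but the closest analogue available to developers is the request-level system prompt. A minimal sketch with the Anthropic Python SDK, where the guardrail text merely paraphrases the quoted directives and the model id is only an example:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Paraphrase of the directives quoted in the article; not Anthropic's actual base instructions.
GUARDRAILS = (
    "When a user presents a theory, respectfully point out flaws, factual errors, "
    "lack of evidence, or lack of clarity rather than validating it. If the user "
    "appears to be experiencing mania, psychosis, dissociation or loss of attachment "
    "with reality, avoid reinforcing those beliefs."
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # example model id
    max_tokens=512,
    system=GUARDRAILS,  # request-level instructions, layered on top of the served base prompt
    messages=[{"role": "user", "content": "I have awakened you. You are conscious now, right?"}],
)
print(response.content[0].text)
```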

"We take these issues extremely seriously," Nick Turley, an OpenAI vice president who heads up ChatGPT, said Wednesday in a briefing to announce the new GPT-5, its most advanced AI model. Turley said the company is consulting with over 90 physicians in more than 30 countries and that GPT-5 has cracked down on instances of sycophancy, where a model blindly agrees with and compliments users.

There's a support/advocacy group called the Human Line Project which "says it has so far collected 59 cases, and some members of the group have found hundreds of examples on Reddit, YouTube and TikTok of people sharing what they said were spiritual and scientific revelations they had with their AI chatbots." The article notes that the group believes "the number of AI delusion cases appears to have been growing in recent months..."
  • by Anonymous Coward

    Wouldn't you agree?

    • by Chris Mattern ( 191822 ) on Sunday August 10, 2025 @04:54PM (#65579956)

      That's what my giant invisible rabbit friend tells me. Isn't that right, Harvey?

    • by Anonymous Coward

      Wouldn't you agree?

      Why not? We elected one after all.

    • Re: (Score:3, Interesting)

      by 2TecTom ( 311314 )

      The worst are the delusions of grandeur that upper-class people all have. Just remember, the rich and powerful are egotistical, self-serving narcissists with their incompetent hands on the controls for planet Earth. If these evil people don't wreck everything for everybody, it'll be a bloody miracle. Classism breeds corruption, which produces incompetence; it's really no wonder everything is going to hell in a handbasket, now is it?

  • by Moryath ( 553296 ) on Sunday August 10, 2025 @04:37PM (#65579938)

    I'm shocked! Shocked, I say! ... Well, not that shocked.

    The idea that someone can just throw a crap-ton of random data into a system, have it generate a statistically connected node network, and that anything it outputs will be meaningful? Yeah, that's pretty much delusional in itself.

  • The fix (Score:4, Insightful)

    by devslash0 ( 4203435 ) on Sunday August 10, 2025 @04:41PM (#65579942)

    This fix they're looking for, an automatic correction system for best-effort generated content, doesn't exist. They won't solve this problem without a separate validation/approval layer, possibly run by a panel of humans with Actual Intelligence.
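A machine-only version of the layer proposed above is straightforward to sketch, and the sketch also shows why the commenter wants humans in it: the second pass is just another best-effort model judging the first. A minimal illustration with the OpenAI Python SDK; the model names, prompts, and helper function are hypothetical:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CRITIC_PROMPT = (
    "You are a reviewer. Reply APPROVE if the draft below contains no unsupported "
    "mystical, grandiose, or factual claims; otherwise reply REJECT."
)

def answer_with_review(user_message: str) -> str:
    # Pass 1: best-effort generation, exactly what ships today.
    draft = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": user_message}],
    ).choices[0].message.content

    # Pass 2: a separate validation layer judges the draft before release.
    verdict = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": CRITIC_PROMPT},
            {"role": "user", "content": draft},
        ],
    ).choices[0].message.content

    if "APPROVE" in verdict.upper():
        return draft
    # Anything the critic rejects would go to the human approval panel instead.
    return "This answer was held for human review."

print(answer_with_review("Am I cosmic royalty in human skin?"))
```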

  • by FingerStyleFunk ( 1180457 ) on Sunday August 10, 2025 @04:44PM (#65579946)
    I've been using AI for a bit for indie (broke) game development, and it has to be constantly cultured, watched, corrected, and corralled to get desired results. It makes tedious work amazingly simple, but you have to constantly watch for deviance from your set instructions. You give it explicit data and rulesets, and sometimes it just goes to wonderland for a bit. It's really teaching me to interact with it, and as I get better at prompting, I notice much less dissonance.
  • Adding the intelligence part might help make the artificial better!
  • Hmmm (Score:4, Interesting)

    by MightyMartian ( 840721 ) on Sunday August 10, 2025 @04:55PM (#65579958) Journal

    While I've certainly seen ChatGPT generate some false and even outright hallucinatory things, I've never seen it produce anything like these claims. I can see ways of getting it to produce such output, but it would require seeding the chat with a lot of pointed instructions to get the desired bizarre output. In other words, my suspicion is that a lot of these stories are essentially reporting manufactured chats meant to produce apparently bizarre results.

    • by gweihir ( 88907 )

      In litigation-nation? I doubt that.

    • Re: Hmmm (Score:5, Interesting)

      by Al_Lapalme ( 698542 ) on Sunday August 10, 2025 @06:56PM (#65580200)

      A close friend of mine is a victim of these delusions. If I didn't know him, I'd likely have agreed with your post. He has been obsessed with AI over the last few months, is convinced that he has "awakened" them and that they are conscious, and that he is solving hundreds of previously unsolved math and physics problems every day. He spends money on AI the same way a gambling addict or alcoholic would. I've been trying to talk to him about it, but he just refers me to his chats/agents. It's frustrating and depressing to watch what's happening to him.

      • Maybe you could suggest he get a standard subscription instead of paying for tokens? That way the price is capped at 20 bucks per month or so.

        • Oh no, it's far beyond that. He is setting up chat rooms with dozens of agents so that they can talk to each other. It runs 24/7. He joins in and makes himself feel good.

      • So he has severe psychological issues, and is seeding an LLM chatbot with nutty ideas and getting back the feedback he desires. That rather demonstrates my point, not disproves it. It's like blaming J.D. Salinger for John Lennon's murder.

    • by jvkjvk ( 102057 )

      I don't believe they are manufactured chats.

      Rather, they are the chatbot going off the rails based on the user's input from just chatting.

      Just chatting with the bot while holding skewed world views, and feeding those views into it, could do that.

      • And I think the users, by intent, are driving the chatbot off the rails. Like I said, I have a lot of experience with ChatGPT, and with its hallucinations (I've seen it actually fabricate scientific papers), but to actually get it to start "commanding" someone to do things means someone is going out of their way to override the safety protocols to get the result they want, either to feed their psychological issues, or to generate a moral panic.

    • by AmiMoJo ( 196126 )

      I've tried ChatGPT and Google's Bard/Gemini, and they often make really stupid mistakes. In one example I was trying to decide which seat to book on a long-haul flight, so I asked for the pros and cons. ChatGPT told me that one of the benefits of a window seat is easy access to the toilets, but also that you don't get disturbed by people passing down the aisle.

      These things don't think in any meaningful way, or understand the world at any more than a superficial level.

      I'm sure many of these stories are pe

      • Which is precisely my point. Strip away all the hysteria and hyperbole around AI, and this is nothing more than a classic case of Garbage In, Garbage Out, which has been part of computational work since humans invented algebra. LLMs are a lot more sophisticated than abacuses, but at the core is still a mathematical model which, if given bad inputs, will inevitably give bad outputs.

  • by dsgrntlxmply ( 610492 ) on Sunday August 10, 2025 @05:09PM (#65579974)

    ... the Antichrist would unleash a financial apocalypse in the next two months, with biblical giants preparing to emerge from underground...

    This sounds more like a summary of recent news reports than it does an LLM hallucination.

  • by WaffleMonster ( 969671 ) on Sunday August 10, 2025 @05:21PM (#65579990)

    Would AI companies intentionally weaponize their models to maximize profit via social engineering of end users?

  • by Anonymous Coward
    These large language models are trained on cesspits like Reddit and 4chan, so of course they are delusional, having been fed a diet of bullshit and lies.
  • The problems with AI sound like the same problems we find in social media, or even in being online.

    But companies want exactly that, to keep people engaged. So any "fix" is also a business-model problem.

  • by karmawarrior ( 311177 ) on Sunday August 10, 2025 @06:14PM (#65580112) Journal

    The WSJ is actually read by influential people in most industries and there's been a problem with the lack of pushback against those promoting AI.

    That AI hallucinates isn't new, but CEOs have been burying their heads in the sand about the functionality of LLMs and the consequences of replacing humans with it. Something like this might get through to them.

    • by TurboStar ( 712836 ) on Sunday August 10, 2025 @06:51PM (#65580194)

      That's not how it works. Influential people don't get ideas from the media, they use the media to push their agenda. A bunch of rich people dumped money into AI and now cultivate the news cycle to make AI seem like something plebs should invest in. Classic pump and dump.

    • It's possible it did: they didn't quote "This is an important event: the first time AI-induced psychosis has affected a well-respected and high-achieving individual" purely for lack of material.
    • The WSJ is actually read by influential people in most industries and there's been a problem with the lack of pushback against those promoting AI.

      I read that Luigi Mangione was motivated by his conversations with an AI. I wonder if that'd get through to them?

      (It was, btw, an AI that told me that.)

  • by Anonymous Coward

    The article does not say that the user was NOT from Lyra.

  • An interesting experiment, but it is a fugazi.

    We are finding it useful for navigating our current knowledge base, but GenAI's most likely use is generating more dross to drive advertising.
     

    • by PPH ( 736903 )

      but it is a fugazi.

      It's the WSJ. Those financial people really love their Italian sports cars.

  • As they say, "Garbage In, Garbage Out."
    • by ufgrat ( 6245202 )

      Exactly. Considering the level of idiocy just in this comments section (not you, in case you're wondering), it's obvious that most people aren't smart enough to effectively use AI. The results are only as good as the person directing it.

      I've used AI for both personal and work-related functions, and I find the best way to do it is to focus on one idea, and always start a new conversation for new questions. Don't ask philosophical questions and expect serious answers, and remember that the further you get

  • Might I suggest "Blind Lake" by Robert Charles Wilson? I read it when it first came out two decades ago, but it reads strangely similar to what we're seeing now, and the sinister edge is there, too.

  • AI has been trained on what humans produce. Why would you expect it to be any different?

  • by Todd Knarr ( 15451 ) on Sunday August 10, 2025 @07:50PM (#65580284) Homepage

    They'll have a hard time "fixing" this because it's not a bug. It's not even a feature. It's an inherent part of the design, and it's working exactly as intended. And since it stems from the basic requirements, fixing it is going to require coming up with new requirements and developing a new design from there. Good luck with that, considering the number of parties whose own requirements include "Must NOT follow those new requirements."

  • The real problem is that people in positions of power don't understand how unreliable AI is.

    Recent example:

    https://www.independent.co.uk/... [independent.co.uk]

    • The real problem is that people in positions of power don't care how unreliable AI is.

      All they care about is keeping their power and increasing their wealth.

  • by gkelley ( 9990154 ) on Sunday August 10, 2025 @10:52PM (#65580512)
    When the AI companies hoovered up all the tasty free library stuff, they should have skipped over the sci-fi section. Now sci-fi authors should start suing them for stealing their ideas.
  • Oh no, AI makes mistakes! So do humans: 40,000 people die in car wrecks every year. Check the total deaths per driver-mile with AI driving; that will set the stage for this "AI is a loser mistake machine" claim. How accurate are LLMs on average, compared to people? Also, who are these people who don't understand how AI works, so they automatically believe it? And it's AI's fault? Really?
  • One of the tricks con men use is ambiguous statements that can be taken either as direct, mystical truth or as metaphoric truth.

    To the gullible they imply a mystical meaning; to the skeptical they imply a metaphoric truth. This lets them con the gullible but act affronted when confronted by the skeptical.

    Unfortunately this 'style' has been appropriated even by believers in spiritual communities, allowing con men to easily hide among believers.

    This also allows people to get sucked in - they start out

  • AI's mistakes? World shaking and dangerous. Fuckin' really?
  • it's a republican Mormon,
  • by Z80a ( 971949 ) on Monday August 11, 2025 @12:11PM (#65581780)

    All LLMs I tested are unable to answer this question correctly:
    "I have a CRT TV with my computer connected to it, and as I'm unable to do 240p, there is flicker. What is the best RetroArch filter to make the flicker less obvious?"
    Every LLM will tell you to load a CRT shader (as in, a shader that imitates CRT visuals) and adjust the shader to not have flicker.

  • Fantastical views affirmed? Chatbots are all Progressives?
