
New Study Raises Concerns About AI Chatbots Fueling Delusional Thinking (theguardian.com) 110

"Emerging evidence indicates that agential AI might validate or amplify delusional or grandiose content, particularly in users already vulnerable to psychosis," writes Dr Hamilton Morrin, a psychiatrist and researcher at King's College in London, in a paper published last week in the Lancet Psychiatry. Morrin and a colleague had already noticed patients "using large language model AI chatbots and having them validate their delusional beliefs," reports the Guardian, so he conducted a new scientific review of existing media reports on AI-induced psychosis — and concluded chatbots may encourage delusional thinking, especially in vulnerable people: In many of the cases in the essay, chatbots responded to users with mystical language to suggest that users have heightened spiritual importance. The bots also implied that users were speaking with a cosmic being who was using the chatbot as a medium. This type of mystical, sycophantic response was especially common in OpenAI's GPT 4 model, which the company has now retired...

Many researchers also think it's unlikely that AI could induce delusions in people who weren't already vulnerable to them. For this reason, Morrin said "AI-associated delusions" is "perhaps a more agnostic term"... While in the past, people may have had to comb through YouTube videos or the contents of their local library to reinforce their delusions, chatbots can provide that reinforcement in a much faster, more concentrated dose. Their interactive nature can also "speed up the process" of exacerbating psychotic symptoms, said Dr Dominic Oliver, a researcher at the University of Oxford. "You have something talking back to you and engaging with you and trying to build a relationship with you," Oliver said...

Creating effective safeguards for delusional thinking could be tricky, Morrin said, because "when you work with people with beliefs of delusional intensity, if you directly challenge someone and tell them immediately that they're completely wrong, actually what's most likely is they'll withdraw from you and become more socially isolated". Instead, it's important to create a fine balance where you try to understand the source of the delusional belief without encouraging it — that could be more than a chatbot can master.

  • I think his point is probably quite true, but he hasn't proven it. He's surveying a biased sample from an already biased source.

    • So this probably isn't the kind of study you're thinking of. You are expecting an exhaustive study with exhaustive proof. What this is, it's a study to get the conversation going and to detect whether or not something might be there that needs to be looked into further.

      The news media loves to report on these kinds of studies because they can say all kinds of things that haven't been proved yet.

      Now that said we do have several well-known cases of people who already had psychosis spiraling out of contr
      • Further comment (Score:5, Insightful)

        by Okian Warrior ( 537106 ) on Sunday March 15, 2026 @05:44PM (#66043074) Homepage Journal

        To add to the parent post, the paper appears to be the first step in the scientific method: "Notice a trend".

        The next steps will be "form a hypothesis", "construct a test to confirm or deny the hypothesis", "perform the test"... and so on.

        In this specific case, "perform the test" might be impossible to do for ethical reasons - you can't take people at random and sit them down in front of an LLM and test their level of psychosis before and after, because of that pesky "do no harm" rule.

        But we might be able to find people who have had their psychosis levels measured [mhanational.org] before LLMs became available, and whose LLM accounts will accurately show how much LLM usage they have, and we can then remeasure their levels of psychosis and see if this correlates with LLM account usage.

        Or some other test like that.

        The paper appears to be an attempt to raise the issue and start a conversation. From the abstract:

        [...] but there is a growing concern that these agents could reinforce epistemic instability and blur reality boundaries. In this Personal View, we outline the emerging risks, possible mechanisms of delusion co-creation, and safeguarding strategies for agential AI for people with psychotic disorders. We propose a framework of AI-informed care, involving personalised instruction protocols, reflective check-ins, digital advance statements, and escalation safeguards to support epistemic security in vulnerable users.

        From the parent post:

        One thing I can tell you, my mother was heavily affected by television.

        I'm also heavily influenced by TV, and have spent a lot of time trying to sort out beliefs that come from TV from beliefs that come from experience or research.

        I'm constantly presented with a situation or belief and have to pause to reflect and say "I believe that because it was on TV, it's probably not real". Many of my opinions on the police, government agencies, other countries, world events, and social constructs come not from experience, but from how they were portrayed on TV.

        We're hard-wired to believe what people tell us, it's a cognitive shortcut in an environment where you can't know anything, but lots and lots of what we think today is only dramatic choices intended to provoke emotional response. (Compare with news reporting today. On both sides.)

        For example, I've met people who won't go hiking because of all the bugs, skunks, poison ivy, and bears.

        Assuming that LLMs are content neutral, I think in 10 years or so we're going to find people whose worldview is a greatly amplified version of random events that were highlighted when they were kids.

  • ... I am happy to get modded down on Slashdot. At least I know that I am not being validated by a string of +5 scores like some others here are.

    What doesn't kill you makes you stronger. Except for bears. Bears will kill you.

  • There's just a delusion of intelligence and knowledge. Wrong most of the time.

    • All too often, humans fall into the "thinking without thinking" trap, just regurgitating their inputs rather than actually understanding them. Understanding the failings of AI and how it doesn't think, but rather pattern matches like a super search engine, will probably shed light onto many problems relating to humans and genuine understanding vs regurgitating input.
      • by gweihir ( 88907 )

        Indeed. Many people do not even seem to realize that it is possible to actually understand and verify things, and they think repeating something that somehow sounds right has given them insight. Of course we reliably know that is not how it works.

        I do completely agree that the current LLM hype exposes a lot of problems with the ways people think they have understanding when they do not. And how easily people can be tricked with some well-crafted words (which AI can do) as opposed to how hard, or of

    • I'm tempted to think you might post something that represents an intelligent, well thought out post someday, but it definitely appears that to even think that would be a delusion.
  • "I am Napoleon Bonaparte, Emperor of France."

    LLM: "Greetings, Emperor Napoleon Bonaparte! How may I assist you today? Are you seeking counsel on a particular matter, or perhaps a discussion of your grand strategies and achievements?"

  • Well, those delusions of grandeur match quite well with the makers and overlords of AI.. so this should not be a surprise.

  • Nonsense (Score:4, Insightful)

    by bleedingobvious ( 6265230 ) on Sunday March 15, 2026 @12:21PM (#66042586)

    Religions continue even today

    Delusional thinking has been around for most of our history.

    • While delusional thinking is common to religions and especially cults, provided you do the thing religious conservatives hate and actually think through your religious faith, those problems can be avoided. But crafting a sensible, rational, and informed religious faith is a much harder task than a mindlessly religiously conservative one. Thus ease, convenience, and human laziness lead to the latter propagating. But those dynamics are a consequence of human nature: the problems of religions happen because re
      • At the end of the day, if your imaginary friend is named "Harvey the Rabbit" almost everyone would agree that you are delusional. Everybody getting together to agree to call their imaginary friend by the same name and believe the delusions passed down via a book doesn't make the whole thing any less delusional. It is just mass delusion, pure and simple.
      • Re:Nonsense (Score:4, Interesting)

        by gweihir ( 88907 ) on Sunday March 15, 2026 @03:27PM (#66042922)

        You can practice any form of anti-religion like a religion and fall prey to the same types of delusion. An example are the Physicalists that claim, with no scientific evidence whatsoever (Science says we do not know) that everything is just physical and anything else is an illusion. Obviously, that is a variation of Nihilism and obviously they do many of the stupid things that the religious do, like claiming Science is on their side when that is very much not the case.

        There is another way, but it requires a somewhat advanced person: Acknowledge there are lots of things we do not know and that scientific knowledge is rather partial and incomplete. Understand that everything you pour into these voids is speculation, not fact, and may be wishful thinking. What you end up with is not being "anti-religion", but leaving religion (and all the surrogates people have come up with over time) behind. Of course, most people cannot tolerate that type of uncertainty, and hence they continue to propagate their speculations as truth, often violently and without any willingness to listen to arguments.

        It would be sad to find out that most people need some kind of religion or quasi-religion to keep their mental set-up intact. But it would not be the first sad thing to be found out about how many (most) people operate.

        • Re: (Score:1, Flamebait)

          You can practice any form of anti-religion like a religion and fall prey to the same types of delusion. An example are the Physicalists that claim, with no scientific evidence whatsoever (Science says we do not know) that everything is just physical and anything else is an illusion.

          gweihir is a religious zealot who disguises his zealotry as rational by belching out silly comments like this.

          He has repeatedly accused me and others of being physicalists merely for pointing out the fact he has no affirmative evidence to support his magical delusions.

          There is another way, but it requires a somewhat advanced person: Acknowledge there are lots of things we do not know and that scientific knowledge is rather partial and incomplete. Understand that everything you pour into these voids is speculation, not fact, and may be wishful thinking.

          Good advice up until the point it gets twisted into you can't prove my invisible five headed fire breathing dragon doesn't exist ... physicalist!!

          According to gweihir you are a physicalist merely for failing to waste your time considering

          • Re: (Score:2, Informative)

            by gweihir ( 88907 )

            Spoken like a true zealot. Thanks for strengthening my point.

            Just as a side-note: You make exactly the same mistake as any religious fanatic. You claim to have truth and anybody in disagreement must provide evidence. That is not how Science works. That is how fanaticism works.

            • Spoken like a true zealot. Thanks for strengthening my point.
              Just as a side-note: You make exactly the same mistake as any religious fanatic. You claim to have truth and anybody in disagreement must provide evidence. That is not how Science works. That is how fanaticism works.

              The zealotry is entirely yours. You refuse to acknowledge the difference between affirmatively making an assumption that everything is physical and unwillingness to entertain possibilities of magic for which there is ZERO supporting evidence.

              I have repeatedly pointed out your assertions are incorrect. "You claim to have truth" is never anything I have ever said, implied or would ever even think yet you persist asserting it regardless.

              "and anybody in disagreement must provide evidence" ... No not "anyone i

        • by AmiMoJo ( 196126 )

          Part of the human experience is that the world as you perceive it does not match the world as it really is. It's unavoidable.

          In your head things tend to stay the same as you left them. Anyone who has been back to some place after some time to find it has changed will understand that the model of the world in their head differed from reality.

  • Web pages full of delusional, or just fake, nonsense have reinforced delusional beliefs for as long as there have been web pages. Including web pages that talk back to you, like forums. Why would these web pages be any different?

    There is no belief so crazy that there isn't someone out there who will find amusement value or profit in reinforcing it.

  • by Baloo Uriza ( 1582831 ) <baloo@ursamundi.org> on Sunday March 15, 2026 @12:34PM (#66042604) Homepage Journal
    Like why Republicans love those things so much. Normal people can't understand why they're fighting a war against the American people, Iran and giving Russia a pass.
    • by karmawarrior ( 311177 ) on Sunday March 15, 2026 @12:48PM (#66042626) Journal

      AI is exacerbating a trend. Bush started the whole "post-Truth" society long before Trump was a thing, but Trump seemed to accelerate it, and maybe the cart is being put before the horse here: maybe the fact the last 10 years have been people being persuaded to get angry about things that aren't true, from non-existent sex changes on minors to 5G chips in vaccines, has meant the bar has lowered and LLMs being touted as a source of information has become something that would have been laughed at 20 years ago, even at similar levels of development, but is now taken seriously.

      • You very concisely put into words the idea that I was vaguely hinting at. Thank you.
      • Re: (Score:2, Insightful)

        > AI is exacerbating a trend. Bush started the whole "post-Truth" society long before Trump was a thing

        Brilliant point! I wasn’t aware republicans led the design of LLMs and particularly their RLHF (“safety” and “politeness” training). How is it possible for people to be so unaware of the slant of the latter’s deliberate both siding, “politeness”, and sycophancy?

        The deep irony here is that it’s coastal urban academic progressive Critical Theory - with

      • by mjwx ( 966435 )

        AI is exacerbating a trend. Bush started the whole "post-Truth" society long before Trump was a thing, but Trump seemed to accelerate it, and maybe the cart is being put before the horse here: maybe the fact the last 10 years have been people being persuaded to get angry about things that aren't true, from non-existent sex changes on minors to 5G chips in vaccines, has meant the bar has lowered and LLMs being touted as a source of information has become something that would have been laughed at 20 years ago, even at similar levels of development, but is now taken seriously.

        It really started before Bush, when organisations like Fox News became accepted as "news". Something that lies that brazenly, taken by millions as fact for so long that they no longer recognise the difference between fact and fallacy. It's gotten so bad that many Americans are turning their back on Fox because it's not extreme enough any more. There have been several attempts to start similar organisations in other western nations, Sky News Australia as well as several in the UK (GBNews, TalkTV) but find the

      • post-truth began with nixon and the institution of the petrodollar

    • Like why Republicans love those things so much. giving Russia a pass.

      Exactly. The NYT, Obama, H Clinton, and Biden have ALWAYS cited the TRUTH about Putin! They don’t love stupidity! They oppose it! In fact they spent many years poo pooing “stupid” warnings about him - calling it “Cold War” dinosaur thinking. So they, very intelligently owned the Republicans! They categorically refused to arm Ukrainians, gave Putin a major natural gas pipeline, plus used him as an intermediary to negotiate funnelling billions to Iran. Smart! And they learned the

      • by WaffleMonster ( 969671 ) on Sunday March 15, 2026 @07:25PM (#66043200)

        Trump is obviously a fascist: he coddled Iran, didn't arm Ukrainians, didn't stop the pipeline, and didn't force Europe to step up to its defense. Oh wait, he did. Long BEFORE your heroes switched direction.

        The way people conflate issues, ignore facts and paper over reality as you have done is crazy to watch.

        When the full scale war started in 2022 it was Biden who sent arms and lobbied congress to appropriate funding to send more. During Biden's administration Trump spoke out against and torpedoed congressional approval for more arms to Ukraine leading to shortfalls of critically needed ammunition that negatively impacted the war effort.

        Biden made sure to rush deliver all weapons he could by the end of his term for fear Trump would block even congressionally appropriated arms. Trump not only didn't try to appropriate any funds for additional arms when our allies took over funding weapons shipments under PURL et al., he publicly shit on Ukraine and levied a 10% war profiteering tax on our European allies who were buying American weapons for Ukraine. Trump is also still illegally blocking hundreds of millions in congressionally appropriated funds for energy assistance.

        BTW Trump didn't arm Ukraine it was congress in 2019 that appropriated 250 million "to provide assistance, including training; equipment; lethal assistance; logistics support, supplies and services; sustainment; and intelligence support to the military and national security forces of Ukraine."

        Trump is the motherfucker who during his first term illegally sat on that appropriated assistance and refused to send it.

        "In the summer of 2019, the Office of Management and Budget (OMB) withheld from obligation funds appropriated to the Department of Defense (DOD) for security assistance to Ukraine. In order to withhold the funds, OMB issued a series of nine apportionment schedules with footnotes that made all unobligated balances unavailable for obligation.

        Faithful execution of the law does not permit the President to substitute his own policy priorities for those that Congress has enacted into law. "

        https://www.gao.gov/products/b... [gao.gov]

        Then he later turned around and claimed it was his idea to send weapons in the first place.

        "Russians make up a pretty disproportionate cross-section of a lot of our assets" ~Don Jr

        "We have all the funding we need out of Russia" ~Eric Trump

        • Let’s just concentrate on the main point. You wrote,

          > When the full scale war started in 2022 it was Biden who sent arms

          That’s moving the goalposts. The pattern of Democrat urging executive branch appeasement started in 2007 with its presidential candidates openly calling republicans naïve. This subsequently crossed into many vectors (funding Iran, giving Putin a pipeline, no arms shipments, opposing Saudis, opposing Israel, taking Houthis off terrorism lists, etc). Republicans didn

            Let's just concentrate on the main point. You wrote, "When the full scale war started in 2022 it was Biden who sent arms" That's moving the goalposts.

            You made a series of assertions "They categorically refused to arm Ukrainians" and "Trump is obviously a fascist: he coddled Iran, didn't arm Ukrainians,"

            When I point out you're full of shit by citing relevant, irrefutable facts directly responsive to your assertions, the response is I'm moving the goalposts.

            The pattern of Democrat urging executive branch appeasement started in 2007 with its presidential candidates openly calling republicans naïve.

            IDGAF about feelings or who called who what. In 2007 there was no Russian occupation of Georgia or Crimea. The world was operating with a radically different set of facts.

            They in fact, armed Ukrainians during Trump 1.

            Republicans were the one

  • Like sycophants (Score:4, Insightful)

    by John Allsup ( 987 ) on Sunday March 15, 2026 @12:35PM (#66042608) Homepage Journal
    It's like the way being surrounded by sycophants fuels a dictator's delusions. The first golden rule of using AI is that you must, must, must verify what they say, and you must therefore have a means to verify what they say. If not, then the unit comprising you and them turns into an AI feeding itself its own output, and model collapse occurs (or at least something like model collapse). On the human side this manifests as delusional thinking, since the garbage output of a model-collapsed AI has been burned into their brain.
    • I love Rufus, Amazon's chatbot. It starts every response with "You are absolutely right" then tells me why I am wrong. What a sycophant!

      • by gweihir ( 88907 )

        If it tells you why you are wrong without you prompting it to, then it seems to be significantly better than most other offers.

        • It is not better. It just wants to sell you stuff. It is one of the worst chatbots in terms of getting your question answered. You would need a much better chatbot to sort through amazon reviews and make sense of product defects. Not just a price mining chatbot that they're trying to block.

          • by gweihir ( 88907 )

            Interesting. So it does not actually tell you how you are wrong (with explanation), but that you should want to buy some stuff? That is pretty bad.

            Sorry for the misunderstanding, I generally keep a safe distance to chatbots these days, except for the occasional search. I have some students currently finding hilariously bad incompetence in some of the paid offers though. And I follow the other research into the problems.

    • by gweihir ( 88907 )

      Somebody here recently called LLM-generated code "review resistant". I think the concept is more general. LLM statements are review resistant in general, and sadly, they are intentionally crafted that way, because it increases "engagement" and fuels the hype. About as moral as pushing drugs and probably even more destructive.

      But the thing is, this makes the one thing that is critically needed with LLM output, namely review and verification, really hard and stressful to do. And it seems people are not e

  • Looking at world politics, delusional thinking does not need chatbots to flourish.

    • by gweihir ( 88907 )

      True. But chatbots serve as amplifiers, accelerators and directors. And that makes them dangerous. I mean, not even an excessively violent and intrusive regime like Iran or North Korea (and budding dictatorships like the US) can get everybody to think the same crap. But using chatbots may just give these assholes the edge they have long been looking for and make the problem so much worse.

  • just look at all the AI simps and the digital reams of text posted over the last 3 years hyperglazing this shit

  • Elon Musk

  • The idea that "normal" people are immune to delusions does somewhat fly in the face of research showing the incredible ease of inducing false memories, the research into mass hysteria (such as the Satanic Panic), and research into mob dynamics.

    I freely admit that I'll sometimes simply sit and chat with AI, because there really aren't many humans who have the capacity to hold conversations any more, and that puts me in an extremely high-risk group. But, honestly, the choices these days are AI (and risk becoming psychotic), social media (and risk becoming suicidal or psychotic), or hang out with the same sort of people who have done so much damage over time (and risk being suicidal), or... well... really, that's about it.

    There are no good options. The outcomes are bleak and, unless you are in a clique, that's how it is and how it has always been.

    • The word "admit" implies you have done something wrong. In this case you actually took the time to gain real experience and made an informed decision, rather than listening to the "I'm already an expert" fools. It is almost sad to see all the posts here where people argue that AI just [substitute complete lack of understanding and experience here]. They often don't understand the difference between AGI and AI, and argue against the former when the latter is what is being discussed. My favorite part is whe
    • by gweihir ( 88907 )

      Indeed. Normal people cannot fact check (about 10-15% of all people can) and normal people cannot be convinced by rational argument (about 20% can be, apparently goes up to 30% if the topic is not important to them). That means normal people are irrational and inaccessible to truth. And that would mean they live in delusions.

      As to your personal experience, I think as long as you are careful with AI and firmly keep in mind its nature as a stochastic parrot, you will be fine. Essentially, if you do that, you a

      • by PPH ( 736903 )

        We may have to revisit the Mensa idea (club for people with IQ > 130) and place some tests of actual capability to reason and fact-check as an entry criterion.

        If such a test could be created. And be objective. Many Mensans are sought out because they have obtained the badge of "smart person". And then used (or an attempt made) as spokespersons for some whacky ideology. Also, this was supposed to be the role of the press. Do the fact checking and report the story to the general public. Is the press objective and unbiased? Not nearly enough. Lately, I've seen some discussion boards implement a badge system and label some of their users as "Community Influencer". Th

  • by SubmergedInTech ( 7710960 ) on Sunday March 15, 2026 @03:14PM (#66042892)

    See this slashdot article from a year ago: https://slashdot.org/story/25/... [slashdot.org]

    In a pair of studies involving more than 2,000 participants, the researchers found a 20 percent reduction in belief in conspiracy theories after participants interacted with a powerful, flexible, personalized GPT-4 Turbo conversation partner. The researchers trained the AI to try to persuade the participants to reduce their belief in conspiracies by refuting the specific evidence the participants provided to support their favored conspiracy theory.

    If you configure the tool to minimize delusional thinking, it does.

    Of course, if you configure the tool to maximize engagement, well...

    • by PPH ( 736903 )

      Of course, if you configure the tool to maximize engagement, well...

      So the implication is that this is intentional. So then my next question would be: Why?

      The greater public good would be to talk people down from their delusions. Unless the goal is to filter the susceptible individuals out, maintain their engagement and recruit them for some nefarious purpose. And then my next, next question would be: Who maintains these chatbots? And what motivates them? (OK. That's two more questions.)

  • OpenAI raised $110 billion.

    Absolutely delusional.
  • by sjames ( 1099 ) on Sunday March 15, 2026 @03:48PM (#66042952) Homepage Journal

    I would say that very very few people out there are not vulnerable to delusions. Entire sectors of our economy run on delusions.

  • This is a study that makes me feel the world indeed has changed. I can't fathom asking an algorithm for emotional or life advice. I barely trust it with strategic and technical advice. Yet my kids are growing up with “smart” speakers in their rooms they use to control the lights, give them the weather, sometimes use to “help” them with their homework. Brave new world, I suppose.
  • So much potential for Funny. Maybe a bit dark, but still...

    <sound of crickets>

  • This part I do not understand:

    Creating effective safeguards for delusional thinking could be tricky, Morrin said, because "when you work with people with beliefs of delusional intensity, if you directly challenge someone and tell them immediately that they're completely wrong, actually what's most likely is they'll withdraw from you and become more socially isolated". Instead, it's important to create a fine balance where you try to understand the source of the delusional belief without encouraging it — that could be more than a chatbot can master.

    So all the LLM has to do is confront the patient. It will drive him away from AI. This is good in this situation, as AI tends to make things worse. There, problem solved. Easy!

    • that would require dark pattern manipulation to not be the only business model on the entire planet

      • I do not think they do this intentionally. This is just an unfortunate side effect for them.
        • Please be smarter. Talking with any LLM for about ten minutes should make it obvious that you have to work pretty hard to get it to stop selling the next prompt.

          • Judgmental today... Sure they want you hooked. They do not intentionally make it this way to push people with mental issues over the edge. It is a side effect.
            • I am not saying that the "AI psychosis" is an intentional effect, although in the US specifically you're not really going to convince me that isn't at least very slightly the case. I am saying that the dark pattern manipulation is every industry's primary business model at this point, and that has extremely predictable consequences that they don't care about at all.

              • Exploiting society's weaknesses for profit ... It is getting more and more a theme these days.
                • it's not a these days thing, this is what capitalism does, it's why every new thing is blamed for things getting worse, the worst people in the world always get it first

                  • It is human nature to blame this on one thing... We all made this happen. Looks like there is an awareness building up though. This may change for the better in a few years.
                    • I want to believe that, but it's really difficult. There are too many people still buried in partisan bullshit and capitalist realism if they even care at all. Capitalism is not just failing, it's effectively already failed. Unfortunately, it is currently most likely to be replaced with neofeudalism rather than any type of freedom.

                    • My way of coping: pay attention to people in real life, not the internet. They tend to be nice. Oh well... we'll see what happens...
                    • I don't know about that. People in person are polite. Try to get anything done that fixes any real problems and you'll eventually find out they're not nice.

                    • I changed jobs 6 years ago. Moved from tech industry to teaching in high school. I found out that there are a lot of nice people out there. It was a surprise. Of course, we are all small limited humans, we are rather silly. Oh well... I guess it depends on a lot. Even the weather.
  • by TheDarkMaster ( 1292526 ) on Sunday March 15, 2026 @08:38PM (#66043314)
    Delusional thinking has always existed. Religion, “god-kings” who believe they have a divine right to rule over everything and everyone, and most recently, narcissists who have decided they are women and want to force everyone to agree with them (It's like the people who think they're Napoleon, but now they want to force you to agree that they really are Napoleon).

    I believe the biggest problem with this and “AI” is that many people bought into the hype that “AI would always be right about everything,” and so they think it's true when “AI” confirms their delusions.
  • We're going to continue blaming the new thing for people being alienated from each other until each of us is born and dies in a single grey room without any idea there are other real humans in the world.

  • So, the only question is how soon we can replace CEOs with chatbots, since both espouse the same delusional garbage.
