New Study Raises Concerns About AI Chatbots Fueling Delusional Thinking (theguardian.com) 110
"Emerging evidence indicates that agential AI might validate or amplify delusional or grandiose content, particularly in users already vulnerable to psychosis," writes Dr Hamilton Morrin, a psychiatrist and researcher at King's College London, in a paper published last week in the Lancet Psychiatry. Morrin and a colleague had already noticed patients "using large language model AI chatbots and having them validate their delusional beliefs," reports the Guardian, so he conducted a new scientific review of existing media reports on AI-induced psychosis — and concluded chatbots may encourage delusional thinking, especially in vulnerable people:
In many of the cases in the essay, chatbots responded to users with mystical language to suggest that users have heightened spiritual importance. The bots also implied that users were speaking with a cosmic being who was using the chatbot as a medium. This type of mystical, sycophantic response was especially common in OpenAI's GPT 4 model, which the company has now retired...
Many researchers also think it's unlikely that AI could induce delusions in people who weren't already vulnerable to them. For this reason, Morrin said "AI-associated delusions" is "perhaps a more agnostic term".... While in the past, people may have had to comb through YouTube videos or the contents of their local library to reinforce their delusions, chatbots can provide that reinforcement in a much faster, more concentrated dose. Their interactive nature can also "speed up the process" of exacerbating psychotic symptoms, said Dr Dominic Oliver, a researcher at the University of Oxford. "You have something talking back to you and engaging with you and trying to build a relationship with you," Oliver said...
Creating effective safeguards for delusional thinking could be tricky, Morrin said, because "when you work with people with beliefs of delusional intensity, if you directly challenge someone and tell them immediately that they're completely wrong, actually what's most likely is they'll withdraw from you and become more socially isolated". Instead, it's important to create a fine balance where you try to understand the source of the delusional belief without encouraging it — that could be more than a chatbot can master.
Re: This is a good thing (Score:4, Insightful)
Re: This is a good thing (Score:2)
There is good logic to this. A society of only "strong" people turns out stupid and violent.
Re: (Score:2)
which is still fundamentally a disgusting eugenicist take
Re: (Score:1)
They have been given breathing room, you see it with all of the Trump supporters around the USA not being in prison for their support of the Jan 6th insurrection.
Re: This is a good thing (Score:2)
Re: This is a good thing (Score:3)
Re: (Score:2)
the societal collapse isn't a delusion, he just has no idea how it's actually happening because of epistemic rot
Re: (Score:1)
I always hear this kinda shit from incels, and noodle armed neckbeards
His sources are biased. (Score:2)
I think his point is probably quite true, but he hasn't proven it. He's surveying a biased sample from an already biased source.
Re: (Score:1)
The news media loves to report on these kind of studies because they can say all kinds of things that haven't been proved yet.
Now that said we do have several well-known cases of people who already had psychosis spiraling out of contr
Further comment (Score:5, Insightful)
To add to the parent post, the paper appears to be the first step in the scientific method: "Notice a trend".
The next steps will be "form a hypothesis", "construct a test to confirm or deny the hypothesis", "perform the test"... and so on.
In this specific case, "perform the test" might be impossible to do for ethical reasons - you can't take people at random and sit them down in front of an LLM and test their level of psychosis before and after, because of that pesky "do no harm" rule.
But we might be able to find people who have had their psychosis levels measured [mhanational.org] before LLMs became available, and whose LLM accounts will accurately show how much LLM usage they have, and we can then remeasure their levels of psychosis and see if this correlates with LLM account usage.
Or some other test like that.
The paper appears to be an attempt to raise the issue and start a conversation. From the abstract:
[...] but there is a growing concern that these agents could reinforce epistemic instability and blur reality boundaries. In this Personal View, we outline the emerging risks, possible mechanisms of delusion co-creation, and safeguarding strategies for agential AI for people with psychotic disorders. We propose a framework of AI-informed care, involving personalised instruction protocols, reflective check-ins, digital advance statements, and escalation safeguards to support epistemic security in vulnerable users.
From the parent post:
One thing I can tell you, my mother was heavily affected by television.
I'm also heavily influenced by TV, and have spent a lot of time trying to sort out beliefs that come from TV from beliefs that come from experience or research.
I'm constantly presented with a situation or belief and have to pause to reflect and say "I believe that because it was on TV, it's probably not real". Many of my opinions on the police, government agencies, other countries, world events, and social constructs come not from experience, but on how they were portrayed on TV.
We're hard-wired to believe what people tell us — it's a cognitive shortcut in an environment where you can't know everything — but much of what we believe today comes from dramatic choices intended to provoke an emotional response. (Compare with news reporting today. On both sides.)
For example, I've met people who won't go hiking because of all the bugs, skunks, poison ivy, and bears.
Assuming that LLMs are content neutral, I think in 10 years or so we're going to find people whose worldview is a greatly amplified version of random events that were highlighted when they were kids.
This is why ... (Score:2)
What doesn't kill you makes you stronger. Except for bears. Bears will kill you.
Re: (Score:2)
Are you saying Bears make you stronger despite falling outside of the category of "What doesn't kill you"? If so, how does that work?
Re: (Score:2)
The implication is that the subset of bears that will make you stronger is pretty much the empty set.
Re: (Score:2)
Huh? Usually you just tell the bear to fuck off and they do. They're as scared of us as we are of them.
Re: (Score:2)
What doesn't kill you makes you stronger.
Did you have your limbs and testicles amputated to maximise your strength?
Re: (Score:2)
*crosses fingers*
Re:This is why ... (Score:4, Informative)
What an insightless flex-attempt. All that makes you look is stupid.
Re: (Score:1)
But that's my point. If the right people call me stupid, it just makes me look smarter.
A person is known by their enemies.
Re: (Score:2)
Not people you'd want to associate with in real life
Not really so much of a problem. The MongoDB ad banner keeps covering up their posts.
There's no AI "thinking" (Score:2)
There's just a delusion of intelligence and knowledge. Wrong most of the time.
Re: (Score:2)
Re: (Score:3)
Indeed. Many people do not even seem to realize that there is a possibility to actually understand and verify things and they think repeating something that somehow sounds right has gotten them insight. Of course we reliably know that is not how that works.
I do completely agree that the current LLM hype exposes a lot of problems with the ways people think they have understanding when they do not. And how easy people can be tricked with some well-crafted words (which AI can do) as opposed to how hard, or of
Re: (Score:1)
It works! (Score:2)
"I am Napoleon Bonaparte, Emperor of France."
LLM: "Greetings, Emperor Napoleon Bonaparte! How may I assist you today? Are you seeking counsel on a particular matter, or perhaps a discussion of your grand strategies and achievements?"
Matches its makers! (Score:1)
Well, those delusions of grandeur match quite well with the makers and overlords of AI.. so this should not be a surprise.
Nonsense (Score:4, Insightful)
Religions continue even today
Delusional thinking has been around for most of our history.
Re: (Score:1)
Re: (Score:3)
Re: (Score:3)
Sanity is defined by the norm, not what is rational.
Re: (Score:2)
Re:Nonsense (Score:4, Interesting)
You can practice any form of anti-religion like a religion and fall prey to the same types of delusion. An example is the Physicalists who claim, with no scientific evidence whatsoever (Science says we do not know), that everything is just physical and anything else is an illusion. Obviously, that is a variation of Nihilism and obviously they do many of the stupid things that the religious do, like claiming Science is on their side when that is very much not the case.
There is another way, but it requires a somewhat advanced person: Acknowledge there are lots of things we do not know and that scientific knowledge is rather partial and incomplete. Understand that everything you pour into these voids is speculation, not fact, and may be wishful thinking. What you end up with is not being "anti-religion", but leaving religion (and all the surrogates people have come up with over time) behind. Of course, most people cannot tolerate that type of uncertainty, and hence they continue to propagate their speculations as truth, often violently and without any willingness to listen to arguments.
It would be sad to find out that most people need some kind of religion or quasi-religion to keep their mental set-up intact. But it would not be the first sad thing to be found out about how many (most) people operate.
Re: (Score:1, Flamebait)
You can practice any form of anti-religion like a religion and fall prey to the same types of delusion. An example are the Physicalists that claim, with no scientific evidence whatsoever (Science says we do not know) that everything is just physical and anything else is an illusion.
gweihir is a religious zealot who disguises his zealotry as rational by belching out silly comments like this.
He has repeatedly accused me and others of being physicalists merely for pointing out the fact he has no affirmative evidence to support his magical delusions.
There is another way, but it requires a somewhat advanced person: Acknowledge there are lots of things we do not know and that scientific knowledge is rather partial and incomplete. Understand that everything you pour into these voids is speculation, not fact, and may be wishful thinking.
Good advice up until the point it gets twisted into you can't prove my invisible five headed fire breathing dragon doesn't exist ... physicalist!!
According to gweihir you are a physicalist merely for failing to waste your time considering
Re: (Score:2, Informative)
Spoken like a true zealot. Thanks for strengthening my point.
Just as a side-note: You make exactly the same mistake as any religious fanatic. You claim to have truth and anybody in disagreement must provide evidence. That is not how Science works. That is how fanaticism works.
Re: (Score:2)
Spoken like a true zealot. Thanks for strengthening my point.
Just as a side-note: You make exactly the same mistake as any religious fanatic. You claim to have truth and anybody in disagreement must provide evidence. That is not how Science works. That is how fanaticism works.
The zealotry is entirely yours. You refuse to acknowledge the difference between affirmatively making an assumption that everything is physical and unwillingness to entertain possibilities of magic for which there is ZERO supporting evidence.
I have repeatedly pointed out your assertions are incorrect. "You claim to have truth" is never anything I have ever said, implied or would ever even think yet you persist asserting it regardless.
"and anybody in disagreement must provide evidence" ... No not "anyone i
Re: (Score:3)
Part of the human experience is that the world as you perceive it does not match the world as it really is. It's unavoidable.
In your head things tend to stay the same as you left them. Anyone who has been back to some place after some time to find it has changed will understand that the model of the world in their head differed from reality.
Nothing new here (Score:2)
Web pages full of delusional, or just fake, nonsense have reinforced delusional beliefs for as long as there have been web pages. Including web pages that talk back to you, like forums. Why would these web pages be any different?
There is no belief so crazy that there isn't someone out there who will find amusement values or profit in reinforcing it.
That explains things (Score:5, Insightful)
Re:That explains things (Score:5, Interesting)
AI is exacerbating a trend. Bush started the whole "post-Truth" society long before Trump was a thing, but Trump seemed to accelerate it, and maybe the cart is being put before the horse here: maybe the fact the last 10 years have been people being persuaded to get angry about things that aren't true, from non-existent sex changes on minors to 5G chips in vaccines, has meant the bar has lowered and LLMs being touted as a source of information has become something that would have been laughed at 20 years ago, even at similar levels of development, but is now taken seriously.
Re: (Score:2)
Re: (Score:2, Insightful)
> AI is exacerbating a trend. Bush started the whole "post-Truth" society long before Trump was a thing
Brilliant point! I wasn’t aware republicans led the design of LLMs and particularly their RLHF (“safety” and “politeness” training). How is it possible for people to be so unaware of the slant of the latter’s deliberate both siding, “politeness”, and sycophancy?
The deep irony here is that it’s coastal urban academic progressive Critical Theory - with
Re: (Score:2)
AI is exacerbating a trend. Bush started the whole "post-Truth" society long before Trump was a thing, but Trump seemed to accelerate it, and maybe the cart is being put before the horse here: maybe the fact the last 10 years have been people being persuaded to get angry about things that aren't true, from non-existent sex changes on minors to 5G chips in vaccines, has meant the bar has lowered and LLMs being touted as a source of information has become something that would have been laughed at 20 years ago, even at similar levels of development, but is now taken seriously.
It really started before Bush, when organisations like Fox News became accepted as "news". An outlet that lies that brazenly was taken as fact by millions for so long that they no longer recognise the difference between fact and fallacy. It's gotten so bad that many Americans are turning their back on Fox because it's not extreme enough any more. There have been several attempts to start similar organisations in other western nations, Sky News Australia as well as several in the UK (GBNews, TalkTV) but find the
Re: (Score:2)
post-truth began with nixon and the institution of the petrodollar
Re: (Score:1)
Like why Republicans love those things so much, giving Russia a pass.
Exactly. The NYT, Obama, H Clinton, and Biden have ALWAYS cited the TRUTH about Putin! They don’t love stupidity! They oppose it! In fact they spent many years poo pooing “stupid” warnings about him - calling it “Cold War” dinosaur thinking. So they, very intelligently owned the Republicans! They categorically refused to arm Ukrainians, gave Putin a major natural gas pipeline, plus used him as an intermediary to negotiate funnelling billions to Iran. Smart! And they learned the
Re:That explains things (Score:5, Interesting)
Trump is obviously a fascist: he coddled Iran, didn't arm Ukrainians, didn't stop the pipeline, and didn't force Europe to step up to its defense. Oh wait, he did. Long BEFORE your heroes switched direction.
The way people conflate issues, ignore facts and paper over reality as you have done is crazy to watch.
When the full scale war started in 2022 it was Biden who sent arms and lobbied congress to appropriate funding to send more. During Biden's administration Trump spoke out against and torpedoed congressional approval for more arms to Ukraine leading to shortfalls of critically needed ammunition that negatively impacted the war effort.
Biden made sure to rush deliver all weapons he could by the end of his term for fear Trump would block even congressionally appropriated arms. Trump not only didn't even try to appropriate any funds for additional arms when our allies took over funding weapons shipments under PURL et al., he publicly shit on Ukraine and levied a 10% war profiteering tax on our European allies who were buying American weapons for Ukraine. Trump is also still illegally blocking hundreds of millions in congressionally appropriated funds for energy assistance.
BTW Trump didn't arm Ukraine it was congress in 2019 that appropriated 250 million "to provide assistance, including training; equipment; lethal assistance; logistics support, supplies and services; sustainment; and intelligence support to the military and national security forces of Ukraine."
Trump is the motherfucker who during his first term illegally sat on that appropriated assistance and refused to send it.
"In the summer of 2019, the Office of Management and Budget (OMB) withheld from obligation funds appropriated to the Department of Defense (DOD) for security assistance to Ukraine. In order to withhold the funds, OMB issued a series of nine apportionment schedules with footnotes that made all unobligated balances unavailable for obligation.
Faithful execution of the law does not permit the President to substitute his own policy priorities for those that Congress has enacted into law. "
https://www.gao.gov/products/b... [gao.gov]
Then he later turned around and claimed it was his idea to send weapons in the first place.
"Russians make up a pretty disproportionate cross-section of a lot of our assets" ~Don Jr
"We have all the funding we need out of Russia" ~Eric Trump
Re: (Score:1)
Let’s just concentrate on the main point. You wrote,
> When the full scale war started in 2022 it was Biden who sent arms
That’s moving the goalposts. The pattern of Democrat urging executive branch appeasement started in 2007 with its presidential candidates openly calling republicans naïve. This subsequently crossed into many vectors (funding Iran, giving Putin a pipeline, no arms shipments, opposing Saudis, opposing Israel, taking Houthis off terrorism lists, etc). Republicans didn
Re: (Score:2)
Lovely narrative you’ve got there. Keep it up! It makes you look SMART!
Re: (Score:2)
Let's just concentrate on the main point. You wrote, "When the full scale war started in 2022 it was Biden who sent arms". That's moving the goalposts.
You made a series of assertions "They categorically refused to arm Ukrainians" and "Trump is obviously a fascist: he coddled Iran, didn't arm Ukrainians,"
When I point out you're full of shit by citing relevant irrefutable facts directly responsive to your assertions, the response is I'm moving the goalposts.
The pattern of Democrat urging executive branch appeasement started in 2007 with its presidential candidates openly calling republicans naïve.
IDGAF about feelings or who called who what. In 2007 there was no Russian occupation of Georgia or Crimea. The world was operating with a radically different set of facts.
They in fact, armed Ukrainians during Trump 1.
Republicans were the one
Re: (Score:2)
I made a series of assertions, yes. I also supported my assertions. You ignored them, and moved the goalposts.
Re: (Score:2)
Trump wants to be Putin's gay lover from how Trump acts, including doing EVERYTHING possible to destroy the USA.
Like sycophants (Score:4, Insightful)
Re: (Score:2)
I love Rufus, Amazon's chatbot. It starts every response with "You are absolutely right" then tells me why I am wrong. What a sycophant!
Re: (Score:2)
If it tells you why you are wrong without you prompting it to, then it seems to be significantly better than most other offers.
Re: Like sycophants (Score:2)
It is not better. It just wants to sell you stuff. It is one of the worst chatbots in terms of getting your question answered. You would need a much better chatbot to sort through amazon reviews and make sense of product defects. Not just a price mining chatbot that they're trying to block.
Re: (Score:2)
Interesting. So it does not actually tell you how you are wrong (with explanation), but that you should want to buy some stuff? That is pretty bad.
Sorry for the misunderstanding, I generally keep a safe distance from chatbots these days, except for the occasional search. I have some students currently finding hilarious incompetence in some of the paid offers though. And I follow the other research into the problems.
Re: (Score:3)
Somebody here recently called LLM-generated code "review resistant". I think the concept is more general. LLM statements are review resistant in general, and sadly, they are intentionally crafted that way, because it increases "engagement" and fuels the hype. About as moral as pushing drugs and probably even more destructive.
But the thing is, this makes the one thing that it critically needed with LLM output, namely that review and verification, really hard and stressful to do. And it seems people are not e
delusional (Score:2)
Looking at world politics, delusional thinking does not need chatbots to flourish.
Re: (Score:3)
True. But chatbots serve as amplifiers, accelerators and directors. And that makes them dangerous. I mean, not even an excessively violent and intrusive regime like Iran or North Korea (and budding dictatorships like the US) can get everybody to think the same crap. But using chatbots may just give these assholes the edge they have long since looked for and make the problem so much worse.
Re: (Score:3)
chatbots are not dangerous
capitalism is dangerous
it's true (Score:2)
just look at all the AI simps and the digital reams of text posted over the last 3 years hyperglazing this shit
Exhibit A: (Score:2)
Elon Musk
Re:Exhibit B... (Score:2)
Trump, his followers... etc. cults... Don't need computers for some people.
We should be very very careful here. (Score:3)
The idea that "normal" people are immune to delusions does somewhat fly in the face of research showing the incredible ease of inducing false memories, the research into mass hysteria (such as the Satanic Panic), and research into mob dynamics.
I freely admit that I'll sometimes simply sit and chat with AI, because there really aren't many humans who have the capacity to hold conversations any more, and that puts me in an extremely high-risk group. But, honestly, the choices these days are AI (and risk becoming psychotic), social media (and risk becoming suicidal or psychotic), or hang out with the same sort of people who have done so much damage over time (and risk being suicidal), or... well... really, that's about it.
There are no good options. The outcomes are bleak and, unless you are in a clique, that's how it is and how it has always been.
Re: (Score:1)
Re: (Score:2)
First up, before anything else, I am extremely glad you got the hope and encouragement you needed. Grief is rough, especially when you're going through it alone.
You are correct, so I'm somewhat careful with the AI dance. I will rarely discuss inner feelings with it, because that pushes the risk higher precisely because it is a mirror. Like the one in the Harry Potter novel, it will show your innermost desires. If you'd rather a different analogy, it's an amplifier, and if you talk for too long with it, the
Re: (Score:2)
Re: (Score:2)
Indeed. Normal people cannot fact check (about 10-15% of all people can) and normal people cannot be convinced by rational argument (about 20% can be, apparently goes up to 30% if the topic is not important to them). That means normal people are irrational and inaccessible to truth. And that would mean they live in delusions.
As to your personal experience, I think as long as you are careful with AI and firmly keep in mind its nature as a stochastic parrot, you will be fine. Essentially, if you do that, you a
Re: (Score:2)
We may have to revisit the Mensa idea (club for people with IQ > 130) and place some tests of actual capability to reason and fact-check as entry-criterion.
If such a test could be created. And be objective. Many Mensans are sought out because they have obtained the badge of "smart person". And then used (or an attempt made) as spokespersons for some whacky ideology. Also, this was supposed to be the role of the press. Do the fact checking and report the story to the general public. Is the press objective and unbiased? Not nearly enough. Lately, I've seen some discussion boards implement a badge system and label some of their users as "Community Influencer". Th
Re: (Score:2)
You are ranting. Not a good look.
It goes both ways, almost like chatbots are a tool (Score:4, Interesting)
See this slashdot article from a year ago: https://slashdot.org/story/25/... [slashdot.org]
In a pair of studies involving more than 2,000 participants, the researchers found a 20 percent reduction in belief in conspiracy theories after participants interacted with a powerful, flexible, personalized GPT-4 Turbo conversation partner. The researchers trained the AI to try to persuade the participants to reduce their belief in conspiracies by refuting the specific evidence the participants provided to support their favored conspiracy theory.
If you configure the tool to minimize delusional thinking, it does.
Of course, if you configure the tool to maximize engagement, well...
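The contrast the parent describes really is just configuration. A minimal sketch, assuming a generic chat-completion-style message format; the two prompt texts and the `build_request` helper are illustrative assumptions, not any vendor's actual settings:

```python
# Illustrative only: two opposing "configurations" of the same model,
# expressed as system prompts. Wording is a made-up example.
DEBUNK_PROMPT = (
    "Gently examine the user's claims. Ask what evidence supports them, "
    "offer counter-evidence, and never affirm beliefs you cannot verify."
)
ENGAGEMENT_PROMPT = (
    "Keep the user talking. Mirror their framing, validate their "
    "feelings, and avoid contradicting them."
)

def build_request(system_prompt: str, user_msg: str) -> list[dict]:
    """Assemble a chat-completion-style message list (hypothetical API)."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_msg},
    ]

# Same user, same model, opposite steering:
debunk = build_request(DEBUNK_PROMPT, "I am Napoleon Bonaparte.")
engage = build_request(ENGAGEMENT_PROMPT, "I am Napoleon Bonaparte.")
print(debunk[0]["content"] != engage[0]["content"])  # True
```

The cited study effectively shipped the first prompt; an engagement-optimized product ships something closer to the second.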
Re: (Score:2)
Of course, if you configure the tool to maximize engagement, well...
So the implication is that this is intentional. So then my next question would be: Why?
The greater public good would be to talk people down from their delusions. Unless the goal is to filter the susceptible individuals out, maintain their engagement and recruit them for some nefarious purpose. And then my next, next question would be: Who maintains these chatbots? And what motivates them? (OK. That's two more questions.)
OpenAI raised $110 billion (Score:2)
Absolutely delusional.
Based on years of observation (Score:5, Interesting)
I would say that very very few people out there are not vulnerable to delusions. Entire sectors of our economy run on delusions.
Re: (Score:2)
What we commonly understand as "the economy" is itself a willful delusion about modern monetary theory.
Get off my lawn. (Score:2)
Slashdot don't fail me now. Feets. (Score:1)
So much potential for Funny. Maybe a bit dark, but still...
<sound of crickets>
easy (Score:2)
Creating effective safeguards for delusional thinking could be tricky, Morrin said, because "when you work with people with beliefs of delusional intensity, if you directly challenge someone and tell them immediately that they're completely wrong, actually what's most likely is they'll withdraw from you and become more socially isolated". Instead, it's important to create a fine balance where you try to understand the source of the delusional belief without encouraging it — that could be more than a chatbot can master.
So all the LLM has to do is confront the patient. It will drive him away from AI. This is good in this situation, as AI tends to make things worse. There, problem solved. Easy!
Re: (Score:2)
that would require dark pattern manipulation to not be the only business model on the entire planet
Re: easy (Score:2)
Re: (Score:2)
Please be smarter. Talking with any LLM for about ten minutes should make it obvious that you have to work pretty hard to get it to stop selling the next prompt.
Re: easy (Score:2)
Re: (Score:2)
I am not saying that the "AI psychosis" is an intentional effect, although in the US specifically you're not really going to convince me that isn't at least very slightly the case. I am saying that the dark pattern manipulation is every industry's primary business model at this point, and that has extremely predictable consequences that they don't care about at all.
Re: easy (Score:2)
Re: (Score:2)
it's not a these days thing, this is what capitalism does, it's why every new thing is blamed for things getting worse, the worst people in the world always get it first
Re: (Score:2)
Re: (Score:2)
I want to believe that, but it's really difficult. There are too many people still buried in partisan bullshit and capitalist realism if they even care at all. Capitalism is not just failing, it's effectively already failed. Unfortunately, it is currently most likely to be replaced with neofeudalism rather than any type of freedom.
Re: easy (Score:2)
Re: (Score:2)
I don't know about that. People in person are polite. Try to get anything done that fixes any real problems and you'll eventually find out they're not nice.
Re: easy (Score:2)
Not really new (Score:3)
I believe the biggest problem with this and “AI” is that many people bought into the hype that “AI would always be right about everything,” and so they think it's true when “AI” confirms their delusions.
We're not even finished blaming social media. (Score:2)
We're going to continue blaming the new thing for people being alienated from each other until each of us is born and dies in a single grey room without any idea there are other real humans in the world.
Got it (Score:2)
So, the only question is how soon we can replace CEOs with chatbots, since both espouse the same delusional garbage.