Science

'Cognitive Surrender' Leads AI Users To Abandon Logical Thinking, Research Finds (arstechnica.com) 137

An anonymous reader quotes a report from Ars Technica: When it comes to large language model-powered tools, there are generally two broad categories of users. On one side are those who treat AI as a powerful but sometimes faulty service that needs careful human oversight and review to detect reasoning or factual flaws in responses. On the other side are those who routinely outsource their critical thinking to what they see as an all-knowing machine. Recent research goes a long way toward forming a new psychological framework for that second group, which regularly engages in "cognitive surrender" to AI's seemingly authoritative answers. That research also provides some experimental examination of when and why people are willing to outsource their critical thinking to AI, and how factors like time pressure and external incentives can affect that decision.

Overall, across 1,372 participants and over 9,500 individual trials, the researchers found subjects were willing to accept faulty AI reasoning a whopping 73.2 percent of the time, while only overruling it 19.7 percent of the time. The researchers say this "demonstrate[s] that people readily incorporate AI-generated outputs into their decision-making processes, often with minimal friction or skepticism." In general, "fluent, confident outputs [are treated] as epistemically authoritative, lowering the threshold for scrutiny and attenuating the meta-cognitive signals that would ordinarily route a response to deliberation," they write. These kinds of effects weren't uniform across all test subjects, though. Those who scored highly on separate measures of so-called fluid IQ were less likely to rely on the AI for help and were more likely to overrule a faulty AI when it was consulted. Those predisposed to see AI as authoritative in a survey, on the other hand, were much more likely to be led astray by faulty AI-provided answers.

Despite the results, though, the researchers point out that "cognitive surrender is not inherently irrational." While relying on an LLM that's wrong half the time (as in these experiments) has obvious downsides, a "statistically superior system" could plausibly give better-than-human results in domains such as "probabilistic settings, risk assessment, or extensive data," the researchers suggest. "As reliance increases, performance tracks AI quality," the researchers write, "rising when accurate and falling when faulty, illustrating the promises of superintelligence and exposing a structural vulnerability of cognitive surrender." In other words, letting an AI do your reasoning means your reasoning is only ever going to be as good as that AI system. As always, let the prompter beware.


Comments Filter:
  • by Ender_Wiggin ( 180793 ) on Saturday April 04, 2026 @10:05AM (#66076896)

    "On the other side are those who routinely outsource their critical thinking to what they see as an all-knowing machine"

    I've run into these people; they're the worst. It was bad enough dealing with people whose mindset was 'If I can't find it on Google then it doesn't exist,' and now it seems these people have moved on to AI and gotten dumber but think they're even smarter.

    • by jobslave ( 6255040 ) on Saturday April 04, 2026 @10:06AM (#66076898)

      There's already a name for this phenomenon. MAGA

      • by Anonymous Coward on Saturday April 04, 2026 @10:42AM (#66076972)

        You’re not wrong. Remember when they kept saying Kamala would start a war?

        Now the orange tub of shit started one himself and it’s totally different and necessary. They also all of a sudden care about the people of Iran.

        • by mjwx ( 966435 )

          You’re not wrong. Remember when they kept saying Kamala would start a war?

          Now the orange tub of shit started one himself and it’s totally different and necessary. They also all of a sudden care about the people of Iran.

          I figured out years ago that what the far right claims the other side is going to do (or doing) is exactly what they intend to do.

      • There's already a name for this phenomenon. MAGA

        This comment is modded both funny and troll. The true mod should be insightful. However, it's not just MAGA adherents but also people on the left and even apolitical people. Many people surrender their skepticism in politics, science, history, and particularly economics. This is why social media and marketing are so effective, and this phenomenon existed way before AI was a thing.

    • Or worse: because it is on the Internet, it must be true. I had a friend whose entire argument that some conspiracy theory was true was that multiple people had posted things on websites. I countered that I could set up a website detailing how that friend murdered a homeless person one summer.
    • by gweihir ( 88907 )

      They probably have not gotten dumber, but they definitely think they are smarter and finally have reliable truth at their disposal.

      The thing is, these people are the majority (!) of the human race, at around 80%. The reason actually mentally capable people see them less often is filter effects. But think of some random relatives, whom you have probably minimized or cut contact with.

  • by gurps_npc ( 621217 ) on Saturday April 04, 2026 @10:10AM (#66076904) Homepage

    I think what is really going on is that it is not 'fluid IQ', but regular, normal "IQ".

    That is, stupid people either do not realize the AI is wrong, or more likely, they are so used to being corrected by more intelligent people that they just assume the AI must be smarter than they are and do not challenge it.

    I can also see a small number of submissive/shy/apathetic people just accepting the wrong information and thinking it is not worth fixing.

    This kind of thing gets me so mad that I would never just accept that.

    • I think what is really going on is that it is not 'fluid IQ', but regular, normal "IQ".

      "Fluid" intelligence is the ability to think, reason, solve problems, and learn things. "Crystallized" intelligence is your amassed knowledge.

      These are technical terms used in the literature.

      Intelligence is nature's guess as to how complex your environment will be... but there's an out. People with low fluid intelligence have to work harder to understand things, but if they put in the work they can amass a body of knowledge that rivals that of people with high fluid intelligence.

      And of course, lots of peopl

    • by gweihir ( 88907 )

      "Fluid IQ" is just how much of your IQ you actually use. The term was probably invented to not have to tell high IQ people that are not independent thinkers (quite a lot, probably a majority) that they are effectively pretty dump and mentally incapable.

    • by HiThere ( 15173 )

      I don't think it's directly related to IQ. I also don't think it's restricted to chatbots. A lot of people are willing to accept the opinion of any authoritative source that they've accepted. Think religion or political party. Once they accept it, they stop questioning its proclamations.

      Note that this also applies to those who accept the proclamations of scientists or compilers. Once you accept an authoritative source, you pretty much stop questioning it. It's been multiple decades since I really arg

  • by Ragnarok89 ( 1066010 ) on Saturday April 04, 2026 @10:14AM (#66076906)

    ... So I don't have to. I assume it's correct.

  • New religion (Score:3, Interesting)

    by Calydor ( 739835 ) on Saturday April 04, 2026 @10:15AM (#66076908)

    I would really like to see a study trying to correlate being religious to believing whatever the AI tells you. I suspect there's a strong overlap but that's just a gut feeling; I'd love to see it actually tested.

    • Re:New religion (Score:4, Interesting)

      by ClickOnThis ( 137803 ) on Saturday April 04, 2026 @11:07AM (#66077014) Journal

      That would indeed be an interesting study.

      Religions generally accept wisdom from sacred texts. (Yes, I know there are exceptions.) So one would presume that those who are ready to accept information on the authority of sacred texts would accept it from an AI that is perceived as an authority.

      On the other hand, those same religious people could recognize that AI is distinct from their religious texts, and apply a different standard to it.

      • by dfghjk ( 711126 )

        "Religions generally accept wisdom from sacred texts."

        This is false. Religions CREATE privileged texts, which they call "sacred texts" or scriptures, which contain stories that are fabricated. Religions do not "accept wisdom" from these created texts because religions create those texts.

        Now, parishioners could be said to "generally accept wisdom from sacred texts." Perhaps that is what you meant. Religions are a mechanism to control people, scripture is a tool that is used.

        Personally, I think the entire

        • Now, parishioners could be said to "generally accept wisdom from sacred texts." Perhaps that is what you meant. Religions are a mechanism to control people, scripture is a tool that is used.

          Yes, that is in fact what I meant. Thanks for clarifying.

    • Re:New religion (Score:5, Interesting)

      by dvice ( 6309704 ) on Saturday April 04, 2026 @11:07AM (#66077018)

      "Thinking about God increases acceptance of artificial intelligence in decision-making"
      https://pmc.ncbi.nlm.nih.gov/a... [nih.gov]

      • by Calydor ( 739835 )

        Well whaddya know.

        Thanks, an interesting read.

      • by PPH ( 736903 )

        Oblig joke: Computer scientists were putting the finishing touches on their first fully intelligent machine. Once it booted up, they decided to test it with a deeply philosophical question: "Is there a God?"

        "There is now."

      • I strongly have to disagree, albeit only on a personal level. I believe in God, and you can make all the fun of that you want.
        But using AI for decision making? Not even remotely. I can try to tell myself that AI in and of itself is just a tool. And as a tool I've used it. It proved useful for translating resource strings into another language and for looking up some stuff like how to use the fmt lib in c++.
        But seeing how in just the last two years AI-generated content took over almost everything (youtube, s

    • by gweihir ( 88907 )

      There is one thing: about 10-15% of the population are independent thinkers, and about 20-25% (including the former) can be convinced by rational argument. At the same time, about 80% of the human race is religious in one form or another. There will be some special cases and some overlap, for example people who know their religious beliefs are irrational and are just using them to make themselves feel better. But overall, these are the two pools of people we have.

      Now, add that fact-checking AI is typica

      • by HiThere ( 15173 )

        Nobody is an "independent thinker" on every topic. Wherever one is an expert, one tends to be an "independent thinker" in that domain. Where you don't feel knowledgeable, you tend to accept an authoritative source...possibly after doing some amount of checking to see whether others think it reliable.

        • by Calydor ( 739835 )

          Nobody has the necessary time or energy to be an independent thinker on every topic.

          Let's say, for example, that you're vegan. Your local supermarket advertises a pizza as being vegan. Are you going to accept this as fact, or are you going to spend your time doing research to verify that the pizza really is vegan, no cross-contaminants anywhere in the production chain, etc.?

          Or take Linux. All the source code is available to read, but do you? Do you really? Did you read the ENTIRE Linux source code before in

        • That's not an independent thinker. That's someone who routinely doubts everything. But as Henri Poincaré already observed more than 100 years ago: to doubt everything and to believe everything are two equally convenient strategies, both of which relieve us of the necessity of thinking or reflection. (And I know, a witty saying proves nothing.)
        • by gweihir ( 88907 )

          And fail. This has nothing to do with expertise or education in an area. It has everything to do with how a person approaches a question.

        • by dvice ( 6309704 )

          I think the better term to independent thinker is critical thinking, where it is not about doubting others, but doubting yourself. Do I have enough information about this subject? Could I be wrong? Are my arguments flawed?

          • by HiThere ( 15173 )

            But you've got to do both. Doubting oneself is "critical thinking". Doubting other sources of authority is "independent thinking".

            The thing is, nobody has enough expertise to be an independent thinker in every area. So you essentially MUST delegate your ideas in some areas (variable between people) to external authorities. At which point what you "believe" depends on which authorities you choose.

            A related question is "how firm is that belief?". This also tends to vary wildly with little apparent (to me)

  • Normal (Score:5, Informative)

    by nospam007 ( 722110 ) * on Saturday April 04, 2026 @10:25AM (#66076920)

    50% of us have an IQ of under 100.

    • by gweihir ( 88907 )

      The ability to fact-check seems to be unconnected, or only weakly connected, to intelligence. What you need is to want to know. Most people do not want to know.

    • by dfghjk ( 711126 )

      With the rise of MAGA, it seems there is more than that. SuperKendallism is pervasive.

    • 50% of us have an IQ of under 100.

      Kind sir or madam, may I remind you that this is slashdot.

    • This is not about intelligence, it is about laziness and the willingness to remain lazy even when you know something is wrong.

      More people are willing to turn their brain off than you think, especially when they think no one will notice or care (i.e., no consequences, yet).

      - 'mailing it in' since birth
    • Certainly those who make that claim. A bell curve looks like a bell, not like a triangle.

      • Modern IQ scores are scaled to have a mean of 100 and sd of 15 (see ref 3 on https://en.wikipedia.org/wiki/... [wikipedia.org]). For the whole population the distribution is close enough to symmetric, so when cast to integers the median is also pretty much 100. Thus, the OP. No triangles involved.

        • Still, you got it wrong, because the curve around 100 is flat, and given that IQ is rounded to a whole number, a significant part of the population has an IQ of 100 (or 99.5 to 100.5). That's what the curve-vs.-triangle remark was aimed at. Add the fact that individual results can vary a lot, depending on the exact test series and the current state of mind of the person tested, and results between 95 and 105 are well within the IQ-100 group.
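        For the curious, the mean-100/SD-15 scaling discussed above is easy to sanity-check with a quick sketch using Python's standard-library `statistics.NormalDist` (the band boundaries below are just the ones mentioned in this thread; real score distributions only approximate the ideal normal curve):

```python
from statistics import NormalDist

# IQ scores are conventionally scaled to a normal distribution
# with mean 100 and standard deviation 15.
iq = NormalDist(mu=100, sigma=15)

# Fraction of the population below 100: exactly half, since 100 is
# the mean (and median) of a symmetric distribution.
below_100 = iq.cdf(100)  # 0.5

# Fraction whose score rounds to exactly 100 (i.e. falls in 99.5..100.5).
rounds_to_100 = iq.cdf(100.5) - iq.cdf(99.5)  # roughly 2.7%

# Fraction in the wider 95..105 band mentioned above.
band_95_105 = iq.cdf(105) - iq.cdf(95)  # roughly 26%

print(f"below 100:   {below_100:.3f}")
print(f"99.5..100.5: {rounds_to_100:.3f}")
print(f"95..105:     {band_95_105:.3f}")
```

So under this model the "50% are below 100" line is true by construction, and about a quarter of the population lands within five points of 100 either way.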
  • by dark.nebulae ( 3950923 ) on Saturday April 04, 2026 @10:29AM (#66076924)

    The interesting question isn't that 73% of people accept faulty AI reasoning…

    It's which 73%.

    What happens to the segment of the population that already struggles with critical thinking? The folks who've historically bought into things like flat earth, QAnon, miracle cures, etc.

    Those groups didn't suddenly appear because of AI, they existed long before it. They already demonstrate a tendency to accept authoritative-sounding information without much scrutiny.

    So what changes now?

    If anything, AI just becomes another "authority" to outsource thinking to. And per this study, those already predisposed to see AI as authoritative are the most likely to be led astray.

    Sure, today if you ask Claude or ChatGPT about flat earth, you'll get a correct answer. But we all know these systems can be nudged, reframed, or persistence-prompted into saying almost anything.

    And here's the real problem:

    If someone didn't question YouTube videos, Facebook posts, or random blogs… why would they suddenly start questioning AI?

    They won't.

    So the outcome isn't that AI "fixes" bad thinking. It likely just amplifies whatever thinking was already there.

    For people with strong critical thinking skills, AI is a tool.

    For people without it, it's just a more convincing storyteller.

    That seems like the real risk.

    • The interesting question isn't that 73% of people accept faulty AI reasoning…

      It's which 73%.

      What happens to the segment of the population that already struggles with critical thinking? The folks who've historically bought into things like flat earth, QAnon, miracle cures, etc.

      Those groups didn't suddenly appear because of AI, they existed long before it. They already demonstrate a tendency to accept authoritative-sounding information without much scrutiny.

      So what changes now?

      We name the new AI "PT Barnum" and turn it up to 11 via a Spinal Tap.

      Then we fire up the industrial popcorn machine, and remember the good ol' days.

      Good luck to anyone born after nineteen-hundred-the-fuck-off-my-lawn.

  • Critical Thinking (Score:5, Insightful)

    by Tomahawk ( 1343 ) on Saturday April 04, 2026 @10:29AM (#66076926) Homepage
    is something that just isn't taught properly, if at all, in schools. We see the lack of it everywhere. So it's understandable that many are offloading this to something else because they just don't know how to do it themselves. Laziness is also a factor, yes. But inability, I feel, is the biggest factor here.
    • is something that just isn't taught properly, if at all, in schools.

      Schools in the US generally stop emphasizing the teaching of critical thinking by about the eighth grade. There are a number of contributing reasons for that (some blame curricula that are focused more on compliance and passing standardized tests than on learning how to think). As individuals are generally considered to still be learning how to think and reason until their early 20s, the lack of teaching critical thinking well into high school leaves a significant part of the population under prepared for under

    • by gweihir ( 88907 )

      Critical thinking, or independent thinking, is something most people do not do and do not like doing, because they would learn things that frighten them, for example how little they understand the world. Reasonable estimates put independent thinkers at around 10-15% of the population (going up to around 20% if you add those who can be convinced by rational argument). The rest prefer a convenient illusion or lie to actual insight.

      I do not think this is connected to education or intelligence anymore. I thin

    • Laziness is also a factor, yes. But inability, I feel, is the biggest factor here.

      I always phrase it as “some of us need to roll a 20 to think critically.”

    • is something that just isn't taught properly, if at all, in schools. We see the lack of it everywhere.

      So it's understandable that many are offloading this to something else because they just don't know how to do it themselves.

      Laziness is also a factor, yes. But inability, I feel, is the biggest factor here.

      People able to critically think are far harder to control than those who can't. The powers that be can't have that, so it's no wonder that the "dumbing down" of Americans has been in place for several years now.

  • I was working on a little Python/Django project for myself, fun to code it but also useful to me.

    I used it to try out the free tier of an AI in the IDE.
    It did a great job, although its solutions are overcomplicated.
    But, worse, I didn't try to understand what it wrote, and now I have to read through all the project code to be able to work on it myself again.
    And I don't feel like doing that.
    Fun is writing my own code; *work* is reading someone else's.

    I have surrendered my project to the AI.
  • by BrendaEM ( 871664 ) on Saturday April 04, 2026 @10:59AM (#66077004) Homepage
    It reads like, "I think my office chair is sentient."
    • The Computer is your friend. The Computer is crazy. The Computer wants to make you happy. This will drive you crazy.
  • Exercise (Score:5, Insightful)

    by dskoll ( 99328 ) on Saturday April 04, 2026 @11:09AM (#66077024) Homepage

    Imagine if a bunch of tech bros said: "Hey, you don't need exercise. It's totally fine if your muscles atrophy. After all, we have technology to move you around and it can do so much more quickly than your muscles ever could!" We'd laugh them out of town.

    Well, guess what? If you don't exercise your brain, it atrophies. If you outsource your thinking, you eventually become unable to think.

  • If you are good at using tools, you will be good at using AI when you need it. If you made stupid decisions before, you will still make stupid decisions using AI to back them up.
    • This, exactly. People are assuming that correlation equals causation here. IQ and critical thinking skills, as well as reading comprehension ability, have been on the decline for quite some time. AI is actually much better at getting things right than the average person who uses google to feed their confirmation bias and find echo chambers in which they can bask. I'm not going to post a bunch of links, because anyone can use the prompt "studies showing IQ and critical thinking in decline" in your "favorite"
  • Kind of predictable, that result. I mean, something like 80% of all people routinely do not fact-check when trying (and usually failing) to think for themselves, so why would they suddenly start to fact-check when using AI? On top of that, fact-checking is a skill that needs to be practiced to get good at it. When you never do it, you suck at it, and nothing can fix that except starting to fact-check things. But then inconvenient things start to intrude, like all your friends not being that smart either and yo

  • People giving up their thinking abilities is nothing new.

  • Learning to use new, immature tech is inherently problematic, and we never get it right at first.
    The tech will improve, and our way of using it will adapt.

    • The problem isn't the tools; it is the people. When AI is better, human critical thinking and deductive reasoning capabilities will decline further, not magically return to a higher measure.
  • I was playing Elite Dangerous a couple of weeks ago and decided to ask Copilot to analyze my Exploration Mandalay just to see what it would say. It said choosing the Mandalay was an unusual choice for an Exploration ship. What? It has one of the longest jump ranges in the game!


  • This started in the '90s: being an idiot was suddenly okay. Being a troll was okay. This is the reason I go to bars to have a chat.
  • The 2nd group is the vast majority of people in everyday life who delegate their thinking to the media or the mob.

  • Is it a coincidence that every AI advocate I know was pitching bitcoin and wouldn't shut up about it 8 years ago? Did they "surrender" their cognitive abilities or never have them in the first place? I work for a large company that has gone all-in on AI. We've used ML models for legit reasons in our products for decades. We've given every programmer a generous amount of LLM AI time with all the major vendors. About 10% of the AI users are religious advocates...meaning that they won't shut up a
  • Sure, there are downsides, but if knuckle-draggers can ask AI and go with that, odds are they'll come across as so much smarter than they are!
  • All of this makes me remember a short story reading assignment in the 5th grade. It was about kids growing up in a society where machines did all of the intellectual work. To them, writing was 'squiggles'. They managed to disable a filter on their "bard" (a story teller for children) and had it tell them a tale of machines ruling over Man.

    Nobody expects prophecy from a 5th grade reading assignment.

  • Is this the purpose of war? To get rid of the dumbest and most violent people?
  • by mileshigh ( 963980 ) on Saturday April 04, 2026 @07:46PM (#66077702)

    I'm a careful coder to the point of paranoia, as you'd expect for someone who came up writing life-critical software.

    Yet, I keep falling for subtly-imperfect "helpful" AI suggestions for low-level, supposedly-simple code. Because humans are inherently susceptible to being led astray, lazy at some level. Because it's hard to un-see things that seem to make sense.

  • by corporate zombie ( 218482 ) on Saturday April 04, 2026 @08:33PM (#66077750)

    This is interesting, but is it really surprising? Fluent, confident presentations are a go-to tool for anyone (or anything) looking to influence someone. Humans have been doing it since language was invented, probably even before that. That we've got LLMs that communicate through natural language and this still holds isn't all that surprising.
