'Cognitive Surrender' Leads AI Users To Abandon Logical Thinking, Research Finds (arstechnica.com) 137
An anonymous reader quotes a report from Ars Technica: When it comes to large language model-powered tools, there are generally two broad categories of users. On one side are those who treat AI as a powerful but sometimes faulty service that needs careful human oversight and review to detect reasoning or factual flaws in responses. On the other side are those who routinely outsource their critical thinking to what they see as an all-knowing machine. Recent research goes a long way to forming a new psychological framework for that second group, which regularly engages in "cognitive surrender" to AI's seemingly authoritative answers. That research also provides some experimental examination of when and why people are willing to outsource their critical thinking to AI, and how factors like time pressure and external incentives can affect that decision.
Overall, across 1,372 participants and over 9,500 individual trials, the researchers found subjects were willing to accept faulty AI reasoning a whopping 73.2 percent of the time, while only overruling it 19.7 percent of the time. The researchers say this "demonstrate[s] that people readily incorporate AI-generated outputs into their decision-making processes, often with minimal friction or skepticism." In general, "fluent, confident outputs [are treated] as epistemically authoritative, lowering the threshold for scrutiny and attenuating the meta-cognitive signals that would ordinarily route a response to deliberation," they write. These kinds of effects weren't uniform across all test subjects, though. Those who scored highly on separate measures of so-called fluid IQ were less likely to rely on the AI for help and were more likely to overrule a faulty AI when it was consulted. Those predisposed to see AI as authoritative in a survey, on the other hand, were much more likely to be led astray by faulty AI-provided answers.
Despite the results, though, the researchers point out that "cognitive surrender is not inherently irrational." While relying on an LLM that's wrong half the time (as in these experiments) has obvious downsides, a "statistically superior system" could plausibly give better-than-human results in domains such as "probabilistic settings, risk assessment, or extensive data," the researchers suggest. "As reliance increases, performance tracks AI quality," the researchers write, "rising when accurate and falling when faulty, illustrating the promises of superintelligence and exposing a structural vulnerability of cognitive surrender." In other words, letting an AI do your reasoning means your reasoning is only ever going to be as good as that AI system. As always, let the prompter beware.
Oh Brave New World with such people in it (Score:5, Insightful)
"On the other side are those who routinely outsource their critical thinking to what they see as an all-knowing machine"
I've run into these people; they're the worst. It was bad enough dealing with people whose mindset was 'If I can't find it on Google then it doesn't exist,' and it seems these people have moved on to AI and gotten dumber but think they're even smarter.
Re: Oh Brave New World with such people in it (Score:5, Funny)
There's already a name for this phenomenon. MAGA
Re: Oh Brave New World with such people in it (Score:5, Informative)
You’re not wrong. Remember when they kept saying Kamala would start a war?
Now the orange tub of shit started one himself and it’s totally different and necessary. They also all of a sudden care about the people of Iran.
Re: (Score:2)
You’re not wrong. Remember when they kept saying Kamala would start a war?
Now the orange tub of shit started one himself and it’s totally different and necessary. They also all of a sudden care about the people of Iran.
I figured out years ago that what the far right claims the other side is going to do (or doing) is exactly what they intend to do.
Re: (Score:2)
There's already a name for this phenomenon. MAGA
This comment is modded both funny and troll. The true mod should be insightful. However, it's not just MAGA adherents but also people on the left and even apolitical people. Many people surrender their skepticism in politics, science, history, and particularly economics. This is why social media and marketing are so effective, and this phenomenon existed way before AI was a thing.
Re: Oh Brave New World with such people in it (Score:4, Interesting)
Re: (Score:2)
Re: (Score:2)
EXACTLY! Do NOT believe the conspiracy theorists! The laptop was a Russian plant, the Steele Report is gospel, the lab leak theory has zero credibility, and Smollet was attacked by MAGA.
Re: (Score:2)
Re: (Score:2)
Never forget! Life was just peachy under Biden! Thank you for your honesty!
Re: (Score:2)
Re: (Score:2)
Correct: compared to this, life under Biden was Heaven on Earth
Obviously! Any smart NYT reader knows that!
There was no inflation, the fascist police were defunded, Cuba’s political prisoners were happily confined, Maduro was still in charge of a socialist paradise, people were free to judge by identity instead of character, millions of Afghan women were put where democrats thought was best for them, Putin was thoughtfully encouraged to take Ukraine, and, most importantly, the borders were sealed shut.
Plus tens of billions of unaudited cash was pouring into democr
Re: (Score:2)
There was no inflation
Biden inherited Trump's economy. Of course, inflation was high before he got it under control. And now that we're back under Trump's economy, how's that inflation rate, princess?
the fascist police were defunded
They were? When?
Cuba's political prisoners were happily confined
What?
Maduro was still in charge of a socialist paradise
Sorry, didn't realize we're the world police
people were free to judge by identity instead of character
This whole thread is you judging people by their identity, princess.
millions of Afghan women were put where democrats thought was best for them
Tell me again who negotiated the release of 5000 Taliban prisoners and the withdrawal date. Who was it again who invited the Taliban to camp david? Was Biden supposed to direct our military to recaptur
Re: Oh Brave New World with such people in (Score:2)
>> Plus tens of billions of unaudited cash was pouring into democrat aligned pockets: autism centers without any patients, school lunch programs without any kids, important DEI research, empty hospices, a high speed railway to nowhere, lucrative homeless programs that solved nothing, and USAID sinecures.
> none of that happened, wtf are you talking about
Sadly, many folks pin their hopes and dreams on a colorful facade of shrill smug ignorance.
And even more sadly, some actively paint the facade.
It
Re: (Score:2)
Sadly, many folks pin their hopes and dreams on a colorful facade of shrill smug ignorance.
Correct. Many folks like you are stuck smugly and proudly ignoring the price of gas, groceries...everything. It must be exhausting.
Re: (Score:2)
Notice apparently’s tactics? Deflecting from the mass corruption that’s lining his pockets to decontextualized nonsense? He’s taking advantage of the fact that progressives have a mile wide blind spot. Namely, progressives, just like conservatives, fully recognize their take is biased to one side on individual topics - but, unlike conservatives, fail to see when their bias is systematically skewed across a wide variety of aspects. This is particularly notable in a variety of ways:
- Progres
Re: (Score:2)
The list is “opinion”? No they’re all FACTS!
Don’t you read the New York Times?
Or are you on the side of uneducated deplorables?
Re: (Score:2)
Re: Oh Brave New World with such people in it (Score:2)
With AI, it is now easier than ever to demonstrate why you should not rely on what is basically "digital hearsay".
Re: (Score:2)
They probably have not gotten dumber, but they definitely think they are smarter and finally have reliable truth at their disposal.
The thing is, these people are the majority (!) of the human race, at around 80%. The reason actually mentally capable people see them less often is filter effects. But think of some random relatives, whom you have probably minimized or cut contact with.
Re: AI are the new compilers (Score:2)
Another word for stupidity. (Score:5, Insightful)
I think what is really going on is not 'fluid IQ', but regular, normal "IQ".
That is, stupid people either do not realize the AI is wrong, or more likely, they are so used to being corrected by more intelligent people that they just assume the AI must be smarter than they are and do not challenge it.
I can also see a small number of submissive/shy/apathetic people just accepting the wrong information and thinking it is not worth fixing.
This kind of thing gets me so mad that I would never just accept that.
Fluid versus crystallized (Score:3)
I think what is really going on is not 'fluid IQ', but regular, normal "IQ".
"Fluid" intelligence is the ability to think, reason, solve problems, and learn things. "Crystallized" intelligence is your amassed knowledge.
These are technical terms used in the literature.
Intelligence is nature's guess as to how complex your environment will be... but there's an out. People with low fluid intelligence have to work harder to understand things, but if they put in the work they can amass a body of knowledge that rivals that of people with high fluid intelligence.
And of course, lots of peopl
Re: (Score:2)
"Fluid IQ" is just how much of your IQ you actually use. The term was probably invented to not have to tell high IQ people that are not independent thinkers (quite a lot, probably a majority) that they are effectively pretty dump and mentally incapable.
Re: (Score:2)
I don't think it's directly related to IQ. I also don't think it's restricted to chatbots. A lot of people are willing to accept the opinion of any authoritative source that they've accepted. Think religion or political party. Once they accept it, they stop questioning its proclamations.
Note that this also applied to those who accept the proclamations of scientists or compilers. Once you accept an authoritative source, you pretty much stop questioning it. It's been multiple decades since I really arg
my AI posted this comment (Score:4, Informative)
... So I don't have to. I assume it's correct.
Re: (Score:3)
My AI says there aren't enough em dashes, so you're probably both wrong.
New religion (Score:3, Interesting)
I would really like to see a study trying to correlate being religious to believing whatever the AI tells you. I suspect there's a strong overlap but that's just a gut feeling; I'd love to see it actually tested.
Re:New religion (Score:4, Interesting)
That would indeed be an interesting study.
Religions generally accept wisdom from sacred texts. (Yes, I know there are exceptions.) So one would presume that those who are ready to accept information on the authority of sacred texts would accept it from an AI that is perceived as an authority.
On the other hand, those same religious people could recognize that AI is distinct from their religious texts, and apply a different standard to it.
Re: (Score:2)
"Religions generally accept wisdom from sacred texts."
This is false. Religions CREATE privileged texts, which they call "sacred texts" or scriptures, which contain stories that are fabricated. Religions do not "accept wisdom" from these created texts because religions create those texts.
Now, parishioners could be said to "generally accept wisdom from sacred texts." Perhaps that is what you meant. Religions are a mechanism to control people, scripture is a tool that is used.
Personally, I think the entire
Re: (Score:2)
Now, parishioners could be said to "generally accept wisdom from sacred texts." Perhaps that is what you meant. Religions are a mechanism to control people, scripture is a tool that is used.
Yes, that is in fact what I meant. Thanks for clarifying.
Re:New religion (Score:5, Interesting)
"Thinking about God increases acceptance of artificial intelligence in decision-making"
https://pmc.ncbi.nlm.nih.gov/a... [nih.gov]
Re: (Score:2)
Well whaddya know.
Thanks, an interesting read.
Re: (Score:2)
Oblig joke: Computer scientists were putting the finishing touches on their first fully intelligent machine. Once booted up, they decided to test it with a deeply philosophical question: "Is there a God?"
"There is now."
Re: (Score:2)
I strongly have to disagree, albeit only on a personal level. I believe in God, and you can make all the fun of that you want.
But using AI for decision making? Not even remotely. I can try to tell myself that AI in and of itself is just a tool. And as a tool I've used it. It proved useful for translating resource strings into another language and for looking up some stuff like how to use the fmt lib in c++.
But seeing how in just the last two years AI-generated content took over almost everything (youtube, s
Re: (Score:2)
There is one thing: About 10-15% of the population are independent thinkers and about 20-25% (including the former) can be convinced by rational argument. At the same time about 80% of the human race is religious in one form or another. There will be some special cases and some overlap. For example people that know their religious beliefs are irrational and they are just using them to make themselves feel better. But overall, these are the two pools of people we have.
Now, add that fact-checking AI is typica
Re: (Score:2)
Nobody is an "independent thinker" on every topic. Wherever one is an expert, one tends to be an "independent thinker" in that domain. Where you don't feel knowledgeable, you tend to accept an authoritative source...possibly after doing some amount of checking to see whether others think it reliable.
Re: (Score:3)
Nobody has the necessary time or energy to be an independent thinker on every topic.
Let's say, for example, that you're vegan. Your local supermarket advertises a pizza as being vegan. Are you going to accept this as fact, or are you going to spend your time doing research to verify that the pizza really is vegan, no cross-contaminants anywhere in the production chain, etc.?
Or take Linux. All the source code is available to read, but do you? Do you really? Did you read the ENTIRE Linux source code before in
Re: New religion (Score:2)
Re: (Score:2)
And fail. This has nothing to do with expertise or education in an area. It has everything to do with how a person approaches a question.
Re: (Score:2)
I think the better term to independent thinker is critical thinking, where it is not about doubting others, but doubting yourself. Do I have enough information about this subject? Could I be wrong? Are my arguments flawed?
Re: (Score:2)
But you've got to do both. Doubting oneself is "critical thinking". Doubting other sources of authority is "independent thinking".
The thing is, nobody has enough expertise to be an independent thinker in every area. So you essentially MUST delegate your ideas in some areas (variable between people) to external authorities. At which point what you "believe" depends on which authorities you choose.
A related question is "how firm is that belief?". This also tends to vary wildly with little apparent (to me)
Re:New religion (Score:5, Informative)
Re: (Score:2)
You just exposed yourself as a stupid person. Because all reliable research says exactly the opposite: The more conservative, the more mentally incapable. And that is not even me trying to insult you. That is just a solid fact.
Normal (Score:5, Informative)
50% of us have an IQ of under 100.
Re: (Score:2)
Ability to fact-check seems to not be or only weakly connected to intelligence. What you need is to want to know. Most people do not want to know.
Re: (Score:3)
With the rise of MAGA, it seems there is more than that. SuperKendallism is pervasive.
Re: (Score:2)
50% of us have an IQ of under 100.
Kind sir or madam, may I remind you that this is slashdot.
Re: (Score:2)
More people are willing to turn their brain off than you think, especially when they think no one will notice or care (i.e., no consequences (yet)).
- 'mailing it in' since birth
Re: (Score:2)
Certainly those who make that claim. A bell curve looks like a bell, not like a triangle.
Re: (Score:2)
Modern IQ scores are scaled to have a mean of 100 and sd of 15 (see ref 3 on https://en.wikipedia.org/wiki/... [wikipedia.org]). For the whole population the distribution is close enough to symmetric, so when cast to integers the median is also pretty much 100. Thus, the OP. No triangles involved.
Re: Normal (Score:2)
Re: (Score:2)
Or is it 49%? If 100 is adjusted to be average across a target population, there's going to be a lot of people right on the mark... And it's not a linear distribution, so there are more people scored 100 than there are 148 or 71... I should ask ChatGPT... ChatGPT said that 50% are above 100, 50% are below 100. Yeah, we're screwed.
And ChatGPT is not necessarily correct here. Neither was George Carlin when he said:
Think of how stupid the average person is, and realize half of them are stupider than that.
ChatGPT and Carlin are confusing mean with median. The latter divides a population 50-50, the former not necessarily so.
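The mean/median point above is easy to demonstrate with a quick simulation. The sketch below uses only the Python standard library; the exponential distribution is just an illustrative stand-in for a skewed population, not a model of IQ scores (which are deliberately scaled to be symmetric, so mean and median coincide there):

```python
# Sketch: mean vs. median on a skewed distribution (exponential, rate 1).
# Half the population is always below the *median*; the share below the
# *mean* can differ when the distribution is not symmetric.
import random

random.seed(0)
sample = [random.expovariate(1.0) for _ in range(200_000)]
sample.sort()

median = sample[len(sample) // 2]           # theoretical value: ln 2 ~ 0.693
mean = sum(sample) / len(sample)            # theoretical value: 1.0
below_mean = sum(x < mean for x in sample) / len(sample)

print(round(median, 2))      # ~0.69
print(round(mean, 2))        # ~1.0
print(round(below_mean, 2))  # ~0.63, i.e. NOT half the population
```

So "half of them are stupider than that" is only guaranteed if "that" means the median, which for IQ happens to equal the mean only because of how the scale is constructed.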
Re: Normal (Score:2)
Re: (Score:2)
And it's not a linear distribution
Are you certain that there's no one to the left of zero? I'd beg to differ.
Re: (Score:2)
You'll rightly be modded Troll soon enough, but for the record: research has shown no significant difference in the average intelligence of male and female human beings.
However, there is some evidence that genders are better on average at certain kinds of tasks: females on average are better at verbal and memory-intensive tasks, whereas males on average are stronger with spatial and mathematical reasoning. But of course, the two populations overlap significantly: we have plenty of outstanding female scienti
Re: (Score:2)
The topic was intelligence. Not your false anti-female invective. Take it somewhere else, it's not welcome here.
Re: (Score:2)
No conversation cannot be improved with more bigotry, right MAGA?
Re: (Score:2)
Apparently in the UK it's 51.8% due to a certain vote, some people say. :-)
What happens to the masses? (Score:5, Interesting)
The interesting question isn't that 73% of people accept faulty AI reasoning…
It's which 73%.
What happens to the segment of the population that already struggles with critical thinking? The folks who've historically bought into things like flat earth, QAnon, miracle cures, etc.
Those groups didn't suddenly appear because of AI, they existed long before it. They already demonstrate a tendency to accept authoritative-sounding information without much scrutiny.
So what changes now?
If anything, AI just becomes another "authority" to outsource thinking to. And per this study, those already predisposed to see AI as authoritative are the most likely to be led astray.
Sure, today if you ask Claude or ChatGPT about flat earth, you'll get a correct answer. But we all know these systems can be nudged, reframed, or persistence-prompted into saying almost anything.
And here's the real problem:
If someone didn't question YouTube videos, Facebook posts, or random blogs… why would they suddenly start questioning AI?
They won't.
So the outcome isn't that AI "fixes" bad thinking. It likely just amplifies whatever thinking was already there.
For people with strong critical thinking skills, AI is a tool.
For people without it, it's just a more convincing storyteller.
That seems like the real risk.
PT Barnum Reincarnate, at your service. (Score:2)
The interesting question isn't that 73% of people accept faulty AI reasoning…
It's which 73%.
What happens to the segment of the population that already struggles with critical thinking? The folks who've historically bought into things like flat earth, QAnon, miracle cures, etc.
Those groups didn't suddenly appear because of AI, they existed long before it. They already demonstrate a tendency to accept authoritative-sounding information without much scrutiny.
So what changes now?
We name the new AI "PT Barnum" and turn it up to 11 via a Spinal Tap.
Then we fire up the industrial popcorn machine, and remember the good ol' days.
Good luck to anyone born after nineteen-hundred-the-fuck-off-my-lawn.
Critical Thinking (Score:5, Insightful)
Re: (Score:2)
is something that just isn't taught properly, if at all, in schools.
Schools in the US generally stop emphasizing the teaching of critical thinking by about the eighth grade. There are a number of contributing reasons for that (some blame curricula that are focused more on compliance and passing standardized tests than learning how to think). As individuals generally are considered to still be learning how to think and reason until their early 20s, the lack of teaching critical thinking well into High School leaves a significant part of the population under prepared for under
Re: (Score:3)
Critical thinking or independent thinking is something most people do not do and do not like doing, because they would learn things that frighten them, for example how little they understand the world. Reasonable estimates put independent thinkers at around 10-15% of the population (goes up to around 20% if you add those that can be convinced by rational argument). The rest prefers a convenient illusion or lie to actual insight.
I do not think this is connected to education or intelligence anymore. I thin
Re: (Score:2)
Laziness is also a factor, yes. But inability, I feel, is the biggest factor here.
I always phrase it as “some of us need to roll a 20 to think critically.”
Re: (Score:2)
is something that just isn't taught properly, if at all, in schools. We see the lack of it everywhere.
So it's understandable that many are offloading this to something else because they just don't know how to do it themselves.
Laziness is also a factor, yes. But inability, I feel, is the biggest factor here.
People able to critically think are far harder to control than those who can't. The powers that be can't have that, so it's no wonder that the "dumbing down" of Americans has been in place for several years now.
Just had some of this (Score:2)
I tried out the free tier of an AI in the IDE.
It did a great job, although its solutions are overcomplicated.
But, worse, I didn't try and understand what it wrote and now I have to read through all the project code to be able to work on it myself again.
And I don't feel like doing that.
Fun is writing my own code, *work* is reading someone else's.
I have surrendered my project to the AI.
Mentally ill people using human terms for AI (Score:5, Informative)
Re: (Score:3)
Exercise (Score:5, Insightful)
Imagine if a bunch of tech bros said: "Hey, you don't need exercise. It's totally fine if your muscles atrophy. After all, we have technology to move you around and it can do so much more quickly than your muscles ever could!" We'd laugh them out of town.
Well, guess what? If you don't exercise your brain, it atrophies. If you outsource your thinking, you eventually become unable to think.
Re:Exercise (Score:4, Insightful)
Your mistake here is assuming most people think without AI at their disposal. Usually they just repeat something they have heard and liked and then convince themselves they have had a great insight. This approach is widespread.
Re: (Score:3)
It's not just widespread, it's universal. What varies from person to person is the domain that they apply thinking to, and how they validate the authority they choose to trust.
Re:Exercise (Score:4, Interesting)
No. There is a group, the "independent thinkers", that does not use this crap. They are small, admittedly, but they do exist. The usual estimate is 10-15% of the population and education seems to make no difference.
Re: (Score:2)
Stack Overflow
Re: (Score:2)
Behold, some tech bros:
https://www.historyhit.com/fac... [historyhit.com]
https://en.wikipedia.org/wiki/... [wikipedia.org]
https://en.wikipedia.org/wiki/... [wikipedia.org]
https://en.wikipedia.org/wiki/... [wikipedia.org]
They might have laughed then, but they're not laughing now.
Ironically, tech bros are into fitness now (Score:2)
Imagine if a bunch of tech bros said: "Hey, you don't need exercise. It's totally fine if your muscles atrophy. After all, we have technology to move you around and it can do so much more quickly than your muscles ever could!" We'd laugh them out of town.
Well, guess what? If you don't exercise your brain, it atrophies. If you outsource your thinking, you eventually become unable to think.
I've been into working out since I was a child...was born with the obesity gene and have to workout hard to be less of a fatty. Now all the execs are into biohacking, fitness, MMA, etc and won't shut up about it...quoting Huberman, Attia, and everyone else on Rogan. The most obnoxious is Pavel Tsatsouline...if another annoying exec talks to me about kettlebells, I'll fucking throw one at him.
It's fucking depressing...these guys used to see the hope and promise in technology and devices and making the
It's a new tool (Score:2)
Re: (Score:2)
Dumb people remain dumb when using AI (Score:2)
Kind of predictable, that result. I mean, something like 80% of all people routinely do not fact-check when trying (and usually failing) to think for themselves, so why would they suddenly start to fact-check when using AI? On top of that, fact-checking is a skill that needs to be practiced to get good at it. When you never do it, you suck at it and nothing can fix that except starting to fact-check things. But then inconvenient things start to intrude, like all your friends not being that smart either and yo
That's Just Fox News (Score:2)
People giving up their thinking abilities is nothing new.
As expected (Score:2)
Learning to use new, immature tech is inherently problematic and we never get it right at first
The tech will improve and our way of using it will adapt
Re: (Score:2)
Elite (Score:2)
I was playing Elite Dangerous a couple of weeks ago and decided to ask Copilot to analyze my Exploration Mandalay just to see what it would say. It said choosing the Mandalay was an unusual choice for an Exploration ship. What? It has one of the longest jump ranges in the game!
Re: (Score:2)
Re: (Score:2)
You would lose that bet because it is definitely one of the top exploration ships.
https://elite-dangerous.fandom... [fandom.com]
Re: Elite (Score:2)
Ha! (Score:2)
2nd group (Score:2)
The 2nd group is the vast majority of people in everyday life who delegate their thinking to the media or the mob.
Bitcoin fans became AI evangelists...coincidence? (Score:2)
Really happy about this (Score:2)
Wow, old memory (Score:2)
All of this makes me remember a short story reading assignment in the 5th grade. It was about kids growing up in a society where machines did all of the intellectual work. To them, writing was 'squiggles'. They managed to disable a filter on their "bard" (a story teller for children) and had it tell them a tale of machines ruling over Man.
Nobody expects prophecy from a 5th grade reading assignment.
Re: (Score:2)
Purpose of war (Score:2)
Re: (Score:2)
Everyone falls for it at some level (Score:4, Insightful)
I'm a careful coder to the point of paranoia, as you'd expect for someone who came up writing life-critical software.
Yet, I keep falling for subtly-imperfect "helpful" AI suggestions for low-level, supposedly-simple code. Because humans are inherently susceptible to being led astray, lazy at some level. Because it's hard to un-see things that seem to make sense.
Yes, and... (Score:3)
This is interesting but is it really surprising. Fluent, confident presentations are a go-to tool for any one/thing looking to influence someone. Humans have been doing it since language was invented. Probably even before that. That we've got LLMs that communicate through natural language and this still holds isn't all that surprising.
Re: (Score:2)
Do you avoid random number generators just because they are not actually random? No, you just take that into account when using them. There are methods that allow you to get a million answers from the AI in a sequence without a single mistake. ( https://www.youtube.com/watch?... [youtube.com] )
Re: (Score:2)
There are methods that allow you to get a million answers from the AI in a sequence without a single mistake. ( https://www.youtube.com/watch?... [youtube.com] )
There are probably many methods to ensure that AI can do something accurately and correctly.
Ever wonder how many methods there are to manipulate and convince AI that it's wrong?
Today's AI tends to remind me of Google hacking 20+ years ago. I fear the early days can present challenges we haven't even thought of trying to curtail or control yet.
Re: (Score:2)
It seems to be a population stereotype with about 10-15% willing to fact check (and hence getting good at it due to experience) and around 25% in total that are open to rational argument. The rest just wants to feel good about themselves and any lie or illusion that does that is just fine with them.
It does not seem to really be connected to IQ either. It seems to be a fundamental personality defect that transcends ability to handle complexity. I mean, as long as you are smart enough to read, you can
Re: (Score:2)
This is not only true, it is the most important takeaway. AI has not created these two kinds of users, they have always existed.