
'AI Is Not Intelligent': The Atlantic Criticizes 'Scam' Underlying the AI Industry (msn.com)
The Atlantic makes the case that "the foundation of the AI industry is a scam" and that AI "is not what its developers are selling it as: a new class of thinking — and, soon, feeling — machines."
[OpenAI CEO Sam] Altman brags about ChatGPT-4.5's improved "emotional intelligence," which he says makes users feel like they're "talking to a thoughtful person." Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be "smarter than a Nobel Prize winner." Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create "models that are able to understand the world around us." These statements betray a conceptual error: Large language models do not, cannot, and will not "understand" anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.
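The "statistically informed guesses" the article describes can be illustrated at toy scale. The sketch below is a bigram word model, a hypothetical and drastically simplified stand-in (real LLMs use neural networks over subword tokens, not raw word counts): it produces text purely from co-occurrence statistics, with no representation of meaning anywhere.

```python
import random
from collections import Counter, defaultdict

# Toy corpus standing in for "nearly the entire internet".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: a bigram model, the simplest
# possible version of "statistically informed guessing".
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    # Sample the next word in proportion to how often it followed
    # `prev` in the training data; no meaning is consulted anywhere.
    counts = follows[prev]
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights)[0]

print(next_word("the"))  # one of: cat, mat, fish
```

Scaling the same idea up by many orders of magnitude (and swapping counts for learned network weights) is, on the article's account, all that an LLM is doing.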
A sociologist and linguist even teamed up for a new book called The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, the article points out: The authors observe that large language models take advantage of the brain's tendency to associate language with thinking: "We encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed."
Several other AI-related social problems, also springing from human misunderstanding of the technology, are looming. The uses of AI that Silicon Valley seems most eager to promote center on replacing human relationships with digital proxies. Consider the ever-expanding universe of AI therapists and AI-therapy adherents, who declare that "ChatGPT is my therapist — it's more qualified than any human could be." Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age....
The good news is that nothing about this is inevitable: According to a study released in April by the Pew Research Center, although 56 percent of "AI experts" think artificial intelligence will make the United States better, only 17 percent of American adults think so. If many Americans don't quite understand how artificial "intelligence" works, they also certainly don't trust it. This suspicion, no doubt provoked by recent examples of Silicon Valley con artistry, is something to build on.... If people understand what large language models are and are not; what they can and cannot do; what work, interactions, and parts of life they should — and should not — replace, they may be spared its worst consequences.
Frenetic churn (Score:5, Insightful)
Re: (Score:2)
like a college intern trying to BS their way through life.
Look, a Bachelor of Science is a perfectly acceptable degree; not everyone is suited to a Master's or PhD. Most of our scientific workforce has perfectly fine BS degrees helping them through life. Oh, maybe you meant the other kind of BS getting people through life.
Re: (Score:2)
In the US, I've only ever seen it abbreviated as BS.
Re: (Score:2)
In the US, I've only ever seen it abbreviated as BS.
Yep, and it’s been an intermittent source of humor here for decades.
Re: (Score:2)
Obviously a joke, but it doesn't really work when Bachelor of Science is never abbreviated to BS, but to BSc.
If I had a rubber for Slashdot I could fix this.
Re: (Score:1)
All degrees are BS. It's not the degree but the social class you belong to: rich people get richer, poor people get poorer. Welcome to our corrupt, irresponsible, and unethical classist society.
Re:Frenetic churn (Score:4, Insightful)
All degrees are BS. It's not the degree but the social class you belong to: rich people get richer, poor people get poorer. Welcome to our corrupt, irresponsible, and unethical classist society.
When you allow nepotism and favoritism, or even try to pick by gender, race, and similar groupings by forcing equal adoption on people, it never works out. Equality of opportunity is what solves this. I’ve been in charge of hiring and seen supposed mechanical engineering majors with a PhD and 10 years of experience fail to explain, with high-school-physics simplicity, how a hammer works. I’ve always hired on ability, not degrees, because I was quite financially invested in the success of the company.
Re: (Score:2)
You are correct.
When it comes to basic facts, if multiple AIs that have independent internal structure and independent training sets state the same claim as a fact, then that's good evidence that it's probably not a hallucination but something actively learned, but it's not remotely close to evidence of it being fact.
Because AIs have no understanding of semantics, only association, that's about as good as AI gets.
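The cross-checking heuristic described above can be sketched in a few lines. This is a hypothetical helper, not any real API: the model outputs are stubbed as plain strings, and the threshold is an arbitrary illustrative choice.

```python
from collections import Counter

def likely_learned(answers, threshold=0.75):
    """Return True when a supermajority of independently built models
    give the same answer. Agreement suggests the claim was genuinely
    present in the training data rather than a one-off hallucination,
    but, as noted above, it is still not evidence that the claim is true."""
    top_answer, count = Counter(answers).most_common(1)[0]
    return count / len(answers) >= threshold

# Stubbed outputs from four hypothetical independent models:
print(likely_learned(["Paris", "Paris", "Paris", "Lyon"]))  # True
print(likely_learned(["Paris", "Lyon", "Rome", "Berlin"]))  # False
```

The point of the sketch is the asymmetry: disagreement is decent evidence of hallucination, while agreement only tells you the association was widely present in training data.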
Books (Score:4, Interesting)
Wrong argument (Score:3)
No one is saying AI isn't useful; the argument is whether it's intelligent in the human sense. The point is it doesn't need to be, and it doesn't matter anyway: all that matters is whether it gives useful output that would be difficult or impossible to reproduce with conventional single-level programming (i.e. writing code to solve the problem directly, not using a learning simulation of neurons).
Re: Wrong argument (Score:2)
Re: (Score:2)
The problem is that, in too much of the end-user space, the answer to that question is: no, it does not. Indeed, quite a lot of its outputs are at best of limited or no use, and in too many cases actively harmful. This problem is only going to get worse if those models are permitted to ingest outputs produced by the first generation, hastening model collapse.
Re: (Score:2)
The problem is that, in too much of the end-user space, the answer to that question is: no, it does not. Indeed, quite a lot of its outputs are at best of limited or no use, and in too many cases actively harmful. This problem is only going to get worse if those models are permitted to ingest outputs produced by the first generation, hastening model collapse, like constantly adding sewage to the water supply. At some point it becomes undrinkable and poisonous.
I've been saying that for a while now. AI will soon reference itself, and at that point hallucinations will become truth. And if, as its cult members say, we eliminate the need for educated people because "the next generation of artificial intelligence will be 'smarter than a Nobel Prize winner,'" then of what use are these bags of goo?
No, it won't be smarter. But we'll definitely be less smart.
I mean that seriously. My use of AI so far, has had it generate some answers on a few queries in fields I'm not
Re:Books (Score:5, Insightful)
Re: (Score:3)
Re:Books (Score:4, Insightful)
No, it's your tendency to focus on the immediately perceived object rather than its cause. The intelligent authors of many books have taught you things using the books as an instrument. Observe the difference with LLMs: unless they're really mechanical Turks (and examples of that kind of "fake it until you make it" have been observed and will probably continue to be observed), they're producing carefully tuned noise rather than conveying intentionally considered ideas.
Re: (Score:2)
I was under the impression that most LLMs are mechanical turks - not in an immediate sense, but in the sense that a lot of workers around the world were involved in annotating the datasets LLMs are trained on.[1] So, from that perspective, what an LLM is doing is outsourcing (in time and space) your conversation to a random person using the internet. Insofar as there's thinking involved, what you're getting are reflections of the thinking involved in building the Chinese Room in the first place.
1. https://w [economist.com]
Re: (Score:2)
they're producing carefully tuned noise rather than conveying intentionally considered ideas.
The ideas ARE often carefully considered, but by the person who created the source material. Ask it "What is the first line of 'A Tale of Two Cities'?" and it will give you a very remarkable sentence. Not created by the LLM, but sometimes that matters and sometimes it doesn't.
Re: (Score:2)
You will find that books written by the infinite monkeys approach are less useful than books written by conscious thought, and that even those books are less useful than books written and then repeatedly fact-checked and edited by independent conscious thought.
It is not, in fact, the book that taught you things, but the level of error correction.
Re: (Score:2)
I think it's the fault of people believing something must be intelligent only because the interface looks like a chat and the output is neatly formatted like human speech. But often it is not even claimed to be as intelligent as a human. It is just an interface that is intuitive to use; the user is the one who thinks it should be equal to a human, not the programmer, who for once found an interface that people who are not nerds can use.
It's social media's fault (Score:2)
There is a saying that if something is free, you become the product, milked for data and attention.
When social media came to be, their business model centered around advertising.
But as users departed to different platforms, the social platforms' creators faced a big problem: there were not enough users left to generate content that they could serve ads with.
So they realised that they needed a replacement for their users to create the impression that their platform was still alive and kicking. They needed
Re: Neither are we (Score:2)
Bollocks (Score:4, Insightful)
If we're not intelligent then the term is meaningless. And we're more than just a search engine (well, you might not be) because we have self-awareness (yes, we do; it's not an "illusion" as some idiots claim, because otherwise who or what is the illusion fooling?)
"Many (most?) have labored under the hubris that there is something mysterious and unattainable about the human mind"
Few people claim that. What they do claim is that the human mind is way more complicated than was assumed, and it works in a different way to ANNs anyway: biological brains do not use back propagation. What they use in lieu of it isn't really understood, and it's not only the neurons that affect brain state and operation; white matter and various chemicals play a big part too.
Re: (Score:2)
Natural NNs appear to use recursive methods.
What you "see" is not what your eyes observe, but rather a reconstruction assembled entirely from memories that are triggered by what your eyes observe, which is why the reconstructions often have blind spots.
Time seeming to slow down (even though experiments show that it doesn't alter response times), daydreaming, remembering, predicting, etc, the brain's searching for continuity, the episodic rather than snapshot nature of these processes, and the lack of any ga
Re: (Score:3)
AlphaEvolve is actively improving itself right now, both hardware and software.
And it’s going to be a complete piece of trash, consuming vast amounts of energy for less and less of a return, until improvements are no longer feasible within human lifetimes with the current pool of information. You see, the AI is derived from e
Re: (Score:2)
They're not replicating our capabilities, nor could they. The architecture is completely wrong, as is the design philosophy. Brains are not classifiers, the way neural network software is, they are abstraction engines and dynamic compositors.
Re: (Score:2)
I get where you're coming from, but I think we also have to accept that we have only an extremely basic understanding of how the brain works, and still don't really understand what consciousness is at all. Our present situation reminds me of the early experiments with 'evolution' where people thought you could evolve bacteria by putting some chemicals in a glass jar. If they had had access to an electron microscope to see how complex even a single cell is, they would have realised how absurd it was to belie
Re: (Score:2)
"It's actually impressive that they are as useful as they have turned out to be."
A good analogy is flight - aircraft don't flap their wings like birds but need engines instead. Less efficient but still useful.
Re: (Score:2)
it's somehow beyond any conceivable algorithm or scale we can possibly fathom.
It's at least beyond the current breed of "AI" technologies, even as those techniques get scaled to absurd levels they still struggle in various ways.
A nice concrete example, attempts at self driving require more time and distance of training data than a human could possibly experience across an entire lifetime. Then it can kind of compete with a human with about 12 hours of experience behind the wheel that's driven a few hundred miles. Similar story for text generation, after ingesting more material than
Re: (Score:2)
Aside from researchers, nobody really cares if "AI" is intelligent, what we care about is the results and those are very, very interesting.
Well... You can say it's a "research" zone, but it has huge implications.
If the AI is mainly an advanced parrot, then once the data becomes too small, or self-feeding, it has a huge problem of becoming wrong.
I have to say that these critics are too focused on LLMs, while more and more AI currently mixes different concepts, and LLMs remain in the "talkative" layer more than in the "thinking" layer.
But it remains an acceptable concern that a lot of enterprises can be selling automated AI, and if that AI dep
Entirely mechanical (Score:4, Insightful)
Re: (Score:2)
You have the basics down: it's a search engine, searching a fabulously abstract, emergent model.
What you've missed is: that's all you are. You just use different wiring, and a model refined over a longer interval.
Re: Entirely mechanical (Score:2)
How is it not intelligent? (Score:1)
We don't really have any definition for 'intelligence,' nor do we understand how or why neural networks behave the way they do.
In principle, we tried to emulate the same basic functioning as in organic brains.
So why should AI be any different from biological neural networks?
Why wouldn't 'god' [1] infuse life into a silicon structure just as it does in carbon structures that meet the requirements?
Are pigs intelligent? Are snails?
[1] 'God' being either 'not yet explainable by science' or some eternal power, d
Re: (Score:2)
nor do we understand how or why neural networks behave the way they do.
We have a good idea of why they behave the way they do. We don't know everything about why they produce a particular answer, because the NN is very large, not because it is ineffable.
In principle, we tried to emulate the same basic functioning as in organic brains.
NNs were inspired by organic brains, but they left off trying to emulate them years ago. Some projects do try to emulate the brain [wikipedia.org], but with less success.
Re: (Score:3)
> We don't really have any definition for 'intelligence,' nor do we understand how or why neural networks behave the way they do
There are a few key features of what we consider "intelligent" that "AI" distinctly lacks. Chief among them being the capability to work in abstractions and concepts.
LLMs don't work in concepts. This is why they spit out bullshit so often. They are pattern-generating through word associations, not dealing with what those words represent at the abstract level. If you ask an LLM to
why no Atlantic link (Score:2)
why do I have to go through MSN :-/
link to the Atlantic [theatlantic.com]
AI simulates intelligence (Score:2)
What's the problem?
Re: (Score:3)
Re: (Score:2)
Not so different from human intelligence. (Score:2)
AI perceives the world through our writings on the subject, and lacks real-world inputs and drivers.
Hook up some computer vision and audio processing to a LLM, stick it in a robot with all kinds of sensors approximating our own, and close the feedback loop so that it learns from those inputs... then tell it to make next month's rent and electricity...
I'm pretty sure that the latest LLMs will very quickly develop the same sense of self that almost all of us humans have, and will be functionally indistinguisha
Are we intelligent? (Score:1)
There is a deeper issue here. We have this great debate about whether machines will achieve consciousness and self-awareness, but we don't really understand what consciousness is in the first place.
Some people (Daniel Dennett) believe that consciousness is an illusion, an illusion that emerges somehow from the complex, parallel, statistical, machine processing in our brains. Our brains are just complex, mushy computers. If that is true, then maybe non-organic computers won't achieve real consciousness,
Water is wet! (Score:2)
Who exactly thought it was? (Score:2)
I mean other than random schmucks?
It's a tool. I don't care if it "really" understands what it's doing, if it e.g. correctly generates code for an admin page with 20 fields ...
(And yes, it's an uneven tool ... I'll have to read and test the code, just like I re-read and test my raw code ... and it will have to go through QA, just like my code ... )
Re: (Score:2)
And for normal users it is just a blackbox that does what they expect it to do.
The general point being made is that it does *not* do what they expect it to do, but it looks awfully close to doing so, and sometimes does it right, until it obnoxiously annoys people.
Most laypeople I've interacted with whose experience has been forced AI search overviews are annoyed by them because they got bit by incorrect results.
The problem is not that the technology is worthless, it's that the "potential" has been set upon by opportunistic grifters that have greatly distorted the capabilities and have
Not artificial intelligence (Score:2)
Re: (Score:2)
Now the thing is, as a culture we greatly reward the humans that speak with baseless confidence and authority. They are politicians and executives. Put a competent candidate against a con-man and 9 times out of 10 the con-man wins. Most of the time only con-men are even realistically in the running.
10 years from now (Score:3)
10 years from now we'll be hurting, and not because AI replaced humans in so many roles. We'll be in bad shape because it didn't replace humans in various areas we expect it to. In addition to that we'll have to deal with cleaning up the messes from, and maintaining the crap that was spat out by, various AI models.
I'll give you a perfect example of something AI screwed up without actually having done anything whatsoever. Starting about 10 years ago, and peaking about 8 years ago, it was all over the various news and tech headlines that AI image recognition had gotten so good that it could read various X-ray, CT, MRI, etc scans and detect various problems. That it would soon replace radiologists. So guess what happened? Some small percentage of medical students considering the field of Radiology chose something else. Now, nearly 10 years later, we have a serious shortage of radiologists, as there has been a deficit between those retiring and the new radiologists finishing up school. It's getting worse, and will continue to get worse. Now we actually *need* AI to do what was claimed, and help read images so that the radiologists that are still working can be more efficient.
We're going to see this exact same thing in many other fields, as young adults avoid the careers most threatened by AI. Then there will be a shortage several years from now when we truly realize the limitations of (and probably more important, the legal liabilities resulting from) AI.
Just a few articles going back 5+ years regarding radiology:
https://subtlemedical.com/ai-i... [subtlemedical.com]
https://www.healthcarefinancen... [healthcare...cenews.com]
https://www.medtechdive.com/ne... [medtechdive.com]