'AI Is Not Intelligent': The Atlantic Criticizes 'Scam' Underlying the AI Industry (msn.com) 57

The Atlantic makes the case that "the foundation of the AI industry is a scam" and that AI "is not what its developers are selling it as: a new class of thinking — and, soon, feeling — machines." [OpenAI CEO Sam] Altman brags about ChatGPT-4.5's improved "emotional intelligence," which he says makes users feel like they're "talking to a thoughtful person." Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be "smarter than a Nobel Prize winner." Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create "models that are able to understand the world around us." These statements betray a conceptual error: Large language models do not, cannot, and will not "understand" anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.
A sociologist and linguist even teamed up for a new book called The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, the article points out: The authors observe that large language models take advantage of the brain's tendency to associate language with thinking: "We encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed."

Several other AI-related social problems, also springing from human misunderstanding of the technology, are looming. The uses of AI that Silicon Valley seems most eager to promote center on replacing human relationships with digital proxies. Consider the ever-expanding universe of AI therapists and AI-therapy adherents, who declare that "ChatGPT is my therapist — it's more qualified than any human could be." Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age....

The good news is that nothing about this is inevitable: According to a study released in April by the Pew Research Center, although 56 percent of "AI experts" think artificial intelligence will make the United States better, only 17 percent of American adults think so. If many Americans don't quite understand how artificial "intelligence" works, they also certainly don't trust it. This suspicion, no doubt provoked by recent examples of Silicon Valley con artistry, is something to build on.... If people understand what large language models are and are not; what they can and cannot do; what work, interactions, and parts of life they should — and should not — replace, they may be spared its worst consequences.


Comments Filter:
  • Frenetic churn (Score:5, Insightful)

    by sinkskinkshrieks ( 6952954 ) on Monday June 09, 2025 @03:48AM (#65436941)
    From the whole utopian "we don't need software developers anymore" to "we really need software developers now because no one understands how to do anything anymore": from one extreme to the other, which is incompatible with stable employment or investment by anyone. Perhaps instead of chasing hype with such gusto, folks should realize "AI" is basically glorified tab-completion, prone to hallucinating incorrect results like a college intern trying to BS their way through life.
    • like a college intern trying to BS their way through life.

      Look, a Bachelor of Science is a perfectly acceptable degree; not everyone is suited to a Master's or PhD. Most of our scientific workforce has just fine BS degrees helping them through life. Oh, maybe you meant the other kind of BS getting people through life.

      • by Anonymous Coward

        All degrees are BS; it's not the degree but the social class you belong to. Rich people get richer, poor people get poorer. Welcome to our corrupt, irresponsible, and unethical classist society.

        • Re:Frenetic churn (Score:4, Insightful)

          by burtosis ( 1124179 ) on Monday June 09, 2025 @07:08AM (#65437107)

          All degrees are BS; it's not the degree but the social class you belong to. Rich people get richer, poor people get poorer. Welcome to our corrupt, irresponsible, and unethical classist society.

          When you allow nepotism and favoritism, or even try to pick genders, races, and similar groups by forcing equal adoption on people, it never works out. Equality of opportunity is what solves this. I’ve been in charge of hiring and seen supposed mechanical-engineering majors with a PhD and 10 years’ experience fail to explain, with high-school-physics simplicity, how a hammer works. I’ve always hired on ability, not degrees, because I was quite financially invested in the success of the company.

    • by jd ( 1658 )

      You are correct.

      When it comes to basic facts, if multiple AIs that have independent internal structure and independent training sets state the same claim as a fact, then that's good evidence that it's probably not a hallucination but something actively learned, but it's not remotely close to evidence of it being fact.

      Because AIs have no understanding of semantics, only association, that's about as good as AI gets.
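      That cross-check idea could be sketched as follows. This is a toy illustration with hypothetical hard-coded answers standing in for real model queries; the point is only the voting logic, and note the caveat in the docstring:

```python
from collections import Counter

def cross_check(answers):
    """Given answers from independently trained models, report the
    majority claim and how many models agree. Agreement suggests the
    claim was learned from training data rather than hallucinated by
    one model; it is NOT evidence that the claim is actually true."""
    counts = Counter(answers.values())
    claim, votes = counts.most_common(1)[0]
    return claim, votes, len(answers)

# Hypothetical answers from three separately trained models:
answers = {"model_a": "Paris", "model_b": "Paris", "model_c": "Lyon"}
claim, votes, total = cross_check(answers)
print(f"{claim!r} supported by {votes}/{total} models")
```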

  • Books (Score:4, Interesting)

    by dpille ( 547949 ) on Monday June 09, 2025 @03:58AM (#65436951)
    Books aren't intelligent, either. I guess it's just my tendency to equate language with thought that has made me mistakenly believe books have taught me things and saved me time.
    • No one is saying AI isn't useful; the argument is whether it's intelligent in the human sense. The point is it doesn't need to be, and it doesn't matter anyway - all that matters is whether it gives useful output that would be difficult or impossible to reproduce with conventional single-level programming (i.e. writing code to solve the problem directly, not using a learning simulation of neurons).

      • Said like a person who isn't investing billions of dollars in it to replace employees.
      • ... all that matters is whether it gives useful output that would be difficult or impossible to reproduce with conventional single level (...) programming

        The problem is that in too much of the end-user space, the answer to that question is: no, it does not. Indeed, quite a lot of its outputs are at best of limited or no use, and in too many cases actively harmful. This problem is only going to get worse if those models are permitted to ingest outputs produced by the first generation, hastening model collapse.

          The problem is that in too much of the end-user space, the answer to that question is: no, it does not. Indeed, quite a lot of its outputs are at best of limited or no use, and in too many cases actively harmful. This problem is only going to get worse if those models are permitted to ingest outputs produced by the first generation, hastening model collapse, like constantly adding sewage to the water supply. At some point it becomes undrinkable and poisonous.

          I've been saying that for a while now. AI will soon reference itself, and at that point hallucinations will become truth. And if, as its cult members say, we eliminate the need for educated people because "the next generation of artificial intelligence will be 'smarter than a Nobel Prize winner,'" then of what use are these bags of goo?

          No, it won't be smarter. But we'll definitely be less smart.

          I mean that seriously. My use of AI so far, has had it generate some answers on a few queries in fields I'm not

    • Re:Books (Score:5, Insightful)

      by locater16 ( 2326718 ) on Monday June 09, 2025 @04:23AM (#65436965)
      You accidentally hit the nail on the head: Transformers are search engines; they're a cool form of book. The mathematical model they started from was designed from the ground up for translating languages, putting an enormous translation book into a super convenient form factor. Unfortunately, the current "AI" industry claims to sell machines that will do the thinking for you, rather than a book that might teach you things to think about. So the AI boom is a lie, and they don't care, because they're currently making money off that lie.
      • Sure, if the books are often quite incoherent and make things up. But books that teach are devoid of such things, if they are worth anything at actually conveying proper information. AI is more like buying a science textbook from a religious institution with an agenda.
    • Re:Books (Score:4, Insightful)

      by pjt33 ( 739471 ) on Monday June 09, 2025 @04:55AM (#65436989)

      No, it's your tendency to focus on the immediately perceived object rather than its cause. The intelligent authors of many books have taught you things using the books as an instrument. Observe the difference with LLMs: unless they're really mechanical Turks (and examples of that kind of "fake it until you make it" have been observed and will probably continue to be observed), they're producing carefully tuned noise rather than conveying intentionally considered ideas.

      • I was under the impression that most LLMs are mechanical turks - not in an immediate sense, but in the sense that a lot of workers around the world were involved in annotating the datasets LLMs are trained on.[1] So, from that perspective, what an LLM is doing is outsourcing (in time and space) your conversation to a random person using the internet. Insofar as there's thinking involved, what you're getting are reflections of the thinking involved in building the Chinese Room in the first place.

        1. https://w [economist.com]

      • they're producing carefully tuned noise rather than conveying intentionally considered ideas.

        The ideas ARE often carefully considered, but by the person who created the source material. Ask it "What is the first line of A Tale of Two Cities?" and it will give you a very remarkable sentence. Not created by the LLM; sometimes that matters, sometimes it doesn't.

    • by jd ( 1658 )

      You will find that books written by the infinite monkeys approach are less useful than books written by conscious thought, and that even those books are less useful than books written and then repeatedly fact-checked and edited by independent conscious thought.

      It is not, in fact, the book that taught you things, but the level of error correction.

    • by allo ( 1728082 )

      I think it's the fault of people believing something must be intelligent only because the interface looks like a chat and the output is neatly formatted like human speech. But often it is not even claimed that it should be as intelligent as a human. It is just an interface that is intuitive to use; the user is the one who assumes it should be equal to a human, not the programmer, who for once found an interface that non-nerds can use.

  • There is a saying that if something is free, you become the product; milked for data and attention.

    When social media came to be, their business model centered around advertising.

    But as users departed to different platforms, the social platforms' creators faced a big problem - there was not enough users left around to generate content that they could serve ads with.

    So they realised that they needed a replacement for their users, to make an impression that their platform was still alive and kicking. They needed

  • by DrXym ( 126579 ) on Monday June 09, 2025 @04:57AM (#65436991)
    Almost every LLM works like this: given this series of input tokens, give me a list of the potential next tokens; choose one with some random weighting; append it to the list of input tokens; rinse, repeat. With sufficient "parameters", or nodes, trained on a sufficient body of input, it makes the AI look like it is generating meaningful output, whereas it's practically a mechanical process.
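    That loop can be sketched in a few lines. This is a toy next-token table with made-up words and probabilities, not a real model (a real LLM conditions on the whole sequence, not just the last token, and its distribution comes from a trained network):

```python
import random

# Hypothetical next-token table: maps the last token to candidate
# next tokens with probabilities. Purely illustrative.
MODEL = {
    "the": [("cat", 0.5), ("dog", 0.3), ("<end>", 0.2)],
    "cat": [("sat", 0.7), ("ran", 0.3)],
    "dog": [("barked", 0.6), ("sat", 0.4)],
    "sat": [("<end>", 1.0)],
    "ran": [("<end>", 1.0)],
    "barked": [("<end>", 1.0)],
}

def generate(start, max_tokens=10, seed=None):
    rng = random.Random(seed)
    tokens = [start]
    for _ in range(max_tokens):
        candidates = MODEL.get(tokens[-1])
        if not candidates:
            break
        words, weights = zip(*candidates)
        nxt = rng.choices(words, weights=weights)[0]  # weighted random pick
        if nxt == "<end>":
            break
        tokens.append(nxt)  # append chosen token; rinse, repeat
    return " ".join(tokens)

print(generate("the", seed=1))
```

    The entire "generation" step is the weighted random pick; everything that looks like meaning is already baked into the probability table.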
    • by Tailhook ( 98486 )

      You have the basics down: it's a search engine, searching a fabulously abstract, emergent model.

      What you've missed is: that's all you are. You just use different wiring, and a model refined over a longer interval.

      • Our wiring is based on the physical world around us, including complex social interactions. That is way more detailed and complicated than what the LLMs are trained on. Plus, we are born with a lot of pre-training, or whatever evolution gave us.
  • We don't really have any definition for 'intelligence,' nor do we understand how or why neural networks behave the way they do.

    In principle, we tried to emulate the same basic functioning as in organic brains.

    So why should AI be any different from biological neural networks?

    Why wouldn't 'god' [1] infuse life into a silicon structure just as it does in carbon structures that meet the requirements?

    Are pigs intelligent? Are snails?

    [1] 'God' being either 'not yet explainable by science' or some eternal power, d

    • nor do we understand how or why neural networks behave the way they do.

      We have a good idea of why they behave the way they do. We don't know everything about why they produce a particular answer, because the NN is very large, not because it is ineffable.

      In principle, we tried to emulate the same basic functioning as in organic brains.

      NNs were inspired by organic brains, but they left off trying to emulate them years ago. Some projects do try to emulate the brain [wikipedia.org], but with less success.

    • > We don't really have any definition for 'intelligence,' nor do we understand how or why neural networks behave the way they do

      There are a few key features of what we consider "intelligent" that "AI" distinctly lacks. Chief among them being the capability to work in abstractions and concepts.

      LLMs don't work in concepts. This is why they spit out bullshit so often. They are pattern-generating through word associations, not dealing with what those words represent at the abstract level. If you ask an LLM to

  • why do I have to go through MSN :-/

    link to the Atlantic [theatlantic.com]

  • AI simulates an intelligent response to any question. Much like most of humanity.

    What's the problem?
  • AI perceives the world through our writings on the subject, and lacks real-world inputs and drivers.

    Hook up some computer vision and audio processing to a LLM, stick it in a robot with all kinds of sensors approximating our own, and close the feedback loop so that it learns from those inputs... then tell it to make next month's rent and electricity...

    I'm pretty sure that the latest LLMs will very quickly develop the same sense of self that almost all of us humans have, and will be functionally indistinguisha

  • There is a deeper issue here. We have this great debate about whether machines will achieve consciousness and self-awareness, but we don't really understand what consciousness is in the first place.
    Some people (Daniel Dennett) believe that consciousness is an illusion, an illusion that emerges somehow from the complex, parallel, statistical, machine processing in our brains. Our brains are just complex, mushy computers. If that is true, then maybe non-organic computers won't achieve real consciousness,

  • And other hot takes from the Atlantic, now available. Where was the Atlantic about two years ago?
  • I mean other than random schmucks?

    It's a tool. I don't care if it "really" understands what it's doing, if it e.g. correctly generates code for an admin page with 20 fields ...

    (And yes, it's an uneven tool ... I'll have to read and test the code, just like I re-read and test my raw code ... and it will have to go through QA, just like my code ... )

  • I've realized that LLMs are not "artificial intelligence." They frequently make incorrect statements that exude confidence and authority. They're "artificial bullshitters."
    • by Junta ( 36770 )

      Now the thing is, as a culture we greatly reward the humans that speak with baseless confidence and authority. They are politicians and executives. Put a competent candidate against a con-man and 9 times out of 10 the con-man wins. Most of the time only con-men are even realistically in the running.

  • by Dan East ( 318230 ) on Monday June 09, 2025 @07:30AM (#65437135) Journal

    10 years from now we'll be hurting, and not because AI replaced humans in so many roles. We'll be in bad shape because it didn't replace humans in various areas we expect it to. In addition to that we'll have to deal with cleaning up the messes from, and maintaining the crap that was spat out by, various AI models.

    I'll give you a perfect example of something AI screwed up without actually having done anything whatsoever. Starting about 10 years ago, and peaking about 8 years ago, it was all over the various news and tech headlines that AI image recognition had gotten so good that it could read various X-ray, CT, MRI, etc scans and detect various problems. That it would soon replace radiologists. So guess what happened? Some small percentage of medical students considering the field of Radiology chose something else. Now, nearly 10 years later, we have a serious shortage of radiologists, as there has been a deficit between those retiring and the new radiologists finishing up school. It's getting worse, and will continue to get worse. Now we actually *need* AI to do what was claimed, and help read images so that the radiologists that are still working can be more efficient.

    We're going to see this exact same thing in many other fields, as young adults avoid the careers most threatened by AI. Then there will be a shortage several years from now, when we truly realize the limitations of (and probably more important, the legal liabilities resulting from) AI.

    Just a few articles going back 5+ years regarding radiology:
    https://subtlemedical.com/ai-i... [subtlemedical.com]
    https://www.healthcarefinancen... [healthcare...cenews.com]
    https://www.medtechdive.com/ne... [medtechdive.com]
