'AI Is Not Intelligent': The Atlantic Criticizes 'Scam' Underlying the AI Industry (msn.com) 206

The Atlantic makes the case that "the foundation of the AI industry is a scam" and that AI "is not what its developers are selling it as: a new class of thinking — and, soon, feeling — machines." [OpenAI CEO Sam] Altman brags about ChatGPT-4.5's improved "emotional intelligence," which he says makes users feel like they're "talking to a thoughtful person." Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be "smarter than a Nobel Prize winner." Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create "models that are able to understand the world around us." These statements betray a conceptual error: Large language models do not, cannot, and will not "understand" anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.
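
The mechanism the article gestures at is easy to sketch. Below is a toy next-word predictor in Python: a bigram table that picks each word in proportion to how often it followed the previous one in its (made-up) training text. A real LLM replaces the lookup table with a neural network over a long context, but the "statistically informed guess" at the end is the same kind of step.

    import random
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat slept on the mat".split()

    # Count which word follows which in the training text.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_word(prev):
        # Statistically informed guess: sample the next word in proportion
        # to how often it followed `prev` in the corpus.
        words, counts = zip(*follows[prev].items())
        return random.choices(words, weights=counts)[0]

    word, output = "the", ["the"]
    for _ in range(6):
        word = next_word(word)
        output.append(word)
    print(" ".join(output))  # e.g. "the cat sat on the mat and"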
A sociologist and linguist even teamed up for a new book called The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, the article points out: The authors observe that large language models take advantage of the brain's tendency to associate language with thinking: "We encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed."

Several other AI-related social problems, also springing from human misunderstanding of the technology, are looming. The uses of AI that Silicon Valley seems most eager to promote center on replacing human relationships with digital proxies. Consider the ever-expanding universe of AI therapists and AI-therapy adherents, who declare that "ChatGPT is my therapist — it's more qualified than any human could be." Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age....

The good news is that nothing about this is inevitable: According to a study released in April by the Pew Research Center, although 56 percent of "AI experts" think artificial intelligence will make the United States better, only 17 percent of American adults think so. If many Americans don't quite understand how artificial "intelligence" works, they also certainly don't trust it. This suspicion, no doubt provoked by recent examples of Silicon Valley con artistry, is something to build on.... If people understand what large language models are and are not; what they can and cannot do; what work, interactions, and parts of life they should — and should not — replace, they may be spared its worst consequences.

  • Frenetic churn (Score:5, Insightful)

    by sinkskinkshrieks ( 6952954 ) on Monday June 09, 2025 @03:48AM (#65436941)
    From the utopian "we don't need software developers anymore" to "we really need software developers now because no one understands how to do anything anymore": from one extreme to the other, which is incompatible with stable employment or investment by anyone. Perhaps instead of chasing hype with such gusto, folks should realize "AI" is basically glorified tab-completion prone to hallucinating incorrect results like a college intern trying to BS their way through life.
    • by burtosis ( 1124179 ) on Monday June 09, 2025 @05:04AM (#65437003)

      like a college intern trying to BS their way through life.

      Look, a Bachelor of Science is a perfectly acceptable degree; not everyone is suited to a Master's or PhD. Most of our scientific workforce gets through life just fine with BS degrees. Oh, maybe you meant the other kind of BS getting people through life.

    • by jd ( 1658 )

      You are correct.

      When it comes to basic facts, if multiple AIs with independent internal structures and independent training sets state the same claim as fact, that's good evidence it's probably not a hallucination but something actively learned. It's still not remotely close to evidence that the claim is true.

      Because AIs have no understanding of semantics, only association, that's about as good as AI gets.

  • by devslash0 ( 4203435 ) on Monday June 09, 2025 @04:22AM (#65436963)

    There is a saying that if something is free, you become the product, milked for data and attention.

    When social media platforms came to be, their business model centered on advertising.

    But as users departed to different platforms, the platforms' creators faced a big problem: there were not enough users left to generate the content they could serve ads against.

    So they realised that they needed a replacement for their users, to create the impression that their platform was still alive and kicking. They needed generated content so that they could stitch advertising in between fake posts.

    And so they turned to GenAI.

    First posts, of course; but now it seems posts themselves are not enough, so they are talking about generating AI friends.

    It's all just desperation, trying to save themselves from going bust.

    • by allo ( 1728082 )

      That's bullshit. Generative AI came out of science (language modeling, dimensionality reduction, etc.); social media companies are late adopters, not the creators.

  • by DrXym ( 126579 ) on Monday June 09, 2025 @04:57AM (#65436991)
    Almost every LLM works like this: given this series of input tokens, produce a list of potential next tokens, choose one with some random weighting, append it to the list of input tokens, rinse, repeat. With sufficient "parameters" (nodes trained on a sufficiently large body of input), the AI looks like it is generating meaningful output, whereas it's practically a mechanical process.
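
    A minimal sketch of that loop, with a toy stand-in for the model (a real LLM computes the next-token distribution with a neural network conditioned on the whole context; the fixed distribution below is made up for illustration):

      import random

      def next_token_probs(tokens):
          # Toy stand-in for the model: return (token, probability) pairs for
          # the next token. A real LLM produces a softmax over ~100k vocabulary
          # entries, conditioned on everything in `tokens`.
          return [("the", 0.5), ("cat", 0.3), ("<eos>", 0.2)]

      def generate(prompt_tokens, max_new=20, eos="<eos>"):
          tokens = list(prompt_tokens)
          for _ in range(max_new):
              candidates = next_token_probs(tokens)            # list of potential tokens
              toks, probs = zip(*candidates)
              choice = random.choices(toks, weights=probs)[0]  # weighted random pick
              if choice == eos:
                  break
              tokens.append(choice)                            # append, rinse, repeat
          return tokens

      print(generate(["once", "upon"]))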
    • Re: (Score:3, Insightful)

      by Tailhook ( 98486 )

      You have the basics down: it's a search engine, searching a fabulously abstract, emergent model.

      What you've missed is: that's all you are. You just use different wiring, and a model refined over a longer interval.

      • by simlox ( 6576120 ) on Monday June 09, 2025 @05:27AM (#65437021)
        Our wiring is based on the physical world around us, including complex social interactions. That is far more detailed and complicated than what the LLMs are trained on. Plus, we are born with a lot of pre-training, or whatever evolution gave us.
      • by DrXym ( 126579 )
        Erm, no, because humans reason, i.e. feed scenarios into their thought processes and evaluate outcomes. And they are affected by a far greater variety of inputs and a wider scope of context than just some sentence. An LLM is basically a crank handle: same input tokens, exact same output. LLMs attempt to mitigate this with randomization of output (e.g. picking a token randomly based on statistical likelihood), but it's a simulacrum, nothing more.
      • by leptons ( 891340 )
        If you think that's all that humans are, then you need your head examined. You've drunk too much of the AI-hype Kool-Aid.
    • Tests by Anthropic show they are doing more than that. If you ask them to write poetry, they do some pre-planning so that the words rhyme. They already exhibit some theory of mind, like being able to imagine how a third-party observer might think or feel about a situation they are watching. As they get better and faster, we are seeing emergent properties that go far beyond a simple mechanistic process.

      I'm not suggesting they are becoming human or that they ever will. But they are showing that intelligence

      • by DrXym ( 126579 )
        They really aren't doing more than I said. LLMs are trained such that, given any set of input tokens, they will deterministically produce the exact same set of outputs. To mix things up, models use a "temperature" parameter that randomly selects the next token from the list of most likely outputs, so the output appears less deterministic than it really is. If the temperature is too high, or the model is insufficiently trained, the response is garbage. If the temperature is too low, the response is boring.
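
        A sketch of what the temperature parameter does, with made-up logits (a real model emits one logit per vocabulary entry):

          import math, random

          def sample_with_temperature(logits, temperature):
              # Divide logits by the temperature, then softmax into probabilities.
              # Low temperature sharpens the distribution toward the single most
              # likely token (deterministic, boring); high temperature flattens
              # it toward uniform (garbage).
              scaled = {tok: l / temperature for tok, l in logits.items()}
              m = max(scaled.values())
              weights = {tok: math.exp(s - m) for tok, s in scaled.items()}
              toks = list(weights)
              return random.choices(toks, weights=[weights[t] for t in toks])[0]

          logits = {"cat": 2.0, "dog": 1.0, "teapot": -1.0}
          print(sample_with_temperature(logits, 0.1))  # almost always "cat"
          print(sample_with_temperature(logits, 5.0))  # nearly a three-way coin flip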
    • by SoftwareArtist ( 1472499 ) on Monday June 09, 2025 @11:17AM (#65437585)

      That's a pretty good description of human intelligence too. We like to imagine that all our actions are the result of rational thought and high-level reasoning, but it's a delusion. Humans are deeply irrational. Most of what we do in the course of a day is just mechanical and reflexive: predicting patterns and executing stored procedures. We're capable of higher-level thought too, but it takes work, and it's not the default mode for most of what we do.

      LLMs are a lot more similar to human intelligence than you think. Not because they're more intelligent than you think, but because humans are less.

    • With sufficient "parameters" (nodes trained on a sufficiently large body of input), the AI looks like it is generating meaningful output, whereas it's practically a mechanical process.

      A mechanical process can't produce meaningful output?
      And we do not even fully understand how AI does what it does (i.e. how it organizes data and connections), hence AI's "black box", so our judgement is likely a bit premature.

      Many people tend to over-anthropomorphize AI... but at the other extreme, many (older) tech folks try to place AI in a box that's limited by their own understanding and experience: they're familiar with the color blue, so they say "AI is this shade of blue".

  • why no Atlantic link (Score:3, Informative)

    by dr_blurb ( 676176 ) on Monday June 09, 2025 @06:15AM (#65437049)

    Why do I have to go through MSN? :-/

    link to the Atlantic [theatlantic.com]

  • AI simulates an intelligent response to any question. Much like most of humanity.

    What's the problem?
    • by chas.williams ( 6256556 ) on Monday June 09, 2025 @07:00AM (#65437091)
      AI creates a statistically likely response. That doesn't make it intelligent.
      • by piojo ( 995934 )

        AI creates a statistically likely response. That doesn't make it intelligent.

        No, the fact that it (sometimes) answers hard questions correctly makes it (somewhat) intelligent.

        Its problem is that it doesn't seem to analyze its learning like humans do, so it seems not to be capable of saying it doesn't know something. It also seems to take marketing materials as seriously as it takes textbooks. But if these problems are fixed, there will probably still be components that operate based on statistics. As an aside, I'm reasonably sure you order and choose your words based on statistics.

        • by taustin ( 171655 )

          AI creates a statistically likely response. That doesn't make it intelligent.

          No, the fact that it (sometimes) answers hard questions correctly makes it (somewhat) intelligent.

          If you roll a casino die and expect a six, it will occasionally provide a six. But that's not intelligence either. When the output is, essentially, random, it will occasionally be correct. Stopped clocks being right twice a day, and all.

          That is not intelligence. That is random chance.

      • AI creates a statistically likely response...

        Keep asking why you do something. As you dig through the thick layers of rationalization, you will likely see that at the bottom is a response trained to statistically favor a certain emotional outcome.
        We are less intelligent than we'd like to believe. That explains a lot about humanity.

        AI is not there yet, but it is getting some parts right, and getting better. My inner language model is definitely not as good as ChatGPT.
      • by allo ( 1728082 )

        But it makes it useful.
        If I cannot distinguish the AI's response from your response, it doesn't matter whether the AI (or you) has intelligence. If the response is correct, it is useful.

    • by piojo ( 995934 )

      AI simulates an intelligent response to any question. Much like most of humanity.

      Also, a simulation of intelligence is the same thing as intelligence. Intelligence is a kind of competence. It's not rocket science to test this. To everybody whining about computer programs not being intelligent: the lady doth protest too much, methinks.

  • AIs perceive the world through our writings on the subject, and lack real-world inputs and drivers.

    Hook up some computer vision and audio processing to an LLM, stick it in a robot with all kinds of sensors approximating our own, and close the feedback loop so that it learns from those inputs... then tell it to make next month's rent and electricity...

    I'm pretty sure that the latest LLMs will very quickly develop the same sense of self that almost all of us humans have, and will be functionally indistinguishable from a human.

  • And other hot takes from the Atlantic, now available. Where was the Atlantic about two years ago?
  • I mean other than random schmucks?

    It's a tool. I don't care if it "really" understands what it's doing, if it e.g. correctly generates code for an admin page with 20 fields ...

    (And yes, it's an uneven tool ... I'll have to read and test the code, just like I re-read and test my raw code ... and it will have to go through QA, just like my code ... )

    • It's a tool.

      Yeah but so is my prime minister[*]. Are you saying that means he's not intelli... oh never mind I think I just answered my own question there.

  • by RobinH ( 124750 ) on Monday June 09, 2025 @07:29AM (#65437133) Homepage
    I've realized that LLMs are not "artificial intelligence." They frequently make incorrect statements that exude confidence and authority. They're "artificial bullshitters."
    • by Junta ( 36770 )

      Now the thing is, as a culture we greatly reward the humans that speak with baseless confidence and authority. They are politicians and executives. Put a competent candidate against a con-man and 9 times out of 10 the con-man wins. Most of the time only con-men are even realistically in the running.

      • by RobinH ( 124750 )
        There are professions where your job is mostly bullshitting: politician, journalist (specifically writing editorials), sales, fiction author, HR, middle manager, etc. Those jobs are going to be impacted, or in some cases replaced, by LLMs. But those people (in particular the journalists who write these articles) seem to think that everyone else's job involves just as much bullshitting as theirs. But there are lots of jobs where being correct matters: engineering, technicians, warfighting, surgeons, construction.
  • 10 years from now (Score:5, Insightful)

    by Dan East ( 318230 ) on Monday June 09, 2025 @07:30AM (#65437135) Journal

    10 years from now we'll be hurting, and not because AI replaced humans in so many roles. We'll be in bad shape because it didn't replace humans in various areas we expect it to. In addition to that we'll have to deal with cleaning up the messes from, and maintaining the crap that was spat out by, various AI models.

    I'll give you a perfect example of something AI screwed up without actually having done anything whatsoever. Starting about 10 years ago, and peaking about 8 years ago, it was all over the various news and tech headlines that AI image recognition had gotten so good that it could read various X-ray, CT, MRI, etc scans and detect various problems. That it would soon replace radiologists. So guess what happened? Some small percentage of medical students considering the field of Radiology chose something else. Now, nearly 10 years later, we have a serious shortage of radiologists, as there has been a deficit between those retiring and the new radiologists finishing up school. It's getting worse, and will continue to get worse. Now we actually *need* AI to do what was claimed, and help read images so that the radiologists that are still working can be more efficient.

    We're going to see this exact same thing in many other fields, as young adults avoid the careers most threatened by AI. Then there will be a shortage several years from now, when we truly realize the limitations of (and probably more important, the legal liabilities resulting from) AI.

    Just a few articles going back 5+ years regarding radiology:
    https://subtlemedical.com/ai-i... [subtlemedical.com]
    https://www.healthcarefinancen... [healthcare...cenews.com]
    https://www.medtechdive.com/ne... [medtechdive.com]

    • Uhm, yeah, sure. A very, very, very small percentage of medical students didn't enter the field of radiology because of fears of AI. And now that very, very small percentage has somehow turned into a serious shortage of radiologists. Sure.

      • by sinij ( 911942 )
        I would say a significant number of students entering medical education are rational. When considering lifetime earnings, the risk of getting your wages suppressed by AI automation would absolutely be part of a rational decision-making process when selecting a specialization. Radiology would get de-prioritized because of these risks, with the candidates who do enter the field ending up lower quality (poor decision-making, or poor merit leaving them with limited choices).
    • by sinij ( 911942 )

      We'll be in bad shape because it didn't replace humans in various areas we expect it to.

      Additionally, demographics (boomers retiring) will put additional stress on the system. In my field I am already seeing experts retire and not get replaced, as there is no talent pipeline of younger people.

      The next 10 years will be rough even without AI.

  • The reason AIs are coherent at all is that they're trained on input that did come from intelligent minds. The AIs ingest and rearrange the text. The mind we imagine behind the text should be a collective mind, not an individual one. Such a collective may abstractly care about me, but it does not know me specifically and does not care about me specifically. Mentally inserting the word "probably" at the start of every AI output sentence will help us all calibrate how much we can lean on the output.
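
    That trick is easy to automate; a throwaway sketch (the sentence-splitting regex is naive, and the hedge() helper is my own invention):

      import re

      def hedge(text):
          # Prepend "Probably" to each sentence of model output before reading it.
          sentences = re.split(r"(?<=[.!?])\s+", text.strip())
          return " ".join("Probably " + s[0].lower() + s[1:] for s in sentences if s)

      print(hedge("The moon is dry. It formed long ago."))
      # Probably the moon is dry. Probably it formed long ago.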

  • Right after quoting the two book authors who criticize AI as a scam:

    To call AI a con isn’t to say that the technology is not remarkable, that it has no use, or that it will not transform the world (perhaps for the better) in the right hands.

    https://www.msn.com/en-us/tech... [msn.com].

    In other words, the article, and the book, are being pedantic and making the point that the words marketers use to tout their AI products are overblown. Duh! That's what all marketers do! The article's author is *not* claiming that the technology is not useful.

    I would suggest that the author of the article is mincing words to make a dramatic headline that will...draw clicks.

  • ... recent examples of Silicon Valley con artistry ...

    P.T. Barnum said "The common man, no matter how sharp and tough, actually enjoys having the wool pulled over his eyes, and makes it easier for the puller." I think there's a lot of wishful thinking and "imagine if..." longing that has allowed a sizable chunk of even the purveyors of AI to pull the wool over their own eyes.

    That's not to say there isn't a whole lot of wilful and calculated scammery going on in Silly Valley. But I think some of them feel that if they wish and hope and shill hard enough, it will come true.

  • A sports journalist for the Washington Post engages with an LLM to discuss articles she herself had written, and is appalled by the number of errors. When she confronts it over its error-laden responses, it meekly apologizes but doesn't get any better. After the umpteenth smarmy apology, the author begins to suspect that the LLM is actually malevolent. The entire "conversation" is laid out for all to see.

    Infuriating to read if you know anything about what an LLM is and how it works.

    https:/ [washingtonpost.com]

  • It has held up for decades now. LLMs are everyone's boogeyman right now, but it's general AI that will be the groundbreaker.
    https://en.m.wikipedia.org/wik... [wikipedia.org]

  • Sounded like a pop or a thud or something. And why is it getting so cold right now?

  • by umopapisdn69 ( 6522384 ) on Monday June 09, 2025 @10:59AM (#65437547)

    Current LLM AIs reveal how much of what we consider "intelligence" is really "just" about language — its ability to encode, transmit, and instruct the processing of information. LLMs function as powerful interpreters of human language, ultimately translating it into machine language processed through computational logic. This appearance of intelligence largely stems from their ability to retrieve and statistically transform vast amounts of indexed content in response to human prompts.

    But rather than demonstrating that machines are becoming human, this instead reveals how much of what we thought was uniquely human is actually mechanical capability performed by humans.

    So what remains that is uniquely human intelligence, beyond the mechanical?
    What are we beyond the language we think in and the actions we perform?

    ------
    And what rough beast, its hour come round at last, slouches towards Bethlehem to be born?

  • ... another big hole into which Private Equity and Private Credit firms can shovel cash. Just to keep the funny money machines running a little while longer so the whole mess doesn't collapse like it did in 2008.

    Heaven forbid that with all this capital sloshing around, someone might actually use it to build a factory and produce goods in the USA.

  • Or rather, it's an umbrella term for a bunch of different technologies, ranging from deterministic programs to neural networks. The Atlantic is thick and does not understand the fundamentals. I happen to have a degree in AI.

    The truth is your "Robot Vacuum" is just a bunch of sensors and some deterministic programming. The code is not that different from what we experience in video games. Just in this case it's powering a physical object, not an NPC. But put those sensors in a bipedal robot and connect it to a cloud-based

  • Is AI getting pretty good at repetitive tasks? Yes. However, those tasks are not what most people call intelligence. In all the time I've been around, the best (IMO) description of "intelligence" I've heard is "problem-solving ability." At this, I've not seen AI become particularly good. It can be quite good at collecting, collating, and presenting analysis of large data sets, but I've not encountered it being good at deciding what to do with that information.
  • You could tell it: "do not infer references, or create them. ONLY use actual references".
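
    A sketch of wiring that in as a standing instruction, assuming the OpenAI Python SDK (the model name is illustrative, and instructions like this reduce, but don't eliminate, invented citations):

      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      response = client.chat.completions.create(
          model="gpt-4o-mini",  # illustrative; any chat model works the same way
          messages=[
              # The system message carries the standing instruction.
              {"role": "system",
               "content": "Do not infer references, or create them. "
                          "ONLY use actual references from the supplied material."},
              {"role": "user",
               "content": "Summarize this article and list its references."},
          ],
      )
      print(response.choices[0].message.content)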

"Show me a good loser, and I'll show you a loser." -- Vince Lombardi, football coach

Working...