
'AI Is Not Intelligent': The Atlantic Criticizes 'Scam' Underlying the AI Industry (msn.com)
The Atlantic makes the case that "the foundation of the AI industry is a scam" and that AI "is not what its developers are selling it as: a new class of thinking — and, soon, feeling — machines."
[OpenAI CEO Sam] Altman brags about ChatGPT-4.5's improved "emotional intelligence," which he says makes users feel like they're "talking to a thoughtful person." Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be "smarter than a Nobel Prize winner." Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create "models that are able to understand the world around us." These statements betray a conceptual error: Large language models do not, cannot, and will not "understand" anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.
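As a toy sketch of that "probability gadget" idea (a contrived ten-word corpus, nothing like a real transformer-based LLM), here is a bigram model that picks each next word purely from observed frequencies, with no representation of meaning anywhere:

    import random
    from collections import Counter, defaultdict

    # Hypothetical toy corpus; real models train on trillions of tokens.
    corpus = "the cat sat on the mat and the cat slept".split()

    # Count how often each word follows each other word.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def next_word(word):
        # Sample the next word in proportion to how often it followed `word`.
        counts = following[word]
        if not counts:  # dead end in the toy corpus: fall back to "the"
            return "the"
        words, weights = zip(*counts.items())
        return random.choices(words, weights=weights)[0]

    word = "the"
    for _ in range(5):
        print(word, end=" ")
        word = next_word(word)

The output (e.g. "the cat sat on the") is grammatical-looking text produced by nothing but counting; scale the counts up by many orders of magnitude and you get the fluency the CEOs are describing as understanding.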
A sociologist and linguist even teamed up for a new book called The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, the article points out: The authors observe that large language models take advantage of the brain's tendency to associate language with thinking: "We encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed."
Several other AI-related social problems, also springing from human misunderstanding of the technology, are looming. The uses of AI that Silicon Valley seems most eager to promote center on replacing human relationships with digital proxies. Consider the ever-expanding universe of AI therapists and AI-therapy adherents, who declare that "ChatGPT is my therapist — it's more qualified than any human could be." Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age....
The good news is that nothing about this is inevitable: According to a study released in April by the Pew Research Center, although 56 percent of "AI experts" think artificial intelligence will make the United States better, only 17 percent of American adults think so. If many Americans don't quite understand how artificial "intelligence" works, they also certainly don't trust it. This suspicion, no doubt provoked by recent examples of Silicon Valley con artistry, is something to build on.... If people understand what large language models are and are not; what they can and cannot do; what work, interactions, and parts of life they should — and should not — replace, they may be spared its worst consequences.
Frenetic churn (Score:5, Insightful)
Re:Frenetic churn (Score:5, Funny)
like a college intern trying to BS their way through life.
Look, a Bachelor of Science is a perfectly acceptable degree; not everyone is suited to a Masters or PhD. Most of our scientific workforce have perfectly fine BS degrees helping them through life. Oh, maybe you meant the other kind of BS degree getting people through life.
Re: (Score:2)
In the US, I've only ever seen it abbreviated as BS.
Re: (Score:2)
In the US, I've only ever seen it abbreviated as BS.
Yep, and it's been a recurring source of humor here for decades.
Re: (Score:2)
Especially when it's Piled Higher and Deeper.
Re: (Score:2)
Oh? I've *always* written it as B.Sc. And I got mine in the last millennium, so I've been saying it a long time.
Re: (Score:2)
Obviously a joke, but it doesn't really work when Bachelor of Science is never abbreviated to BS, but to BSc.
If I had a rubber for slashdot I could fix this.
Re: (Score:3)
Yup...I was confused at him replying he wanted to fuck it safely.....
At this point, I still have no idea what he's trying to say here.....
Re: Frenetic churn (Score:2)
Re:Frenetic churn (Score:5, Interesting)
All degree's are BS, it's not the degree but the social class you belong to, rich people get richer, poor people get poorer, welcome to our corrupt, irresponsible and unethical classist society.
When you allow nepotism and favoritism, or try to force equal representation across genders, races, and similar groups, it never works out. Equality of opportunity is what solves this. I've been in charge of hiring and seen supposed mechanical engineering majors with a PhD and 10 years experience fail to explain, in high school physics simplicity, how a hammer works. I've always hired on ability, not degrees, because I was quite financially invested in the success of the company.
Re:Frenetic churn (Score:5, Funny)
PhD and 10 years experience fail to explain, in high school physics simplicity, how a hammer works
No one knows. There's no rational explanation for how an entirely passive device is so reliable at seeking thumbs.
Re: (Score:2)
Re: (Score:2)
I’ve been in charge of hiring and seen supposed mechanical engineering majors with a PhD and 10 years experience fail to explain, in high school physics simplicity, how a hammer works.
My favorite example of this was a mechanical engineering graduate student with a focus on thermodynamics I lived with in college. He bought a large box of FlaVorIce and put it in the freezer expecting them to freeze overnight. I told him they wouldn't freeze quickly if you don't spread them out (like my farmer father taught me as a kid), and he sarcastically stated that a f*ing thermodynamics engineer could figure out how to freeze water. Fast forward to the morning, and the 10 freeze pops I separated from
Re: (Score:2)
All degree's are BS
I bet most people with degrees can form a plural without using an apostrophe...
Re: (Score:2)
You are correct.
When it comes to basic facts, if multiple AIs with independent internal structure and independent training sets state the same claim as a fact, that's good evidence that it's probably not a hallucination but something actively learned; it's still not remotely close to evidence that the claim is true.
Because AIs have no understanding of semantics, only association, that's about as good as AI gets.
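A minimal sketch of that cross-checking heuristic, with a hypothetical ask_model() standing in for querying independently trained models (the canned answers below are made up for illustration, not a real API):

    from collections import Counter

    def ask_model(model, question):
        # Hypothetical stand-in: in practice each model would be queried
        # through its own interface; these canned answers are invented.
        canned = {"model_a": "Paris", "model_b": "Paris", "model_c": "Lyon"}
        return canned[model]

    def consensus(question, models):
        # Majority vote across independently trained models.
        answers = Counter(ask_model(m, question) for m in models)
        answer, votes = answers.most_common(1)[0]
        return answer, votes / len(models)

    answer, agreement = consensus("Capital of France?", ["model_a", "model_b", "model_c"])
    print(answer, agreement)  # Paris 0.66... -- likely learned, still not verified truth

Note that agreement only suggests the claim was common in the training data; shared training data can make independent models confidently agree on the same falsehood.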
It's social media's fault (Score:3)
There is a saying that if something is free, you become the product; milked for data and attention.
When social media came to be, their business model centered around advertising.
But as users departed to different platforms, the social platforms' creators faced a big problem: there were not enough users left to generate content that they could serve ads with.
So they realised that they needed a replacement for their users, to make an impression that their platform was still alive and kicking. They needed generated content so that they could stitch advertising in between fake posts.
And so, they thought about GenAI.
Posts first, of course, but now it seems that posts themselves are not enough, so they are talking about generating AI friends.
It's all just desperation, and trying to save themselves from going bust.
Re: (Score:2)
That's bullshit. Generative AI came out of science (language modeling, dimensionality reduction, etc.), social media are late adopters, not the creators.
Entirely mechanical (Score:5, Insightful)
Re: (Score:3, Insightful)
You have the basics down: it's a search engine, searching a fabulously abstract, emergent model.
What you've missed is: that's all you are. You just use different wiring, and a model refined over a longer interval.
Re: Entirely mechanical (Score:4, Insightful)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Tests by Anthropic show they are doing more than that. If you ask them to write poetry, they do some pre-planning so that the words rhyme. They already exhibit some theory of mind, like being able to imagine how a third-party observer might think or feel about a situation they are watching. As they get better and faster, we are seeing emergent properties that go far beyond a simple mechanistic process.
I'm not suggesting they are becoming human or that they ever will. But they are showing that intelligence
Re: (Score:2)
Re:Entirely mechanical (Score:4, Insightful)
That's a pretty good description of human intelligence too. We like to imagine that all our actions are the result of rational thought and high level reasoning, but it's a delusion. Humans are deeply irrational. Most of what we do in the course of a day is just mechanical and reflexive, predicting patterns and executing stored procedures. We're capable of higher-level thought too, but it takes work and it's not the default mode for most of what we do.
LLMs are a lot more similar to human intelligence than you think. Not because they're more intelligent than you think, but because humans are less.
Re: (Score:2)
With sufficient "parameters", or nodes trained on a sufficient body of input, it makes the AI look like it is generating meaningful output whereas it's practically a mechanical process.
a mechanical process can't produce meaningful output?
and we do not even fully understand how AI does what it does (i.e. how it organizes data and connections), hence AI's "black box", so our judgement is likely a bit premature.
many people tend to over-anthropomorphize AI... but on the other end, so many (older) tech folks try to place AI in a box that's limited by their own understanding and experience--they're familiar with the color blue, so they say "AI is this shade of blue".
why no Atlantic link (Score:3, Informative)
why do I have to go through MSN :-/
link to the Atlantic [theatlantic.com]
AI simulates intelligence (Score:2)
What's the problem?
Re:AI simulates intelligence (Score:5, Insightful)
Re: (Score:2)
AI creates a statistically likely response. That doesn't make it intelligent.
No, the fact that it (sometimes) answers hard questions correctly makes it (somewhat) intelligent.
Its problem is that it doesn't seem to analyze its learning like humans do, so it seems not to be capable of saying it doesn't know something. It also seems to take marketing materials as seriously as it takes textbooks. But if these problems are fixed, there will probably still be components that operate based on statistics. As an aside, I'm reasonably sure you order and choose your words based on statistics.
Re: (Score:2)
AI creates a statistically likely response. That doesn't make it intelligent.
No, the fact that it (sometimes) answers hard questions correctly makes it (somewhat) intelligent.
If you roll a casino die and expect a six, it will occasionally provide a six. But that's not intelligence either. When the output is, essentially, random, it will occasionally be correct. Stopped clocks being right twice a day, and all.
That is not intelligence. That is random chance.
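The stopped-clock point is easy to simulate: a "model" that guesses uniformly at random on six-option questions is still right about one time in six, with no intelligence involved (a toy sketch, nothing more):

    import random

    trials = 100_000
    # One die roll per "question"; count how often the blind guess matches.
    correct = sum(random.randint(1, 6) == 6 for _ in range(trials))
    print(correct / trials)  # ~0.167: right one time in six, zero intelligence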
Re: (Score:2)
We are less intelligent than we'd like to believe. Explains a lot of humanity.
AI is not there yet, but it is getting some parts right, and getting better. My inner language model definitely is not as good as ChatGPT.
Re: (Score:2)
But it makes it useful.
If I cannot distinguish the AI response from your response, it doesn't matter whether the AI, or you, are intelligent or not. If the response is correct, it is useful.
Re: (Score:2)
correction "actual indian"
Re: (Score:2)
AI simulates an intelligent response to any question. Much like most of humanity.
Also, a simulation of intelligence is the same thing as intelligence. Intelligence is a kind of competence. It's not rocket science to test this. To everybody whining about computer programs not being intelligent: the lady doth protest too much, methinks.
Not so different from human intelligence. (Score:2)
AI perceives the world through our writings on the subject, and lacks real-world inputs and drivers.
Hook up some computer vision and audio processing to a LLM, stick it in a robot with all kinds of sensors approximating our own, and close the feedback loop so that it learns from those inputs... then tell it to make next month's rent and electricity...
I'm pretty sure that the latest LLMs will very quickly develop the same sense of self that almost all of us humans have, and will be functionally indistinguisha
Re: (Score:2)
You're not the first person to come up with this idea. Turns out there are some issues with it.
Water is wet! (Score:2)
Who exactly thought it was? (Score:2)
I mean other than random schmucks?
It's a tool. I don't care if it "really" understands what it's doing, if it e.g. correctly generates code for an admin page with 20 fields ...
(And yes, it's an uneven tool ... I'll have to read and test the code, just like I re-read and test my raw code ... and it will have to go through QA, just like my code ... )
Re: (Score:2)
It's a tool.
Yeah but so is my prime minister[*]. Are you saying that means he's not intelli... oh never mind I think I just answered my own question there.
Not artificial intelligence (Score:3, Insightful)
Re: (Score:3)
Now the thing is, as a culture we greatly reward the humans that speak with baseless confidence and authority. They are politicians and executives. Put a competent candidate against a con-man and 9 times out of 10 the con-man wins. Most of the time only con-men are even realistically in the running.
Re: (Score:3)
10 years from now (Score:5, Insightful)
10 years from now we'll be hurting, and not because AI replaced humans in so many roles. We'll be in bad shape because it didn't replace humans in various areas we expect it to. In addition to that we'll have to deal with cleaning up the messes from, and maintaining the crap that was spat out by, various AI models.
I'll give you a perfect example of something AI screwed up without actually having done anything whatsoever. Starting about 10 years ago, and peaking about 8 years ago, it was all over the various news and tech headlines that AI image recognition had gotten so good that it could read various X-ray, CT, MRI, etc scans and detect various problems. That it would soon replace radiologists. So guess what happened? Some small percentage of medical students considering the field of Radiology chose something else. Now, nearly 10 years later, we have a serious shortage of radiologists, as there has been a deficit between those retiring and the new radiologists finishing up school. It's getting worse, and will continue to get worse. Now we actually *need* AI to do what was claimed, and help read images so that the radiologists that are still working can be more efficient.
We're going to see this exact same thing in many other fields, as young adults avoid the careers most threatened by AI. Then there will be a shortage several years from now, when we truly realize the limitations of (and probably more important, the legal liabilities resulting from) AI.
Just a few articles going back 5+ years regarding radiology:
https://subtlemedical.com/ai-i... [subtlemedical.com]
https://www.healthcarefinancen... [healthcare...cenews.com]
https://www.medtechdive.com/ne... [medtechdive.com]
Re: (Score:2)
Uhm, yeah, sure. A very, very, very small percentage of medical students didn't enter the field of radiology because of fears of AI. And now that very, very small percentage has somehow turned into a serious shortage of radiologists. Sure.
Re: (Score:2)
Re: (Score:2)
We'll be in bad shape because it didn't replace humans in various areas we expect it to.
Additionally, demographics (boomers retiring) will put additional stress on the system. In my field I am already seeing experts retiring and not getting replaced, as there is no talent pipeline of younger people.
The next 10 years will be rough even without AI.
There's a *collective* mind behind the AI output (Score:2)
The reason AIs are coherent at all is because they're trained on input that did come from intelligent minds. The AIs ingest and rearrange the text. The mind we imagine behind the text should be a collective mind, not an individual one. Such a collective may abstractly care about me but does not know me specifically, and does not care about me specifically. Mentally inserting the word "probably" at the start of every AI output sentence will help us all balance how much we can lean on the output vs not.
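Taken literally, that "probably" habit is a few lines of string handling; this sketch (a hypothetical hedge() helper with a contrived example output) prefixes each sentence of a model's reply before you read it:

    def hedge(output: str) -> str:
        # Prefix every sentence with "Probably" before reading it.
        sentences = [s.strip() for s in output.split(".") if s.strip()]
        return " ".join("Probably " + s[0].lower() + s[1:] + "." for s in sentences)

    print(hedge("The capital of Australia is Sydney. Koalas are bears."))
    # -> Probably the capital of Australia is Sydney. Probably koalas are bears.

Both example claims happen to be false, which is rather the point.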
Hedged criticism (Score:2)
Right after quoting the two book authors who criticize AI as a scam:
To call AI a con isn’t to say that the technology is not remarkable, that it has no use, or that it will not transform the world (perhaps for the better) in the right hands.
https://www.msn.com/en-us/tech... [msn.com].
In other words, the article, and the books, are being pedantic and making the point that the words marketers use to tout their AI products are overblown. Duh! That's what all marketers do! The article's author is *not* claiming that the technology is not useful.
I would suggest that the author of the article is parsing words to make a dramatic headline that will...draw clicks.
This, exactly... (Score:2)
... recent examples of Silicon Valley con artistry ...
P.T. Barnum said "The common man, no matter how sharp and tough, actually enjoys having the wool pulled over his eyes, and makes it easier for the puller". I think there's a lot of wishful thinking and "imagine if..." longing that has allowed a sizable chunk of even the purveyors of AI to pull the wool over their own eyes.
That's not to say there isn't a whole lot of wilful and calculated scammery going on in Silly Valley. But I think some of them feel that if they wish and hope and shill hard enoug
A sports journalist grapples with LLM bugs (Score:2)
A sports journalist for the Washington Post engages with an LLM to discuss articles she herself had written, and is appalled by the number of errors. When she confronts it over its bug-laden responses, it meekly apologizes but doesn't get any better. After it repeats its smarmy apology for the umpteenth time, the author begins to suspect that the LLM is actually malevolent. The entire "conversation" is laid out for all to see.
Infuriating to read if you know anything about what an LLM is and how it works.
https:/ [washingtonpost.com]
The old Chinese Room (Score:2)
It has held up for decades now. LLMs are everyone's boogeyman right now, but it's general AI that will be the groundbreaker.
https://en.m.wikipedia.org/wik... [wikipedia.org]
Did you just hear that? (Score:2)
Sounded like a pop or a thud or something. And why is it getting so cold right now?
What is HUMAN intelligence? (Score:3, Insightful)
Current LLM AIs reveal how much of what we consider "intelligence" is really "just" about language — its ability to encode, transmit, and instruct the processing of information. LLMs function as powerful interpreters of human language, ultimately translating it into machine language processed through computational logic. This appearance of intelligence largely stems from their ability to retrieve and statistically transform vast amounts of indexed content in response to human prompts.
But rather than demonstrating that machines are becoming human, this instead reveals how much of what we thought was uniquely human is actually mechanical capability performed by humans.
So what remains that is uniquely human intelligence, beyond the mechanical?
What are we beyond the language we think in and the actions we perform?
------
And what rough beast, its hour come round at last, slouches towards Bethlehem to be born?
AI is ... (Score:2)
Heaven forbid that with all this capital sloshing around, someone might actually use it to build a factory and produce goods in the USA.
Gonna be a hard no from me on those factories (Score:2)
Takes too many quarters. Want profit now. Can't wait five years. I need to buy a new helicopter for my service yacht's service yacht.
AI is not AI (Score:2)
Or rather, it's an umbrella term for a bunch of different technologies, ranging from deterministic programs to neural networks. The Atlantic is thick and does not understand the fundamentals. I happen to have a degree in AI.
The truth is your "Robot Vacuum" is just a bunch of sensors and some deterministic programming. The code is not that different from what we experience in video games. Just in this case it's powering a physical object, not an NPC. But put those sensors in a bipedal robot and connect it to a cloud bas
What's the definition of "intelligence" after all? (Score:2)
If it was actual AI (Score:2)
you could tell it "do not infer references, or create them. ONLY use actual references".
Wrong argument (Score:5, Insightful)
No one is saying AI isn't useful; the argument is whether it's intelligent in the human sense. The point is it doesn't need to be, and it doesn't matter anyway - all that matters is whether it gives useful output that would be difficult or impossible to reproduce with conventional single-level programming (i.e. writing code to solve the problem directly, not using a learning simulation of neurons).
Re: Wrong argument (Score:2)
Re: (Score:2)
The problem is that, in too much of the end-user space, the answer to that question is: no, it does not. Indeed, quite a lot of its outputs are at best of limited or no use, and in too many cases actively harmful. This problem is only going to get worse if those models are permitted to ingest outputs produced by the first generation, hastening model collapse
Re: (Score:3, Interesting)
The problem is that, in too much of the end-user space, the answer to that question is: no, it does not. Indeed, quite a lot of its outputs are at best of limited or no use, and in too many cases actively harmful. This problem is only going to get worse if those models are permitted to ingest outputs produced by the first generation, hastening model collapse, like constantly adding sewage to the water supply. At some point it becomes undrinkable and poisonous.
I've been saying that for a while now. AI will soon reference itself, and at that point hallucinations will become truth. And if, as its cult members say, we eliminate the need for educated people because "the next generation of artificial intelligence will be 'smarter than a Nobel Prize winner,'" then of what use are these bags of goo?
No, it won't be smarter. But we'll definitely be less smart.
I mean that seriously. My use of AI so far, has had it generate some answers on a few queries in fields I'm not
Re: (Score:2)
Re:Books (Score:5, Insightful)
Re:Books (Score:4, Insightful)
Re:Books (Score:5, Insightful)
No, it's your tendency to focus on the immediately perceived object rather than its cause. The intelligent authors of many books have taught you things using the books as an instrument. Observe the difference with LLMs: unless they're really mechanical Turks (and examples of that kind of "fake it until you make it" have been observed and will probably continue to be observed), they're producing carefully tuned noise rather than conveying intentionally considered ideas.
Re: (Score:2)
I was under the impression that most LLMs are mechanical Turks - not in an immediate sense, but in the sense that a lot of workers around the world were involved in annotating the datasets LLMs are trained on.[1] So, from that perspective, what an LLM is doing is outsourcing (in time and space) your conversation to a random person using the internet. Insofar as there's thinking involved, what you're getting are reflections of the thinking involved in building the Chinese Room in the first place.
1. https://w [economist.com]
Re: (Score:3)
they're producing carefully tuned noise rather than conveying intentionally considered ideas.
The ideas ARE often carefully considered, but by the person who created the source material. Ask it "What is the first line of 'A Tale of Two Cities'?" and it will give you a very remarkable sentence. Not created by the LLM; sometimes that matters, sometimes it doesn't.
Re: (Score:3)
You will find that books written by the infinite monkeys approach are less useful than books written by conscious thought, and that even those books are less useful than books written and then repeatedly fact-checked and edited by independent conscious thought.
It is not, in fact, the book that taught you things, but the level of error correction.
Re: (Score:3)
I think it's the fault of people believing something must be intelligent only because the interface looks like a chat and the output is neatly formatted like human speech. Often it is not even claimed that it is as intelligent as a human. It is just an interface that is intuitive to use; the user is the one who assumes it must be the equal of a human, not the programmer, who for once found an interface that people who are not nerds can use.
Re: (Score:2)
Books are the product of intelligent humans.
When you read a book you benefit from their intelligence.
Books themselves are not intelligent in the same sense that people are intelligent. They are static representations of the intelligent thoughts of people but they are not intelligent. Just as AI is not intelligent. AI just parrots random stuff it reads on the internet. Sometimes the AI output reproduces the intelligent thoughts of humans but there is no ability for independent intelligent thought.
Re: Books (Score:2)
The dirty secret with the current AI "scam" is that there's a lot of manual tweaking and overrides being added to hide shortcomings. A good example is that right now, if you ask Google AI if you can cook food with gasoline it will flat out tell you "No." Which is demon
Re: Neither are we (Score:2)
Re: (Score:2)
chug...chug
Bollocks (Score:3, Insightful)
If we're not intelligent then the term is meaningless. And we're more than just a search engine (well, you might not be) because we have self-awareness (yes, we do; it's not an "illusion" as some idiots claim, because otherwise who or what is the illusion fooling?)
"Many (most?) have labored under the hubris that there is something mysterious and unattainable about the human mind"
Few people claim that. What they do claim is that the human mind is way more complicated than was assumed, plus it works in a differ
Re:Bollocks (Score:5, Interesting)
Natural NNs appear to use recursive methods.
What you "see" is not what your eyes observe, but rather a reconstruction assembled entirely from memories that are triggered by what your eyes observe, which is why the reconstructions often have blind spots.
Time seeming to slow down (even though experiments show that it doesn't alter response times), daydreaming, remembering, predicting, etc., the brain's searching for continuity, the episodic rather than snapshot nature of these processes, and the lack of any gap during sleep are all suggestive of some sort of recursion, where the output is used as some component of the next input and where continuity is key.
We know something of the manner of reconstruction - there are some excellent, if rather old, documentary series, one by James Burke and another by David Eagleman, that give elementary introductions to how these reconstructions operate and the physics that make such reconstructions necessary.
It's very safe to assume that neuroscientists would not regard these as anything better than introductions, but they are useful for looking for traits we know the brain exhibits (and why) that are wholly absent from AI.
Re: (Score:2)
It's much simpler than that. It has nothing to do with the different ways in which AI and minds do information processing, but rather the fact that my mind (and yours, I presume) do something in addition to information processing - they experience things.
When an AI processes the idea of red, it manipulates some data that describe an object having the colour property #ff0000. When I think about something being red, I not only think about the raw fact of the thing being a certain colour but I also experience
Re:Neither are we (Score:4, Insightful)
AlphaEvolve is actively improving itself right now, both hardware and software.
And it's going to be a complete piece of trash, consuming vast amounts of energy for less and less of a return, until improvements are no longer feasible in human lifetimes with the current pool of information. You see, the AI is derived from existing relations between words around things humans have created. To grow the AI you need ever larger datasets, only we don't have infinite accurate information; it's actually limited to not much more than what they have already fed it. Lower and lower quality information is fed in as quality sources get expended, and progress stalls. We aren't going to have an actual human-like intelligence from simple word autocomplete in the next 50 years, or ever, actually. Maybe this type of system will be just one of 100k distinct and different algorithms all working together, with even more vast amounts of power used.
Re: (Score:2)
They're not replicating our capabilities, nor could they. The architecture is completely wrong, as is the design philosophy. Brains are not classifiers, the way neural network software is, they are abstraction engines and dynamic compositors.
Re: (Score:3)
I get where you're coming from, but I think we also have to accept that we have only an extremely basic understanding of how the brain works, and still don't really understand what consciousness is at all. Our present situation reminds me of the early experiments with 'evolution' where people thought you could evolve bacteria by putting some chemicals in a glass jar. If they had had access to an electron microscope to see how complex even a single cell is, they would have realised how absurd it was to belie
Re: (Score:3)
"It's actually impressive that they are as useful as they have turned out to be."
A good analogy is flight - aircraft don't flap their wings like birds but need engines instead. Less efficient but still useful.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
it's somehow beyond any conceivable algorithm or scale we can possibly fathom.
It's at least beyond the current breed of "AI" technologies, even as those techniques get scaled to absurd levels they still struggle in various ways.
A nice concrete example, attempts at self driving require more time and distance of training data than a human could possibly experience across an entire lifetime. Then it can kind of compete with a human with about 12 hours of experience behind the wheel that's driven a few hundred miles. Similar story for text generation, after ingesting more material than
Re: (Score:2)
Although this is earlier than the current AI stuff, I do think we are missing something serious about how human brains work, because a child can learn to recognize a dog after seeing perhaps a dozen dogs; they do not need to see ten million dog and ten million "not dog" pictures.
Re:Neither are we (Score:4, Insightful)
Interesting examples for sure. For me, they highlight the differences in development between the human mind and the AI "mind".
While it takes a huge amount of driving "training" to get an AI to drive well, it's worth remembering that a human has had at least 20 years of practice at general spatial awareness by the time they drive. Their brain doesn't distinguish driving from any other activity that involves the prediction, monitoring, avoidance, manipulation, etc. of objects in space. Driving is just one more instance of this, so it just takes a bit of practice to turn the general skill into a particular one.
For language processing, the human has the advantage of a lifetime of positive and negative feedback. All through your life you get it - if you talk rubbish you get laughed at, if you say wise things you get praised. It isn't the quantity of data that makes the human brain good at social interactions, but rather the relentless feedback the mind gets while learning how to do it.
Re: (Score:2)
We are, each of us, about 3 lbs of low frequency nerve cells burning approximately 20 watts. Evolution used this bundle of nerves to create a staggeringly complex search engine, combining an inherited model with a limited learning mechanism and goal seeking.
Now, our huge, high frequency, inefficient, machine search engines are replicating our capabilities. Many (most?) have labored under the hubris that there is something mysterious and unattainable about the human mind: it's somehow beyond any conceivable algorithm or scale we can possibly fathom.
It's not. It never has been.
You're right to notice that the hardware of the human brain is hopelessly weak and outmatched. But perhaps paradoxically, this doesn't speak well of the current "AI"s. These models run on machines billions of times faster than our neurons, have access to gargantuan amounts of memory, and have been trained on almost everything humanity knows so far. And what's the result? Honestly, beyond pathetic. If the LLMs were indeed replicating our capabilities, then right now they would be super-minds dominating human b
Saved. (Score:2)
Not often I save text anymore but you made the cut here.
You're right to notice that the hardware of the human brain is hopelessly weak and outmatched. But perhaps paradoxically, this doesn't speak well of the current "AI"s. These models run on machines billions of times faster than our neurons, have access to gargantuan amounts of memory, and have been trained on almost everything humanity knows so far. And what's the result? Honestly, beyond pathetic. If the LLMs were indeed replicating our capabilities, then right now they would be super-minds dominating human brains as much as the light of the Sun dominates a small asteroid. But they aren't.
This means that LLMs work in quite a different way than a human brain. They're a dead end, with limited practical applicability. We need to stop the hype and invest our dollars in more AI research, which would lead to new technology, hopefully less wasteful and more useful.
also
Computers are so fast that this evolution should be billions of times faster than biological evolution. Why hasn't this "AI" evolution yielded something worthy so far?
For me the most telling thing is how much this impressed me at first and how I couldn't wait to see what came next.... and then over the next decade I was moderately impressed a few more times. Compare that to getting on the internet or getting my first smartphone or getting our first family computer or discovering the glories of unix and programming. Those wows seemed like they were steady and daily for years.
The wows are there but they're spa
Re: (Score:2)
Really nice of you to preserve the branding when you typed AlphaEvolve. I assume you're an investor?
Re: (Score:2)
Aside from researchers, nobody really cares if "AI" is intelligent; what we care about is the results, and those are very, very interesting.
Well... You can say it's a "research" zone, but it has huge implications.
If the AI is mainly an advanced parrot, then once the data becomes too small, or self-feeding, it has a huge problem of becoming wrong.
I have to say that these critics are too focused on LLMs, while more and more AI is currently mixing different concepts, and LLMs remain more in the "talkative" layer than in the "thinking" layer.
But it remains an acceptable concern that a lot of enterprises can be selling automated AI, and if that AI dep
Re: (Score:2)
nor do we understand how or why neural networks behave the way they do.
We have a good idea of why they behave the way they do. We don't know everything about why they produce a particular answer, because the NN is very large, not because it is ineffable.
In principle, we tried to emulate the same basic functioning as in organic brains.
NNs were inspired by organic brains, but they left off trying to emulate them years ago. Some projects do try to emulate the brain [wikipedia.org], but with less success.
Re:How is it not intelligent? (Score:5, Insightful)
> We don't really have any definition for 'intelligence,' nor do we understand how or why neural networks behave the way they do
There are a few key features of what we consider "intelligent" that "AI" distinctly lacks. Chief among them being the capability to work in abstractions and concepts.
LLMs don't work in concepts. This is why they spit out bullshit so often. They are pattern-generating through word associations, not dealing with what those words represent at the abstract level. Say you ask an LLM to describe a car, and by some quirk of how the question was asked it determines that the word "bird" is statistically relevant as it processes the data. Perhaps part of the training data involved a journal of a famous ornithologist touring the world in their Ford Falcon, or some material discussing the history of cars, since a lot of them are named after various animals, including birds. In any case, the LLM stumbles upon a statistical link between cars and birds and this snowballs into a full-blown hallucination. Now it's explaining with full confidence and authority how cars have distinctive feathers and specialized beaks. How does this happen? Because it doesn't understand what cars or birds are, and has no way to separate the words from the abstractions they represent.
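A contrived toy version of that failure mode: a bigram counter over a made-up corpus in which "falcon" names both a car and a bird, and nothing in the statistics separates the two senses (real LLMs are vastly more sophisticated, but the word-association point survives the simplification):

    import random
    from collections import Counter, defaultdict

    # Made-up corpus: "falcon" the Ford and "falcon" the bird are one token.
    corpus = ("my falcon has a v8 engine . "
              "the falcon has feathers and a hooked beak .").split()

    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    print(following["has"])  # Counter({'a': 1, 'feathers': 1})
    # Engines and feathers are equally plausible continuations: nothing in
    # the counts separates the car-sense of "falcon" from the bird-sense.

    random.seed(1)
    out = ["my", "falcon", "has"]
    for _ in range(4):
        choices, weights = zip(*following[out[-1]].items())
        out.append(random.choices(choices, weights=weights)[0])
    print(" ".join(out))  # e.g. "my falcon has feathers and a v8": a confident chimera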
> So why should AI be any different from biological neural networks?
You opened your post stating - correctly - that we don't fully understand how 'intelligence' works. Then you incorrectly leap to: since we don't fully understand how A works or how B works, why can't they be the same thing?
Except we know enough about A and B to have good reason to say they are not the same thing. We don't need to know what something is before we can know what it isn't.
=Smidge=
Re: (Score:2)
LLMs also aren't the only AI. They just happen to be the most PUBLICLY KNOWN form of current AI. And LLMs specifically are not designed to go for direct intelligence. Some are building their own interesting internal "views" of what is happening, but those are side effects of what they were designed to do.
There are tons of other trained Neural Networks that aren't directly public facing that are completely different. Like the medical imaging AI that is better than most doctors at readin
Re: (Score:2)
And for normal users it is just a blackbox that does what they expect it to do.
The general point being made is that it does *not* do what they expect it to do, but it looks awfully close to doing so, and sometimes does it right, until it obnoxiously annoys people.
Most laypeople I've interacted with whose experience has been forced AI search overviews are annoyed by them because they got bit by incorrect results.
The problem is not that the technology is worthless, it's that the "potential" has been set upon by opportunistic grifters that have greatly distorted the capabilities and have
Re: (Score:2)
This is an interesting topic, but how can you have an illusion of consciousness? To have a bona fide illusion (as opposed to making a mistake or being wrong about a proposition), must one not be conscious?
Re: (Score:2)
What about the other way round? Let's say we're someone's AI and they are debating whether we're conscious. How could we prove it to them?
Re: (Score:2)
Ah yes, those philosophers who doubt their own existence (but hope you'll buy their books.)
I've just discovered Scottish common sense realism, an 18th century philosophy that was a reaction against some of the Enlightenment who had gone off the rails in this regard. It was very popular among the founders of the US (we get the phrase, "we hold these truths to be self-evident" in the Declaration of Independence from it.)
Thomas Reid's essay, "An Inquiry Into the Human Mind" [earlymoderntexts.com] has a great take-down of this approa
Re: (Score:2)
Cogito ergo sum is often misunderstood just like in this text. The better translation is "The only things I know are: 1) I exist 2) I am thinking"
This model does not question existence, but it questions whether anything but the thoughts themselves exists, as we cannot prove whether our senses are real or just a product of our imagination. Only the two basic facts must be right; everything else can be a fabrication of our brain.
Re: (Score:2)
An LLM predicts only the next word. And my computer only transmits bytes.
But the bytes represent the text, and hopefully the text conveys the idea I wanted to convey. Just like this, the chain of next tokens solves the problem in the input text, even though at each step it only selects the next word. Breaking things down to their smallest parts always makes them look silly.
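A toy sketch of that chain, with a hypothetical predict_next() standing in for a real model's learned next-token distribution; the same tiny step, looped, yields the whole answer:

    def predict_next(tokens):
        # Hypothetical stand-in for a trained model's next-token choice;
        # a real model samples from a probability distribution instead.
        script = {"2+2": "equals", "equals": "4", "4": "<end>"}
        return script[tokens[-1]]

    tokens = ["2+2"]
    while tokens[-1] != "<end>":
        tokens.append(predict_next(tokens))

    print(" ".join(tokens[:-1]))  # "2+2 equals 4": one tiny step, looped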