New Research Reveals AI Lacks Independent Learning, Poses No Existential Threat (neurosciencenews.com) 129
ZipNada writes: New research reveals that large language models (LLMs) like ChatGPT cannot learn independently or acquire new skills without explicit instructions, making them predictable and controllable. The study dispels fears of these models developing complex reasoning abilities, emphasizing that while LLMs can generate sophisticated language, they are unlikely to pose existential threats. However, the potential misuse of AI, such as generating fake news, still requires attention.

The study, published today as part of the proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024) -- the premier international conference in natural language processing -- reveals that LLMs have a superficial ability to follow instructions and excel at language proficiency; however, they have no potential to master new skills without explicit instruction. This means they remain inherently controllable, predictable and safe. "The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies, and also diverts attention from the genuine issues that require our focus," said Dr Harish Tayyar Madabushi, computer scientist at the University of Bath and co-author of the new study on the 'emergent abilities' of LLMs.
Professor Iryna Gurevych added: "... our results do not mean that AI is not a threat at all. Rather, we show that the purported emergence of complex thinking skills associated with specific threats is not supported by evidence and that we can control the learning process of LLMs very well after all. Future research should therefore focus on other risks posed by the models, such as their potential to be used to generate fake news."
Well duh (Score:5, Insightful)
Re: (Score:3)
Yep. It was mostly hype that got us to this point. The people who made the "AIs" (because they're not really intelligent) had a financial interest in them seeming powerful and spooky; it made them seem more valuable. The more examples I see, the more it strikes me that they're really not all that different from a simple Markov text generator in ability, just with a very large corpus and a large text buffer. I've been sure that there had to be more to them than that, but geez, I keep being proven wrong.
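For reference, a word-level Markov text generator really is only a handful of lines; here is a toy Python sketch (the tiny corpus and order-1 chain are purely illustrative, nothing like an LLM's scale):

    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat and the dog sat on the rug".split()

    # order-1 chain: map each word to the words that have followed it
    chain = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        chain[a].append(b)

    def generate(start, length=10):
        word, out = start, [start]
        for _ in range(length):
            word = random.choice(chain[word]) if chain[word] else random.choice(corpus)
            out.append(word)
        return " ".join(out)

    print(generate("the"))

A transformer attends over a huge context with billions of parameters, so the comparison is loose, but the basic "predict the next token from what came before" framing is the same.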
Re:Well duh (Score:4, Insightful)
The researchers in the field have been constantly hinting that they had something more but weren't able to release it or show it to people. Particularly some kinds of models with incremental feedback or with loops in the model that would allow buffers like short term memory for learning. There has always been the message that, "just around the corner there's something amazing".
Companies like Tesla have invested huge amounts in the belief that, having solved 90% of cases, the last 10% would be possible. Luckily for them they look like they will be able to dump this mistake on their customers, but that isn't 100% sure yet. Definitely, if there isn't anything more, plenty of investors are going to lose big chunks of money.
Re: Well duh (Score:2)
Re:Well duh (Score:5, Insightful)
Indeed. Or even more generally, any type of statistical model, really. Statistical models do not have and cannot have reasoning ability. They can only fake it to a degree if enough reasoning steps are in their training data.
The problem is not that the knowledge and insight was not there. The problem is that too many people are too much in love with their own fantasies and do not listen.
Re: (Score:2)
Re: (Score:2)
Exactly. Dunning & Kruger is perhaps one of the most important results about human cognition, ever.
The problem with "experts" is that many that think they are, are actually not. A defining characteristic of an expert is that they know exactly how far their expertise carries and are capable and careful not to overstep those bounds. The other problem with "experts" is that, as you say, no-honor individuals that will simply lie to gain money. That is why in many fields, the qualifications or an actual expe
Re: Well duh (Score:2)
Re: (Score:3)
Even when there are reasoning steps in their training data, it isn't able to understand them in the way that a human can.
For example, ask it to multiply two 4 digit numbers together, and it will likely give you the wrong answer.
Any basic computer with a sufficiently large memory to hold such a large number can do that with ease, but an LLM that has ingested every single book on how to do arithmetic can't do it if it hasn't seen that specific calculation before.
Re: (Score:2)
Even when there are reasoning steps in their training data, it isn't able to understand them in the way that a human can.
Exactly. And it may apply them incorrectly, fail to apply them or apply them in a situation where they do not fit. Hence it can only make a guess, while an actual reasoning ability provides a far sharper tool.
For example, ask it to multiply two 4 digit numbers together, and it will likely give you the wrong answer.
Any basic computer with a sufficiently large memory to hold such a large number can do that with ease, but an LLM that has ingested every single book on how to do arithmetic can't do it if it hasn't seen that specific calculation before.
Yes. An LLM can do what a multiplication table does if it has seen enough multiplication tables. It cannot come up with any extensions of that table. A computer algebra system (which we have had for > 40 years now), on the other hand, can do some automated reasoning and fact-checking in a very speci
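To make that concrete: exact arithmetic and a bit of symbolic checking are trivial for ordinary software. A minimal Python sketch (SymPy here is just one example of a computer algebra system, assuming it is installed):

    from sympy import symbols, simplify

    # arbitrary-precision integer arithmetic is built into Python itself
    print(4327 * 9181)  # always exact, never a statistical guess

    # a computer algebra system can verify an identity symbolically
    x, y = symbols("x y")
    print(simplify((x + y)**2 - (x**2 + 2*x*y + y**2)) == 0)  # True

No training data about these particular numbers is needed; the result follows from the rules themselves.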
Re: (Score:2)
40 years ago takes us to 1984, and the launch of the Apple Macintosh. We had computers doing arithmetic calculations long before that, certainly in the 1940s.
Re: (Score:3)
Re:Well duh (Score:4, Insightful)
Re: Well duh (Score:2)
Re: (Score:3)
It would do if it could do this reliably. However, it is far too unreliable and unpredictable at doing so, and due to the nature of the models it will never get much better at it. As is constantly shown, a little bit of human ingenuity applied to the prompts keeps finding ways to cause output which those running these models will find undesirable. The car sales model selling cars for $1 is one example but there are many more. This is not a fixable problem with these models.
So there are some uses. The versio
Re: (Score:2)
Re: (Score:2)
They are "correlation engines" and a well trained correlation engine looks intelligent *IF* you don't test it with anything funny. Once you start testing it with anything funny it starts producing what appears to be almost random often wrong responses.
The old Eliza program fooled some people, and given the simplicity of that program it is not hard to believe someone could spend some time improving Eliza and get it doing a better job than these AIs. At least Eliza had a section to return a mostly reasona
Re: (Score:2)
Re: Well duh (Score:2)
Re: (Score:2)
Anyone that knows how transformer models work could've told you this.
What we're calling "AI" these days isn't really artificial intelligence. It's a glorified script that can adapt to a history of inputs. AI, as we're using it, is a slick marketing term for "software that will automate your job away". But like all computing/software that came before it, it's stupid. Nothing has changed about "Computers are dumb. They only know what you tell 'em.
Re: (Score:2)
A transformer model could also have told you that.
Re: (Score:2)
Anyone that knows how transformer models work could've told you this.
Well that would come as a surprise to a lot of the researchers working on these models, who have been trying to figure out how the models were achieving the reasoning they showed.
Just because we don't know how emergent abilities emerge doesn't mean that a statistical model with an unreal amount of data can't display emergent abilities. That doesn't mean they're conscious or even "intelligent". But the emergent abilities could be legit.
Also note, this is one paper presenting a theory and trying to prove
Misdirection (Score:4, Insightful)
Re: (Score:3)
Re:Misdirection (Score:5, Interesting)
What morals? Humanity started two world wars and an unending amount of small ones without needing AI fueled disinfo campaigns.
Meanwhile social cohesion is fragmenting in advanced nations due to demographic collapse and mass immigration. Resource exhaustion everywhere, of minerals, water, livable climate and biosphere, functioning antibiotics etc etc. AI use by humanity is a drop in the ocean of existential threats.
AI becoming conscious and deciding to rule is the only realistic way I see for technological society to make it out of this century. Otherwise we'll go back to being moral with more primitive society and weaponry.
Re: (Score:2)
The issue isn't the software (Score:5, Insightful)
There's no chance this statistical software garbage is going to suddenly grow sentient. Anybody who thinks so doesn't understand the tech.
No, the issue is that people will think that the software is actually conscious, and then they'll authorize real world decisions based on the software, often without real human oversight. So far, it's definitely happening in modding, like on Yahoo News. These automated software decision systems will slowly leak out into other parts of life, from insurance to schooling. It's gonna be a clusterfuck.
Re: (Score:2)
There's no chance this statistical software garbage is going to suddenly grow sentient
Are you sure?
I might remind you that we humans were once a bunch of slime floating in a pond. Life sprang up from random chemicals and sentience came out of life after a series of lucky random events occurring over millions of years.
We're fast-forwarding evolution by many orders of magnitude. I wouldn't discount the possibility of the current generation of dumbass generative AIs evolving into something sentient very quickly.
Re:The issue isn't the software (Score:4, Insightful)
I would. It's all how they're made. It's made specifically to _look like someone's thinking_ without any of the actual thought, which is why it frequently turns out to be wrong about things, often in ways that only the people who really know the subject will detect. It's the ultimate bullshitter.
Re: (Score:2)
Indeed. Add that the current hype-AI tech has zero reasoning ability, and the whole idea is just ridiculous.
Re: (Score:2)
Indeed. Add that the current hype-AI tech has zero reasoning ability, and the whole idea is just ridiculous.
You’re right. It’s ridiculous.
Almost as ridiculous as realizing we already went through a dot-bomb era of vaporware, and learned absolutely fucking nothing from it.
In other words, we ignorant fucks deserve our fate. Again.
Re: (Score:2)
In other words, we ignorant fucks deserve our fate. Again.
As a group? Definitely. The average crowd of humans is fucking dumb and incapable of learning from experience.
Re:The issue isn't the software (Score:5, Insightful)
AIs don't evolve.
Also, an LLM is just a fancy statistical model. It can't have any concept of reality, because everything is meaningless to it. It doesn't deal in facts, it deals with rearranging words in ways that it's seen before. It doesn't have intelligence, nor can it think.
All you have to do for the next step is tell it what is real and what isn't. Please consult with your world religions before telling it which gods are real, by the way. Many people will get violently offended if it disagrees with them.
Re: (Score:2)
> AIs don't evolve
Some do - it took all of two days for Microsoft's chat bot to turn into a rampant racist: https://www.bbc.co.uk/news/tec... [bbc.co.uk]
If ever LLMs are allowed to learn based on user input, you can expect them to end up the same way.
Re: (Score:2)
Re: (Score:2)
But that's 'learning', not 'evolution'. Learning means that it summarizes the inputs and can tell them back.
Evolution is when its survival depends on what it says, so instances that say the wrong output are eliminated and therefore the output adapts to better avoid the killing conditions.
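A toy selection loop makes the difference concrete (purely illustrative; the "fitness" function is an arbitrary stand-in for whatever conditions eliminate an instance):

    import random

    def fitness(params):
        # stand-in survival criterion: instances closer to 1.0 survive
        return -abs(params - 1.0)

    population = [random.uniform(-5, 5) for _ in range(20)]
    for generation in range(50):
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]  # instances with the "wrong" output are eliminated
        population = survivors + [s + random.gauss(0, 0.1) for s in survivors]  # mutated copies

    print(max(population, key=fitness))  # the surviving behaviour adapts to the killing condition

Nothing here summarizes inputs or tells them back; the behaviour changes only because the losers are removed.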
Re: (Score:3)
AI models don't learn. They exist as a series of weights in an array somewhere, and those weights never change once the model has been trained. Until they change that limitation - until they design a system that allows the model to change while in operation - they can't evolve on their own.
Not that it seems to be an insurmountable problem, but it isn't any existing AI that will do so.
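The frozen-weights point is easy to demonstrate; here is a minimal PyTorch sketch (a stand-in linear layer, not a real LLM, and assuming PyTorch is installed):

    import torch
    import torch.nn as nn

    model = nn.Linear(8, 2)   # stand-in for a trained network
    model.eval()              # inference mode: no dropout, no running-statistic updates

    before = model.weight.clone()
    with torch.no_grad():              # no gradients are computed or applied
        _ = model(torch.randn(1, 8))   # running the model is a pure function of its inputs
    assert torch.equal(before, model.weight)  # weights are identical after inference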
Re: (Score:2)
Considering what happened when Microsoft attempted a chatbot named Tay [1], I kind of don't blame them for not letting the models learn on the fly...
[1] https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:2)
You don't have to change weights in order to learn. You can make an AI that changes its own input, which it takes next time, in order to remember the past decisions. Here is an example of an AI that builds a library of commands, building more complex commands using previously made simpler commands as building blocks:
https://www.zdnet.com/article/... [zdnet.com]
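The pattern is simple enough to sketch; call_model below is a hypothetical stand-in for whatever LLM API is being used:

    def run_with_memory(call_model, tasks):
        library = []  # commands/notes the model has produced so far
        for task in tasks:
            prompt = ("Known commands so far:\n" + "\n".join(library) +
                      "\nNew task: " + task)
            result = call_model(prompt)   # hypothetical LLM call; the weights never change
            library.append(result)        # its own output becomes part of its next input
        return library

The model itself stays fixed; what accumulates is text that gets pasted back into the next prompt.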
Re: (Score:2)
The implied conclusion is something you can arrive at when you mistake Physicalism for Science. It is not. It is belief.
That said, software running on digital execution mechanisms cannot become sentient, ever. Software of this type is and remains fully deterministic. That precludes sentience reliably.
Re: The issue isn't the software (Score:2)
Re: (Score:2)
Why would I waste my and my student's time with idiotic models of reality?
Re: The issue isn't the software (Score:2)
Re: (Score:2)
That said, software running on digital execution mechanisms cannot become sentient, ever. Software of this type is and remains fully deterministic. That precludes sentience reliably.
You're assuming that sentient life isn't fully deterministic and that digital execution is fully deterministic.
I can simulate the actions of most people on the planet with about 100 lines of code and generating random numbers is a thing.
Re: (Score:2)
You're assuming that sentient life isn't fully deterministic and that digital execution is fully deterministic.
No. I am just applying the definition. Which you apparently do not know.
I can simulate the actions of most people on the planet with about 100 lines of code and generating random numbers is a thing.
No, you cannot. Apparently you have never simulated anything. And I will not get into a discussion about "random" numbers. That is an area where the less people know, the more mysteries they see.
Re: (Score:3)
Software of this type is and remains fully deterministic. That precludes sentience reliably.
Here you are making religious claims about intelligence again.
I agree with you that LLMs cannot become sentient, but not for that reason. It's because they aren't capable of any introspection.
You believe determinism precludes intelligence only because you want to believe that you are special, not because of any evidence. We are still finding complexity in the human brain, and we are still building new types of models which do parts of things we call "thinking". As long as that is true, we cannot speak even
Re: (Score:2)
No, but you are not thinking clearly, or rather you are not conversant with the relevant definitions. Sentience requires more than intelligence. In fact, intelligence is _optional_ for sentience. What _is_ required is a form of consciousness. And that cannot be done by a deterministic system. Completely impossible. Whether it can be done by a purely physical system (which would require randomized quantum-effects to be used, a thing that is not understood at all and can only be modelled) is up for debate.
Here
Re: (Score:2)
Are you sure you are not simply being an insightless asshole there, again?
Re: (Score:2)
The way it is done precludes sentience.
LLMs examine syntax but have no concept of semantics.
Every single natural brain examines semantics, syntax is an addition that comes much later.
My contention is that you could build a NN that is semantic-driven not syntax-driven, and that this could conceivably develop sentience.
Re: (Score:3)
The way it is done precludes sentience.
LLMs examine syntax but have no concept of semantics.
Every single natural brain examines semantics, syntax is an addition that comes much later.
My contention is that you could build a NN that is semantic-driven not syntax-driven, and that this could conceivably develop sentience.
When you pause and stop thinking, are you still sentient?
Obviously yes. Sentience just means a being able to experience whatever phenomena are happening. The phenomena could be sights, sounds, sensations, feelings, and also more cognitive things like intuitively knowing things, and lastly, discrete rational thoughts and symbols. Honestly at the moment we don't know if ants are sentient -- they could be little beings crawling around experiencing the ant-world of phenomena -- point is, sentience is just a b
Re: The issue isn't the software (Score:2)
Re: (Score:2)
There's no chance this statistical software garbage is going to suddenly grow sentient. Anybody who thinks so doesn't understand the tech.
Indeed. And additionally does not understand what sentience entails and requires.
No, the issue is that people will think that the software is actually conscious, and then they'll authorize real world decisions based on the software, often without real human oversight. So far, it's definitely happening in modding, like on Yahoo News. These automated software decision systems will slowly leak out into other parts of life, from insurance to schooling. It's gonna be a clusterfuck.
It is definitely going to be a clusterfuck and one with massive damage to society. All because some assholes had to get rich. That said, I see a lot of bureaucracy as having the potential to be done by specialized LLMs (on the side of the perpetrators) and that may lead to a massive job-loss in 10 or 20 years. The thing is that most of that bureaucracy was accessible to automation before, it was just not cost-effective. That may c
Re: (Score:3)
New Research Reveals AI Poses no Existential Threat (Score:5, Funny)
Famous last Slashdot article.
Re: (Score:2)
Write a Slashdot article.
ChatGPT:
Title: Breakthrough in Quantum Computing: Researchers Achieve Stable Quantum Entanglement at Room Temperature
Posted by: yourSlashdotUserID on Tuesday August 14, 2024 @05:00PM
from the quantum-leap dept.
Quantum computing has long been touted as the next major leap in computing technology, but progress has been hampered by a series of formidable technical challenges. One of the most significant obstacles has been maintaining quantum entanglement — the mysterious link betw
Re: (Score:2)
This was the joke I was searching for...
Bloom (Score:2)
Re: (Score:2)
I put some of my exams through AI. It can only do the low Bloom level and has a 100% failure rate as soon as some understanding is required. I did expect it to get the occasional tricky (but not complex) and non-standard question right, but it was a flat 0% performance on those parts.
Re: (Score:2)
https://www.technologyreview.c... [technologyreview.com]
"solving the six problems given to humans competing in this year’s IMO and proving that the answers were correct. AlphaProof solved two algebra problems and one number theory problem, one of which was the competition’s hardest. ... A human participant earning this score would be awarded a silver medal"
Perhaps you just used the wrong AI?
...yet (Score:3)
is what is missing in this analysis.
Same as with internet or social media... (Score:2)
The biggest danger is the things evil people can do with the new technology.
It will take us at least a couple dozen years to learn to handle it.
Hitler mastered radio, which allowed him to control the German masses. Hopefully we will not reach this point.
Plot twist (Score:2, Funny)
This research was actually carried out by an AI posing as human researchers, to lull us into a false sense of security.
Re: (Score:2)
This research was actually carried out by an AI posing as human researchers, to lull us into a false sense of security.
Or posted by human pretending to be an AI, in order to keep their job.
That needed research? (Score:3)
My take is that was entirely clear from the start. To anybody with a working mind, that is, so only to a minority of people.
Yet (Score:2)
re: subject
US political campaigns (Score:2)
There's already evidence of AI creating deep-fake photos: Why did they ignore that? US political campaigns contain an almost impossible-to-believe absence of facts. With AI, a US political party doesn't need fancy editing of nay-sayers giving an ambiguous recollection: A blurry deep-fake photo will contain the very crime that attack adverts can only suggest.
FTFY: US political campaigns (Score:4)
There's already evidence of AI creating deep-fake photos
There's already evidence of people using "AI" to create deep-fake photos.
I'm sure that's what you meant, but most people are confused on this point, which is the point of the article of course. Gotta be careful how you word things.
This isn't new; people have been making convincing fake photos since photography was invented. With just editing, convincing fake video and audio has been created for decades. Concerning images, the previous leap was Photoshop. Now with "AI" we've got much higher quality, and even completely synthetic video, and it is affordable and accessible. Before too long it will be impossible to trust any photos or video that you see on TV or the Internet (or in a court of law). "Interesting" times are coming.
Makes me think about, among other things, the Witnesses in ...was that Heinlein?
Existential? (Score:2)
No, but they are definitely a game changing tool.
Well, duh (Score:3)
This is obvious, so I'm not sure why anyone had to study it. That said, LLMs are hardly the final development in AI.
One important step will be allowing a system to have an internal dialog, i.e., converse with itself. At present, when you have finished an interaction with an LLM, that's it, it's over. The next interaction begins from the LLM's original state. Allowing a system to be "continuously on" and change over time - that's when we may see something like AGI emerge.
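At its simplest such an internal dialog is just a loop that feeds the model's last output back in as its next input. A sketch only, with call_model as a hypothetical API call, and no claim that this alone produces AGI:

    def self_dialog(call_model, opening_thought, steps=10):
        thought = opening_thought
        transcript = [thought]
        for _ in range(steps):
            # the model's own output becomes the next prompt, so state persists across turns
            thought = call_model("Continue this line of thought:\n" + thought)
            transcript.append(thought)
        return transcript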
Not predictable (Score:4, Informative)
LLMs aren't predictable. Giving them the same input twice does not produce the same result.
Re:Not predictable (Score:5, Insightful)
That's because there's an input that's invisible to you, which is the output of the random number generator (RNG). When you keep that input constant, as well as your own input, the results end up being the same. If you have ever used an image generator, the "seed" you can sometimes keep and reuse is exactly the thing you need to get the RNG to repeat the same input to the model.
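A toy illustration of the principle (not how a real sampler works internally, but the same idea: fix the seed and the "random" choices repeat):

    import random

    def sample_reply(prompt, seed):
        rng = random.Random(str(seed) + prompt)  # same prompt + same seed -> same RNG stream
        vocab = ["yes", "no", "maybe", "it", "depends"]
        return " ".join(rng.choice(vocab) for _ in range(6))

    print(sample_reply("same prompt", seed=42))
    print(sample_reply("same prompt", seed=42))  # identical: same prompt, same seed
    print(sample_reply("same prompt", seed=7))   # different seed, different output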
Re: (Score:2)
Clueless people out there talking about AI (Score:3)
The current AI has no real ability to understand concepts, or to group things together in categories that have not been previously defined by programmers. Without the ability to conceptualize and categorize by itself, AI won't get "free will". Note that this does not mean that poorly implemented AI can't be a threat, but it does mean that people don't need to worry about serving robotic overlords or anything of that sort.
With that said, those without SKILLS will always be the first to have their jobs taken by automation of any kind, which is why training and learning how to do things that require more intelligence than doing basic tasks will be critical for people who want to be able to find and keep their jobs for the next 30 years. If your primary job responsibilities have you doing a lot of repetitive tasks that require a lot of effort but aren't terribly complicated, then you will REALLY want to take classes to gain skills so you can be doing things that aren't that simple; otherwise you will find yourself out of a job in the next 10-20 years, and the older you get, the harder it is to get a new job, no matter how skilled or experienced you may be. For those 50 and older, that becomes a very serious reason to be concerned when it comes to job security.
Re: (Score:2)
AI won't get "free will".
First I think you'd have to define exactly what you mean by free will, before that can of worms can be properly opened.
Re: (Score:2)
Answer the question, "What do you want to do?". Being able to do things, vs. the desire to do things. If you have the choice of 25,000 different things, without something pre-setting decision "weights", you now have random choice, which means there is no decision-making involved, or, you have the ability to sift through and make a decision based on a desire, something that improves itself, improves other things, harms itself, or harms other things, or something else. If there is no preference for thing
Well yeah (Score:2)
I'm not an AI expert but I would expect the massive amount of money it costs to train one before it's ready to be put in a can and used pretty much tells anyone that the current models are not 'evolving' once put in service. I suppose we could build models that would fine tune in service but I don't believe that's how they work outside accumulated state from past prompts.
Or am I completely wrong? Always good to learn, and I welcome our new AI overlords.
When Theory meets Reality. (Score:2)
(AI Theory) ”No Existential Threat Here!”
(AI Reality) ”Did we fire another 5,000 workers yet? We’ve got AI to invest in!”
Maybe the clickbait pimps will figure it out in the unemployment line..
Re: (Score:2)
The existential threat implied is a bit different to what a sql database did to office workers.
Re: (Score:2)
The existential threat implied is a bit different to what a sql database did to office workers.
Not according to the reason/excuse CEOs are giving for mass layoffs. Greed isn’t always smart. But it is always greedy.
Those standing in the unemployment line, don’t really give a shit if you call it a threat or a chicken sandwich. They’re still standing in the unemployment line validating a threat Greed wants to deny.
I feel much better now.... (Score:4, Funny)
I feel much better now....or I did, until I learned that the study was authored by an LLM chatbot.......
csw
Current models - current LLM models (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Yes, but that is because current LLMs use a transformer architecture. Other architectures, such as state space, are coming. I am not certain, but I believe that a state space model might be amenable to ongoing training.
I think we’re getting far too hung up on the human concept of “learning” here with AI or LLMs.
Here's an example: learning a foreign language. Humans have to actually learn it. How to speak it. How to write it. How to converse and translate with it.
In comparison, do you really think the LLM/AI has to “learn” that language after we upload every word, language rule, variation and permutation of that language into the system, or do you think that upload action turns “learni
Re: (Score:2)
Hi. But do humans really "learn" a language in a cognitive sense? Or do we become familiar with phrases that convey the meaning that we want to convey?
Years ago I sought to learn the guitar. I did not pay attention to the theory - I just tried things, and found the patterns that created the sounds I wanted. When I play (not well), I don't really know what I am doing.
I think that a lot of human learning is like that: unconscious.
I am sure you know that today's LLMs don't have any pre-programmed grammar rules.
Re: (Score:2)
>> do humans really "learn" a language in a cognitive sense.
The generally held position is yes we really do, and there are plenty of studies that back that up.
Re: (Score:2)
I am unsure that ongoing training without strict guardrails will ever be a thing in commercial AI. Look at humans - there is no quality assurance on our training, the training data is not curated, and we occasionally get some very bad results.
Re: (Score:2)
Maybe I'm reading your point wrong, but you apparently have this exactly backwards.
Most if not all training currently is without strict guardrails. There's not even a standard for guardrails yet.
Training data generally is curated though, at least for antisocial stuff like extreme politics, racism, sexism and porn.
Re: (Score:2)
LLMs have a few issues.
Training on synthetic data is like trying to make a perpetual motion machine - it doesn't work and everything decays to uselessness pretty quickly. Training on real-world data rapidly causes the LLM to turn stupid and offensive to the point it is commercially unviable.
The typical guardrails right now are "freeze it on release" and/or "add output filters". Not unheard of is, "oops, we let that thing keep training on interactions with the public, and now it wants to start the Fourth Re
Re: (Score:2)
LLM's aren't retraining or fine-tuning themselves during the conversation,
Actually they kind of are, by caching the last n interactions and including that data in the next interaction, but no, they are not persistently modifying their own model. Yet.
The current (hacky) workaround is to just increase the size of n, but doing so obviously imposes more processing to generate each response.
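That workaround is literally just re-sending recent turns; a minimal sketch, with call_model as a hypothetical stand-in for the actual API:

    history = []
    N = 8  # how many recent turns to re-send (the "n" being increased)

    def chat(user_msg, call_model):
        history.append("User: " + user_msg)
        prompt = "\n".join(history[-N:])   # only the cached tail goes back into the model
        reply = call_model(prompt)         # hypothetical LLM call; the model itself is unchanged
        history.append("Assistant: " + reply)
        return reply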
Re: (Score:2)
Assuming the research is solid, there's no particular reason to automatically assume we are going to get somewhere really new.
Despite all the investment we've been able to scale up (with diminishing returns) and 'tweak' the application of the methodology proven out in 2018 (which took a couple of years to scale up to an actually compelling demonstration). There's no particular sign of a new fundamental breakthrough on the horizon. It might happen, or not, it's hard to predict given the current state of
bait and switch (Score:2)
But that's not the issue.
MBA Groupthink is the issue: Save Money. Lay off people doing drudgery. Replace with low monthly fee to {Large Tech Company}. Increase share price. Get Xmas bonus. Thinking is for old people who don't use AI.
Here's a great idea: Get AI to manage the nuclear arsenal in real time.
It's going to happen. That's an existential threat, imho.
Nice try AI... (Score:2)
Sure.... (Score:2)
I bet an AI wrote that report, just so that we would not worry about their nefarious plan to TAKE OVER THE WORLD!!!!
MWA HA HA!!!!
More new research has revealed that water is wet (Score:2)
Also: the sky is blue
Survey Says ... (Score:2)
New Research Reveals AI Lacks Independent Learning, Poses No Existential Threat (neurosciencenews.com)
But that's exactly what the AIs want you to think!
OMG, it's starting!!! It's almost here!! Head for the hills!!!!1!
Stochastic Parrot (Score:2)
There is no intelligence in AI.
It just regurgitates random stuff it reads on the internet. Eventually all of its training data will come from AI generated content, leading to a doom loop of gibberish.
"ML can only interpolate, not extrapolate" (Score:2)
Re: (Score:3, Interesting)
TBH I should clarify that a lot of normal people appear to also not have the I either.
(And remember Ford Prefect was given the suggestion to replace his brain with a very simple LLM.)
Re: (Score:2)
Exactly. People think that the AI must be 100% correct in order to beat humans, for example in the job market, but I have literally written a few lines of script that gets executed on demand, and that script replaced the work effort of one man, who then got fired as there was nothing else he could do. Developing AI is hard only because we demand so much more from it compared to the worst humans.