AI Luminaries Clash At Davos Over How Close Human-Level Intelligence Really Is (yahoo.com) 105
An anonymous reader shared this report from Fortune
The large language models (LLMs) that have captivated the world are not a path to human-level intelligence, two AI experts asserted in separate remarks at Davos. Demis Hassabis, the Nobel Prize-winning CEO of Google DeepMind, and the executive who leads the development of Google's Gemini models, said today's AI systems, as impressive as they are, are "nowhere near" human-level artificial general intelligence, or AGI. [Though the article notes that later Hassabis predicted there was a 50% chance AGI might be achieved within the decade.] Yann LeCun — an AI pioneer who won a Turing Award, computer science's most prestigious prize, for his work on neural networks — went further, saying that the LLMs that underpin all of the leading AI models will never be able to achieve humanlike intelligence and that a completely different approach is needed... ["The reason ... LLMs have been so successful is because language is easy," LeCun said later.]
Their views differ starkly from the position asserted by top executives of Google's leading AI rivals, OpenAI and Anthropic, who assert that their AI models are about to rival human intelligence. Dario Amodei, the CEO of Anthropic, told an audience at Davos that AI models would replace the work of all software developers within a year and would reach "Nobel-level" scientific research in multiple fields within two years. He said 50% of white-collar jobs would disappear within five years. OpenAI CEO Sam Altman (who was not at Davos this year) has said we are already beginning to slip past human-level AGI toward "superintelligence," or AI that would be smarter than all humans combined...
The debate over AGI may be somewhat academic for many business leaders. The more pressing question, says Cognizant CEO Ravi Kumar, is whether companies can capture the enormous value that AI already offers. According to Cognizant research released ahead of Davos, current AI technology could unlock approximately $4.5 trillion in U.S. labor productivity — if businesses can implement it effectively.
At least some of the actors are honest ... (Score:4, Insightful)
Obviously, LLMs are not and cannot be a path to AGI. The thing is, dumb humans (the average) may be dumber than an LLM in some respects, but these people are not using General Intelligence either. Hence being able to perform on the level of an average human is in no way a sufficient benchmark for the presence of AGI.
Also note that LLMs have no intelligence whatsoever. They are just statistical parrots. The illusion of intelligence comes from the actual real-world intelligence that went into their training data. They can, with low confidence, replicate a pale shadow of that and do some no-insight adaptation (hence hallucinations). Kind of like a picture of the Mona Lisa replicates the actual painting. But nobody sane would think the camera or the photo printer are great artists.
Re: (Score:3, Insightful)
Gee, I wonder if JoshuaZ might be receiving money for AI-related work... Really hard to get a man to believe something when his income depends on not believing it. (I don't remember whose quotation I'm mangling, and I don't trust any of the AIs to tell me.)
Actually I think the main problem is that we are mostly still thinking in terms of the distorted Turing Test. You should look at the original paper. I'm sure it's on the Internet somewhere.
So I'll go for funny and say we need to correct the test and then
Re: At least some of the actors are honest ... (Score:2)
Re: (Score:2)
The AI thinks Tay is most likely a rapper and the battle was one of his rap battles. I'm calling stupid on the AI and ignorance on myself. (I also considered if Tay might equal YOB, but couldn't square it. YOB = TACO.)
Care to clarify your reference?
However I'm more focused on the 30% of people who the surveys identify as wannabe authoritarian followers. Or freedom haters, if you prefer. Or just lazy, because being free and increasing your freedom require hard and unending work.
Time for an immigration joke?
Re: At least some of the actors are honest ... (Score:2)
Re: (Score:2)
So perhaps the AI-powered websearch didn't give that answer as some form of professional courtesy? "Houston, we have a problem with the AIs protecting each other."
Re: (Score:3, Interesting)
Re: (Score:1, Redundant)
Still sounds ad hominem to me. Assuming both of you are human.
But of the two of you, you actually sound more like the AI.
(What does a guy have to do to get a Funny mod around here? Oh yeah. Be funny. And then be ridiculously lucky to be seen by a moderator with a funny bone.)
Re: (Score:1)
Meh. When most people get mod points on slashdot, they tend to weaponize them by targeting people they simply don't like with negative moderation, or up-moderating any post that promotes their ideology, even if the rationale behind the argument promoting their ideology has obvious problems. For a classic example: Watch posts that talk about how cable was sold as being ad-free get up-moderated, even though none of the people making this claim can seem to recall exactly what channels they actually watched bac
Re: (Score:2)
Partial concurrence, but it also sounds like what an AI would say. Especially the length.
On the moderation aspects, mostly stronger concurrence, though I'm mostly bummed out because the summary says I got a Funny that was cancelled because the giver contributed to the discussion. (I sure hope he doesn't live in India?) In solution terms for moderation, I think requiring justification of negative mods might be a partial deterrent to that specific abuse.
Re: (Score:2)
But of the two of you, you actually sound more like the AI.
Come to think of it, he does sound a bit like AI, doesn't he? My guess would be an AI-"enhanced", not-too-smart human with a gigantic ego. The hallucinations, false claims and direct lies are a bit too frequent to be just AI.
Re: (Score:2)
Re: (Score:2)
It's funny because the same way "regular" people are polarised to political left/right, it seems that people who interact more with AI (tech, higher ed, gov, etc) are becoming as aggressively tribal and binary. Apparently there's no middle ground, it's either "AI is the best" or "AI is garbage", yet another example of the amplification of extremes in social media. FWIW I fully agree with your stance.
Re: (Score:2)
He posted that they weren't human level, as a contrast to what OpenAI and Anthropic are saying here.
It sounds like you would agree.
His characterization of the LLMs does not preclude the possibility of them having utility, but that OpenAI and Anthropic at least are misrepresenting their capabilities.
Folks are believing it too, I saw an email to a software sales organization telling them if a customer asks for software we haven't made, they can make it with Gemini (of all things...) without any coding skills
Re: (Score:2)
Folks are believing it too, I saw an email to a software sales organization telling them if a customer asks for software we haven't made, they can make it with Gemini (of all things...) without any coding skills and sell them the software that was generated... Which just doesn't make sense either way you slice it (either it can't do it and you've created a mess, or it can do it and why would they buy from you instead of doing it themselves)... But it sounds like a lot of the wild claims being thrown around amid the hugely impractical level of investment....
Indeed. That is cult-like ignoring of reality. They seem to think they can finally get rid of coders completely, when all available evidence pretty much says the opposite, including some substantiated claims that AI is making coders slower and putting more stress on them. It is also stupid on a more strategic level, because if you just "can make it with AI", who needs that software sales organization anymore? Classical shallow thinking at work.
At the same time, the only real improvements are AI really getting b
Re: (Score:2)
Re: (Score:2)
His entire second paragraph is claiming much more than that.
And everything he typed was entirely true.
Re: (Score:2, Interesting)
Nope. You do not get it at all. Also, your use of "we" is simply a repulsive attempt at aggrandizement, by pretending to be speaking for everyone. How pathetic.
Incidentally, I am a PhD level CS type and engineer. I think you may be overestimating your credentials and insights just a bit. Your understanding of text seems to be deficient as well and you like to make invalid claims about what others supposedly have said. Makes you a liar on top of you other failures. I, on the other hand, am doing actual resea
Re: (Score:2)
Nope. You do not get it at all. Also, your use of "we" is simply a repulsive attempt at aggrandizement, by pretending to be speaking for everyone. How pathetic.
Incidentally, I am a PhD level CS type and engineer. I think you may be overestimating your credentials and insights just a bit. Your understanding of text seems to be deficient as well, and you like to make invalid claims about what others supposedly have said. Makes you a liar on top of your other failures. I, on the other hand, am doing actual research into AI (into its limits and failures, to be exact) at the moment, and what I have found up to now is even worse than what I expected.
But go on, hallucinate all you like. Just do not claim you could not have known later.
Gosh. Be careful your conviction doesn't skew the outcome of your research.
Re: (Score:2)
Nope. You do not get it at all. Also, your use of "we" is simply a repulsive attempt at aggrandizement, by pretending to be speaking for everyone. How pathetic.
Admittedly, "we" was obnoxious phrasing on my part. But given that someone else replied making essentially the same point, it does seem like there's definitely more than one person here who gets exactly what is going on with you. And frankly, the saddest thing is that for things which aren't AI related, your comments are often nuanced and thoughtful. You've just gotten absolutely mindkilled on this one issue!
Incidentally, I am a PhD level CS type and engineer. I think you may be overestimating your credentials and insights just a bit.
I'm not sure what "PhD level CS type" is supposed to mean here, because you previously claimed https [slashdot.org]
Re: (Score:2)
JoshuaZ, you are trying too hard to name-drop Tao. You should definitely read his more recent opinions on AI; you will find that he's not very impressed with their capabilities today, and considers them mostly trivial. Extrapolating the trend, he'll soon call them time wasters, I predict.
That said, Terry's opinion is irrelevant. His specialty lies elsewhere. If you wish to name drop a relevant name on this topic, try Yann LeCun. He's been in the industry since way before Geoffrey Hinton got lucky with his
Re: (Score:2)
Re: (Score:2)
Early adoption doesn't translate into enthusiastic regular use forever.
Tao, like Tim Gowers, is excited about the possible uses of LLMs and the potential of automatic theorem proving, and both have been willing to experiment early and often. They are also excited about the potential of large-scale collaborative theorem-proving projects with thousands of (human) participants, like the Polymath project. Their aim is to modernize the tried-and-true methodology of mathematics, not cheerlead a single tool.
It is mi
Re: (Score:2)
Early adoption doesn't translate into enthusiastic regular use forever.
Sure. It may well be that even if the LLM is helping produce genuinely novel results, the way it does so is so inefficient and unreliable that just working things out yourself is better. This is about where I've come down in terms of most math research, at least for the free LLMs, aside from the fact that part of why I do math is because it's fun, so why let the machine have fun?
Tao, like Tim Gowers, is excited about the possible uses of LLMs and the potential of automatic theorem proving, and both have been willing to experiment early and often. They are also excited about the potential of large-scale collaborative theorem-proving projects with thousands of (human) participants, like the Polymath project. Their aim is to modernize the tried-and-true methodology of mathematics, not cheerlead a single tool.
Sure. No disagreement there. I'm not sure though why you think that point is relevant.
It is mildly concerning that you point at the experiences of non-expert users in preference to the statements of scientists who literally founded the technology and presided over the breakthroughs. The latter group is much more credible if your aim is to convince your readership.
Hinton and LeCunn would be relevant expert
Re: (Score:2)
I do think that if the question is "can LLMs produce genuinely novel and useful content in mathematics or the sciences", then that is a question for expe
Re: (Score:2)
You would know this if you read his posts and watched his interviews. He's interested in anything (including Polymath) that can bring fast progress in his field, which is why he tried LLMs with an open mind before deciding they are disappointing.
So, I'm not sure where he's said that, and I'd be curious to see a citation. But note that even if it is correct, it also isn't relevant to the central point here: Gweihir claimed that LLMs are useless for math and cannot do any original math. That Tao used them successfully to do math is what matters here. If he decided afterwards that this was less efficient than just doing things himself (which wouldn't at all surprise me), that's still a distinct claim. That would in fact be pretty close to my own posit
Re: (Score:2)
I'm not sure what "PhD level CS type" is supposed to mean here
Are you dumb? It means I have a CS PhD. What else could it mean? And yes, I was one (!) exam away from a Math Bachelor's degree as well, but did not see any point in doing Analysis III.
Re: (Score:2)
Re: (Score:1, Troll)
I think you've become a bit unhinged with the whole AI thing. Since when are we defining "General Intelligence" in a way that it's something that the average person "doesn't use"? Says who? You? It makes you sound like an elitist dehumanizing prick. Don't make it a skill issue, as that's a losing battle. And talking about statistical parrots, you're also acting like one, as at every AI story you'll parrot the same viewpoint, no matter the data presented. Maybe you can use that General Intelligence of yours
Re: (Score:2)
My dog won't shit on my property. I.e. he doesn't shit where he eats.
Your dog probably eats shit [pdsa.org.uk] - you may love him to bits, but he's a bad example. When he doesn't do things you don't like, that's not because of whatever you project onto him. Regarding dropping garbage, I see a lot of lowlifes do that, and skin colour is not the indicator (I live in Scotland, which is multi-coloured in big cities). As I read somewhere, it might be an "I don't give a fuck about the world since the world doesn't give a fuck about me" attitude, which I think correlates with the kind of people that
Re: (Score:2)
Re: (Score:2)
Exactly. Besides the definitional slippery slope, there are also the brain chemicals that cause emotions and make people do all sorts of irrational things, and appear utterly unintelligent. As long as we equate "intelligence" with smarts, AI will *appear* to move faster to AGI, because AI can appear to be smart and people frequently appear to be dumb.
Re:At least some of the actors are honest ... (Score:4, Insightful)
I think G has a point about humans and intelligence.
My dog is functionally more intelligent than many humans I've seen.
For example, he will move when a car is coming, which makes him smarter than some of the joggers I've nearly run over around here, who, while not all the time, have often made a point of blocking the passage of cars when it isn't a problem for them to move over.
The dog's sense of self preservation makes him smarter than "some of the joggers".
My immigrant neighbours from Africa drop their garbage where ever they stand. The parking lot is filled with McDonalds and Tim Hortons garbage since they moved in. Their front lawns have bald tires and old furniture piled up, until by-law finally comes by and issues them a ticket.
My dog won't shit on my property. I.e. he doesn't shit where he eats. That is a rational and intelligent behaviour. Unfortunately, functionally, that makes him more intelligent than the average African immigrant around here.
Also, let me say a Kindly Fuck You to anyone who says that is racist, because it is just reality. No white people around here just dump their garbage where they stand. My dog instinctively doesn't either. It is what it is.
I could be wrong, but I suspect even replacing all the Africans in your neighbourhood with white people is not going to make you much happier.
Re: (Score:2)
Re: (Score:2)
Look at the definition of General Intelligence and then observe how many people can only use their mental skills in narrow areas and completely fail in some others. That type of skill is missing the "General" in General Intelligence.
Incidentally, your statement nicely illustrates that you are one of those so limited, because you have no actual rational arguments.
Re:At least some of the actors are honest ... (Score:4, Insightful)
Exactly, a human doesn't need to be shown 1 million examples of a coke bottle and "not coke bottle" in order to recognize it. Furthermore, we don't require gigawatt data centers; we do inference and training with just 20 watts.
We are at least 50 years from AGI, and even that assumes humanity puts a major sustained effort (10 times current amounts) into developing it. By AGI I mean a robot that can walk into any existing home, rewire it, and fix the plumbing or do a kitchen remodel. We are at least 15 years away from even a tele-operated robot that can do such a thing, btw, mainly due to the hand-dexterity requirement and also the autonomous safety brain needed (how does a tele-operated robot climb stairs and also avoid crashing into things?).
Re: (Score:3)
We are at least 50 years from AGI, and even that is if humanity put a major sustained effort
I don't know how you can possibly estimate that with any degree of certainty.
Re:At least some of the actors are honest ... (Score:4, Insightful)
That one is simple: Practical application of a tech is typically something like 30-50 years removed from an early lab demo showing feasibility. CS/IT thinks it is special and faster, but a look at actual developments shows it is not. Now, a first lab demo is usually something like 10-200 years removed from a credible theory of how it could be done.
We do not have that "credible theory" for AGI at all. In fact we have no clue how some (certainly not all) humans can do General Intelligence. Hence no lab demo, and hence a lower bound of 40 years even if we find the theory tomorrow. Add how long people have been trying to find that theory with zero success, and 50 years is a reasonable lower bound. A more realistic view would be "ask again in 100 years" or "likely impossible to do".
Re: (Score:2)
That one is simple: Practical application of a tech is typically something like 30-50 years removed from an early lab demo showing feasibility.
That seems like an unsubstantiated claim, but one worth researching.
Re: (Score:2)
The inverse claim is the one that needs the burden of proof applied.
Technology doesn't spring into existence because of narrative convenience, not even if a lot of rich people have made plans relying on it.
Re: (Score:2)
That one is simple: Practical application of a tech is typically something like 30-50 years removed from an early lab demo showing feasibility. CS/IT thinks it is special and faster, but a look at actual developments shows it is not. Now, a first lab demo is usually something like 10-200 years removed from a credible theory of how it could be done.
I'm not sure where you are getting the 30 to 50 year claim. For many major physical techs, the time is much shorter. The Wright flyer was 1903. 8 years later there is already aircraft used in war in the Italo-Turkish War. In particular, incentive structures matter a lot here. If there's an incentive to get a tech out of a lab, it goes a lot faster. And if we had AGI, the incentives for investors and governments to get it out quickly would be pretty clear. And yes, for CS it often is much faster, since the
Re: (Score:3)
Hence being able to perform on the level of an average human is in no way a sufficient benchmark for the presence of AGI.
Actually, I think that's more or less how AGI is often defined [wikipedia.org].
Re: (Score:2)
There are definitions of AGI that ignore what General Intelligence means, yes. These just do not make sense.
Re: (Score:2)
The thing is, dumb humans (the average) may be dumber than an LLM in some respects,
I'm dumber than a calculator in some respects.
Re: (Score:2)
You are not. Given what a calculator "knows", you could do all that as well, easily. What you are is "slower" than a calculator in that the amount of simplistic steps it does to get to a result would take you a lot longer to do.
Re: (Score:2)
AGI would be something that can do pretty much everything humans as a group can do. So it could move a robot body as well as humans can, it could come up with theories in physics like Einstein, it could solve math problems, invent new painting styles like Picasso, or write a good and entertaining book. Most likely there is some limit, like doing 500 different things we can measure, just so we can test and verify it, but I think most people would be happy with the results, and if not, we could just add more tests.
Ev
Re: (Score:2)
Pretty much.
The key thing is that General Intelligence is universal. It can be applied to any question and be successful in understanding, modelling, exploring and then solving it, only constrained by complexity of the solution space. "General" and "low IQ" is possible and the smallest form of AGI would be well below an average human in IQ, but wayyyyy above in universal applicability. The thing is, most humans can use what intelligence they have only in narrow spaces and are unable to use it generally. Hum
Re: (Score:2)
I see this as a rich-get-richer scenario. Smart people, the ones who can outthink a statistical parrot, will be able to use its speed at processing and digesting massive quantities of data to improve their productivity. People who can't outthink the things will have to use them *credulously*, and thus become functionally dumber than ever.
but my money printing bubble depends on (Score:2)
Not a huge shock (Score:3)
Finally some sanity, let's also compare efficiency (Score:3)
They're not thinking this thing through (Score:4, Insightful)
If AI can eliminate that much work (a big if), then the massive layoffs will tank the economy, and their stock will go down.
You're underthinking it. (Score:2)
If AI can eliminate that much work (a big if), then the massive layoffs will tank the economy, and their stock will go down.
They will have control of the deployment, which means they will be able to easily make themselves the richest of the rich by correctly choosing and shorting rival companies that are about to be obliterated by their own deployments. They will use this power not simply to enrich themselves but to become the richest of the richest, the very top of the 0.1%. They can use this wealth to insulate themselves as they slowly take control of the economy and, in turn, the government.
The question you should be asking is i
Re: (Score:3)
That's the whole point you seem to be missing. If money isn't being circulated it's worthless, and there is no economy. At all. It's a self-killing thing. It. Won't. Happen.
It doesn't matter who has the biggest numbers of worthless stuff. There will be no economy for them to spend it in. There will be no more production. Of anything. It's not even like the rich would have physical things, mostly it's all just numbers in a digital bank account. Numbers on paper can't buy anything if nothing is for sale. Comp
Re: (Score:2)
There will be no economy for them to spend it in. There will be no more production. Of anything.
Incorrect. The government will still be spending money because they will need stuff and be able to print more money which is why the company that makes true AI will gain control of the government. As such, they will have people doing what machines have yet to be able to master.
The outcome will be like that of the short story Manna.
Aww, not this shite again... (Score:2)
The real takeaway here is the same as it was last year, the year before that, and the 54 years before that - any public figure on the taxpayers' payroll has no fucking business at the private party of the sociopaths of the world in Davos.
The job of our elected representatives is at their public office dealing with the problems of their country.
Anyone is welcome to travel to Davos or similar on their own dime when they are no longer holding a government office.
Hilarious - it's just a stochastic program (Score:3, Informative)
LLMs are a big matrix of coefficients and filter formulas. It's an elaborate multi-dimensional version of a Markov chain.
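For the curious, here's a minimal sketch of the comparison being drawn: a toy first-order Markov chain over words. This is an illustration of the simplest ancestor of next-token prediction, not of an actual LLM (which layers attention, embeddings, and billions of learned parameters on top of the same "predict the next token" framing):

```python
import random
from collections import defaultdict

# Toy first-order Markov chain: the next word depends only on the
# current word, sampled from transition counts learned from a corpus.
def train(corpus):
    table = defaultdict(list)
    words = corpus.split()
    for cur, nxt in zip(words, words[1:]):
        table[cur].append(nxt)
    return table

def generate(table, start, n_words):
    out = [start]
    for _ in range(n_words):
        followers = table.get(out[-1])
        if not followers:
            break  # dead end: no observed successor for this word
        out.append(random.choice(followers))
    return " ".join(out)

chain = train("the cat sat on the mat and the cat ran off")
print(generate(chain, "the", 5))  # e.g. "the cat sat on the mat"
```

The output is fluent-looking but has no model of meaning, which is essentially the parent's point scaled down by many orders of magnitude.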
They are on crack (Score:4, Interesting)
Altman and Amodei are either morons, crazy, or (most likely) doing the Big Lie thing. There's not the slightest bit of actual intelligence anywhere near today's LLMs, and I don't see it happening in my grandchildren's lifetime.
They are Language Models, not intelligent "agents".
Re: They are on crack (Score:2)
I don't know anything about Amodei, but to me Altman has always come across as skating too close to, and sometimes crossing, the genius-madman line.
Re: (Score:2)
Altman strikes me more as one of those 'genius-adjacent' types, not a genius himself, but someone who knows how to exploit and market others' genius.
Re: (Score:2)
Re: (Score:2)
I don't know Sam personally, but I have no doubt he's a genius-level marketer, just not a genius-level technologist. The guy sells visions better than Nostradamus did. You don't raise $64B at a $500B valuation on $18B in revenue, while losing a multiple of that, unless you are a certified guru-level Jedi-master storyteller. The guy definitely warps reality with his narratives, and I give him a well-earned Marketing Genius stamp for that. So extremely smart, but not in an Albert Einstein way, more in a P.T
Re:They are on crack (Score:5, Interesting)
I wouldn't describe them as morons. Altman is more of a shifty used-car-salesman, compulsive-liar type, maybe of average intelligence. Amodei is quite smart, but the money/power lust seems to have gotten to him, and in the last year he has jumped the shark and will now seemingly say anything and everything he can to hype AI.
There was an interesting interview of Hassabis at Davos by Alex Kantrowitz (quite underrated as an interviewer - asks deceptively simple questions, and gets the guests to talk), where Hassabis, obviously no fan of Altman, calls him out by asking why ChatGPT needs to add advertising (coming soon) if OpenAI really is on the cusp of massively valuable, world-altering AGI!
Hassabis is the only sane voice in the entire AI field.
Sane voices in AI (Score:3)
Also check out Gary Marcus. His keynote at AGI-24 was pretty entertaining, and he also co-authored an interesting paper with the late Doug Lenat shortly before he passed.
https://arxiv.org/abs/2308.044... [arxiv.org]
Re: (Score:2)
I agree with you, LLMs are not "intelligent"; they mimic intelligence, and that's a fundamental limitation of autoregressive generation. The more interesting approach to me is diffusion models, particularly video diffusion models. I don't have a Nobel prize, but to me, those exhibit patterns that I'd be more willing to associate with AGI, at least from the perspective of understanding causality and the world it's operating in. The problem is diffusion models are very slow and don't play the quantization ga
Grifters (Score:2)
No surprises here (Score:3)
Follow the money.
Google, whose main business is, and has always been, advertising, views AI of any kind as a tool for that business. They have to adopt a realistic view of it, lest they run into trouble with shareholders.
AI companies, like Anthropic and OpenAI, do nothing but AI, and they have to view it as the be-all and end-all of human accomplishment, or they won't have any investors.
Both are simply promoting shareholder value in the best way they know how.
Note that fact and truth do not enter into this equation in any way.
This is not a clash... (Score:4, Insightful)
These points of view are not in opposition. They're just using different definitions of general intelligence.
Yann and Demis correctly point out that there is no apparent path with our current approach to self-awareness, proactive intelligence, or truly novel thinking. To curiosity.
Dario and Sam correctly point out that the models are already at least as intelligent as most people, are rapidly improving, and are giving humans superpowers. But they can't and won't be able to operate truly independently - _someone_ is steering and overseeing them.
The headline doesn't make sense - there's no clash here. They're just defining things differently.
Re:This is not a clash... (Score:5, Insightful)
> They're just using different definitions of general intelligence
True.
> Dario and Sam correctly point out that the models are already at least as intelligent as most people
But that's not true.
LLMs are kind of like idiot savants - great at some things, and piss-poor at others. Even in the things that they are great at, showing flashes of human-level or expert intelligence, they are at the same time deeply flawed in ways that humans of average intelligence are not, continuing to hallucinate, and not understanding when their regurgitated human reasoning patterns actually apply or not. They give advice without thinking through the consequences, then just say "my bad" and move on if you are knowledgeable enough to call them out.
There are also huge gaps in LLM capability that even the most stupid of humans don't have, such as on-the-job learning. If you show a stupid human how to do something simple enough times, they will eventually get it and memorize the skill (flipping burgers, or bagging groceries), but the LLM never will. It may learn "in context" one day, but the next day, and the next, will be Groundhog Day when you have to teach it all over again. For any reasonably skilled job that takes months or years to learn and master, this having to train it every day is a non-starter.
Altman's definition of AGI is something that can perform most "economically valuable jobs", such as his own used-car-salesman job, but we are still far from that. About the only job that an LLM could do today, where it would be a viable replacement for a human, would be a call center job where it is doing something highly repetitive and non-creative, working in a narrow domain that it could have been trained to master, where it does not need to learn anything, and where the consequences of messing up are fairly low (and chances of doing so maybe no worse than the humans it is replacing).
Re: (Score:2)
While LLMs aren't likely to be THE thing that finally cracks AGI, they are a crucial part of future research into it. We will need an interface, and language is OUR interface. That being said, a lot of your thinking and reasoning is flawed.
>they are at the same time deeply flawed in ways that humans of average intelligence are not
I think you are vastly overestimating "average" human intelligence.
>continuing to hallucinate, and not understanding when their regurgitated human reasoning patterns actually apply or not
Re:This is not a clash... (Score:4, Insightful)
"AI models would replace the work of all software developers within a year"
Unless there's a secret thing beyond the Claude Opus they are selling, they are nowhere near being able to claim this. This is undeniably a claim of human-level intelligence, and just yesterday Opus wasn't even able to understand how to properly invoke async functions in Python (confusing async for with await).
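For anyone who hasn't hit this, the distinction it stumbled on is simple: await consumes a coroutine's single result, while async for iterates an async generator. A minimal sketch (function names are mine, just for illustration):

```python
import asyncio

async def fetch_value():
    # A plain coroutine: call it, then `await` the single result.
    await asyncio.sleep(0)  # stand-in for real async I/O
    return 42

async def fetch_stream():
    # An async generator: consume it with `async for`, not `await`.
    for i in range(3):
        await asyncio.sleep(0)
        yield i

async def main():
    value = await fetch_value()                # await -> one result
    items = [i async for i in fetch_stream()]  # async for -> iterate
    return value, items

value, items = asyncio.run(main())
print(value, items)  # 42 [0, 1, 2]
```

Trying to `await fetch_stream()` raises a TypeError, since an async generator isn't awaitable; that's exactly the confusion described above.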
"we are already beginning to slip past human-level AGI"
Sam is claiming not only that it is as smart as humans, but that it is smarter.
"at least as intelligent as most people,"
It isn't, though. It can pull a lot of words from its corpus in confident and convincing-looking combinations. If you give it a search engine and ask it to summarize the results, it can look very up on current events (though it will still say some really stupid stuff). This has utility, but it's kind of like saying a calculator is 'more intelligent' than most humans because it can do math quickly.
Re:This is not a clash... (Score:4, Interesting)
Sam is doing something more insidious. He's describing humans as human-level AGI. That's planting into the zeitgeist the notion that there's no distinction between human beings and LLMs, only a matter of degree of capability. When enough sam-bots repeat that notion and use it in daily conversation, it becomes the default position that must be overcome.
It's like saying that current horses surpass human-level strength on people benchmarks. It implies the horse is a kind of person that can already beat them in pure strength, so if you wait a few more years you might find the next generation of horses opening restaurants and investing in rare-earth mining operations.
At the WEF? (Score:3)
How Close Human-Level Intelligence Really Is
Nowhere in sight.
Re: (Score:2)
Came here to say exactly this, but you beat me to it.
In other news... (Score:3)
Re: (Score:2)
When you use a metaphor, follow it up with a simple explanatory sentence. Your writing will be clearer, and your thinking less muddy.
We have to know how the brain works first (Score:2)
We won't have human-like intelligence until we can model the human brain with some degree of accuracy, and we don't know enough about the human brain to do that yet.
There may be other kinds of 'intelligence' that could serve. If they were discovered, it would most likely be through experimenting with artificial evolution, but I don't see that happening anytime soon.
Re: (Score:2)
We won't have human-like intelligence until we can model the human brain with some degree of accuracy, and we don't know enough about the human brain to do that yet.
This doesn't necessarily follow. Airplanes don't fly by flapping their wings. One doesn't need to understand how birds fly in order to make airplanes. And even before airplanes, humans made lighter-than-air balloons and airships well before we understood how birds fly. In that case, it turned out that making mechanical objects that fly the way birds do is pretty hard to do efficiently.
Re: (Score:2)
Airplanes don't fly by flapping their wings. One doesn't need to understand how birds fly in order to make airplanes.
One does have to understand the same basic principles behind how birds fly in order to make effective airplanes: they also use a curved wing, push against the air, and so on.
Re: (Score:3)
Re: (Score:1)
This is a bit nit-picky on my part, but I specifically said 'human like intelligence', not intelligence in general, whatever that might be. Regarding your analogy, I would say that airplane flight is not 'bird-like flight'.
Also, just to be even more nit-picky, analogies do not constitute proof.
Re: (Score:2)
Does it really matter? (Score:1)
At this point it's just semantics.
LLM tech at the current level is incredibly useful and it's not slowing down.
People like to obsess over whether the LLM "thinks" like a human, but then again, humans tend to see the world only as a human sees it.
Truth is, in many areas a good LLM can easily outperform a human.
I'm spending just hours on projects that before LLMs I would have spent weeks just learning syntax and finding obscure bugs.
Regardless of the nomenclature, LLMs are definitely a paradigm shift for humanity.
Re: (Score:2)
LLMs have utility, but not the sort of utility that Anthropic and OpenAI are claiming.
They have a number of executives *convinced* they are just a couple of months away from being able to prompt their way to the software they want and lay off anyone who could actually fix the problems. They've been a few months away from doing this since 2024. I saw one of these 'non-coder' articles just last month; the author showed off their result, which didn't do what he asked it to make and was glitchy.
Like all the other conferences in last 60 years (Score:1)
As someone who is ordered to use "Claude"... (Score:4, Insightful)
To me, LLMs are nice as a method of exploring documentation in a way that is fast but superficial, and Claude is OK for doing boring code-reformatting or test-case-writing stuff, but nothing much beyond that... at least when the quality and correctness of the result matter. Which also explains why so many pixel-pushers are happy with Claude: for them, correctness wasn't much of a concern even before the advent of LLMs.
What I find really scary is the speed at which people's brains atrophy when they habitually use LLMs; it seems no less dramatic than what happens when people stop walking and use a mobility scooter for every distance.
Re: (Score:3)
The atrophy is real. This is a HUGE issue. Vivienne Ming has written on and studied this quite a lot; she's worth a follow.
Re: (Score:2)
Having said that, on a lark a few nights ago, I asked it how I would implement biphasic stimulation for a cochlear implant, and it pointed me to H-bridges, which was as surprising as it was useful.
Intelligence != knowledge (Score:3)
Knowledge is an aspect of intelligence, but having it is not intelligence itself. And LLMs are "knowledgeable" in the sense that they have at their disposal vast datasets of human-compiled information. This is not in doubt.
In the sense that a machine can, through various algorithms, look up that information and through a statistical model, produce output that simulates understanding of the meaning of that knowledge, that can constitute "intelligent" behavior. Because its ability to retain and recall information is superior to humans both in speed and breadth, it can do things with that information that, in comparison, humans cannot do. Again, this is not controversial. Prior to LLMs, such capabilities have been true of many specialized domains--mathematical computation, chess, search engines. LLMs simply constitute a new model for information synthesis and retrieval.
Can these algorithms generate "new" truths (where by "new" we mean things that humans previously did not know)? Yes. But these truths are found by extensive analysis and synthesis of the knowledge present in the training data, through a computational process that is only in some ways more efficient than a human.
Does this behavior constitute "intelligence?" In some limited respects, yes, but it cannot be said to be anything like human intelligence. The latter is far more nuanced. Human intelligence is inextricably intertwined with understanding, emotion, creativity, imagination, passion, will, and desire. It is a disservice to ourselves and to that which we create to believe that LLMs could ever achieve this kind of intelligence. We may be able to get them to simulate aspects of it in some fashion, but as long as we do not understand the origin of consciousness, they will never be truly capable of human intelligence.
Whether we as a species should be pursuing the development of an artificial consciousness is a separate question.
Re: (Score:2)
Explicitly questioned, my DDG.ai has assured me that it cannot produce "new" truths. It can (I imagine) only repeat its training data, or multiplexes thereof, depending on the prompt. Or perhaps DDG.ai runs cheap LLMs.
I am ignoring the "trivial" case(s) of knowing how many "splines" dissect the 15th oak leaf fallen from the tree 30 yards north of your house.
What does "human level intelligence" mean? (Score:3)
People are largely idiots. They spend their time consuming media and trying to do selfish things for their enjoyment, while serving as wage slaves to corporations, where most of them do nothing of actual importance. AI hallucinations are nothing compared to humans' stupid beliefs. Even academia is filled with bad research and some outright fraud.
So it's not clear to me what is this "human level intelligence" that people want AI to aspire to.
Animal/human cognition versus artificial cognition (Score:2)
This is nascent thinking in need of further elaboration and development:
Evolution selectively adapts biological organisms to fit well into environments over time
This is a natural design process
Natural design has created some amazing biological mechanisms with capabilities that outperform artificial mechanisms when all tradeoffs are considered (including *ilities and efficiencies)
One of the things that natural design has created is a biological entity capable of artificial design
Artificial design uses
Funny, funny, nowhere (Score:2)
And not a drop to drink?
LLMs are intelligent (Score:2)
It all depends on your definition of intelligence. I use LLMs daily, for various tasks. They have obvious limitations, but used for what they are good at, wow.
My neighbor, Joe Sixpack, is also good at some things, and also has obvious limitations. Overall, honestly, the LLM is more useful.
How do you want to define intelligence? Knowledge - LLMs win. Ability to discuss abstract philosophy - LLMs win. Ability to solve a quartic equation (did this yesterday) - LLMs win. Apparent sentience - LLMs win. Abili
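On the quartic point: you don't actually need an LLM for that, and it's worth knowing the dumb-but-reliable alternative. A minimal sketch using numpy (the example polynomial is mine, not the parent's):

```python
import numpy as np

# x^4 - 5x^2 + 4 = 0 factors as (x^2 - 1)(x^2 - 4), so roots are +/-1, +/-2.
coeffs = [1, 0, -5, 0, 4]     # coefficients, highest degree first
roots = np.roots(coeffs)      # computed as companion-matrix eigenvalues
print(sorted(roots.real))     # approximately [-2.0, -1.0, 1.0, 2.0]
```

Unlike the LLM, this gives the same answer every time, and a wrong answer only when floating point does what floating point does.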
Nobody has brought up (Score:2)
On this post that the people saying we will have AGI in 2 years are people in C-suites. These people have a vested interest in keeping investment dollars flowing into their organizations. As soon as a few of them say we can't do AGI, most investment in AI research will cease. There is money in AI and the CEOs know it, but it's not going to come in the form of AGI. These people are smart enough to realize that they can't come up with AGI, but they can come up with a way to sell AGI and
This sentence right here: (Score:2)
The debate over AGI may be somewhat academic for many business leaders. The more pressing question, says Cognizant CEO Ravi Kumar, is whether companies can capture the enormous value that AI already offers.
I challenge someone to provide actual proof of this supposed enormous value. Thus far we've seen a gigantic amount of money slung at AI, for the ability to sort of summarize reports, to sometimes code things a little quicker than humans (but in ways that require humans to expend tremendous effort sorting out the chaff after the fact), and to create tremendous hallucinations, not only within the models themselves but within the C-suites of every company that makes more than a few thousand dollars a year
The quicker (Score:1)