
AI Luminaries Clash At Davos Over How Close Human-Level Intelligence Really Is (yahoo.com)

An anonymous reader shared this report from Fortune: The large language models (LLMs) that have captivated the world are not a path to human-level intelligence, two AI experts asserted in separate remarks at Davos. Demis Hassabis, the Nobel Prize-winning CEO of Google DeepMind, and the executive who leads the development of Google's Gemini models, said today's AI systems, as impressive as they are, are "nowhere near" human-level artificial general intelligence, or AGI. [Though the article notes that Hassabis later predicted a 50% chance AGI might be achieved within the decade.] Yann LeCun — an AI pioneer who won a Turing Award, computer science's most prestigious prize, for his work on neural networks — went further, saying that the LLMs that underpin all of the leading AI models will never be able to achieve humanlike intelligence and that a completely different approach is needed... ["The reason ... LLMs have been so successful is because language is easy," LeCun said later.]

Their views differ starkly from the position of top executives at Google's leading AI rivals, OpenAI and Anthropic, who assert that their AI models are about to rival human intelligence. Dario Amodei, the CEO of Anthropic, told an audience at Davos that AI models would replace the work of all software developers within a year and would reach "Nobel-level" scientific research in multiple fields within two years. He said 50% of white-collar jobs would disappear within five years. OpenAI CEO Sam Altman (who was not at Davos this year) has said we are already beginning to slip past human-level AGI toward "superintelligence," or AI that would be smarter than all humans combined...

The debate over AGI may be somewhat academic for many business leaders. The more pressing question, says Cognizant CEO Ravi Kumar, is whether companies can capture the enormous value that AI already offers. According to Cognizant research released ahead of Davos, current AI technology could unlock approximately $4.5 trillion in U.S. labor productivity — if businesses can implement it effectively.

  • by gweihir ( 88907 ) on Saturday January 24, 2026 @03:42PM (#65946854)

    Obviously, LLMs are not and cannot be a path to AGI. The thing is, dumb humans (the average) may be dumber than an LLM in some respects, but these people are not using General Intelligence either. Hence being able to perform at the level of an average human is in no way a sufficient benchmark for the presence of AGI.

    Also note that LLMs have no intelligence whatsoever. They are just statistical parrots. The illusion of intelligence comes from the actual real-world intelligence that went into their training data. They can, with low confidence, replicate a pale shadow of that and do some no-insight adaptation (hence hallucinations). Kind of like a picture of the Mona Lisa replicates the actual painting. But nobody sane would think the camera or the photo-printer are great artists.

    • Re: (Score:1, Troll)

      by fleeped ( 1945926 )

      I think you've become a bit unhinged with the whole AI thing. Since when are we defining "General Intelligence" in a way that it's something that the average person "doesn't use"? Says who? You? It makes you sound like an elitist dehumanizing prick. Don't make it a skill issue, as that's a losing battle. And talking about statistical parrots, you're also acting like one, as at every AI story you'll parrot the same viewpoint, no matter the data presented. Maybe you can use that General Intelligence of yours

      • by gweihir ( 88907 )

        Look at the definition of General Intelligence and then observe how many people can only use their mental skills in narrow areas and completely fail in some other areas. That type of skill is missing the "General" in General Intelligence.

        Incidentally, your statement nicely illustrates that you are one of those so limited, because you have no actual rational arguments.

    • by backslashdot ( 95548 ) on Saturday January 24, 2026 @04:32PM (#65946950)

      Exactly. A human doesn't need to be shown 1 million examples of a Coke bottle and "not Coke bottle" in order to recognize it. Furthermore, we don't require gigawatt data centers; we do inference and training with just 20 watts.

      We are at least 50 years from AGI, and even that assumes humanity puts a major sustained effort (10 times current amounts) into developing it. By AGI I mean a robot that can walk into any existing home, rewire it, and fix the plumbing or do kitchen remodeling. We are at least 15 years away from even a tele-operated robot that can do such a thing btw, mainly due to the hand dexterity requirement and also the autonomous safety brain needed (how does a tele-operated robot climb stairs while avoiding crashing into things?).

      • We are at least 50 years from AGI, and even that assumes humanity puts a major sustained effort

        I don't know how you can possibly estimate that with any degree of certainty.

        • by gweihir ( 88907 ) on Sunday January 25, 2026 @10:16AM (#65947858)

          That one is simple: Practical application of a tech is typically something like 30-50 years removed from an early lab demo showing feasibility. CS/IT thinks it is special and faster, but a look at actual developments shows it is not. Now, a first lab demo is usually something like 10-200 years removed from a credible theory of how it could be done.

          We do not have that "credible theory" for AGI at all. In fact we have no clue how some (certainly not all) humans can do General Intelligence. Hence no lab demo, and hence a lower bound of 40 years (10 years theory-to-demo plus 30 years demo-to-practice) if we find the theory tomorrow. Add how long people have been trying to find that theory with zero success, and 50 years is a reasonable lower bound. A more realistic view would be "ask again in 100 years" or "likely impossible to do".

          • That one is simple: Practical application of a tech is typically something like 30-50 years removed from an early lab demo showing feasibility.

            That seems like an unsubstantiated claim, but one worth researching.

            • The inverse claim is the one that bears the burden of proof.

              Technology doesn't spring into existence because of narrative convenience, not even if a lot of rich people have made plans relying on it.

          • That one is simple: Practical application of a tech is typically something like 30-50 years removed from an early lab demo showing feasibility. CS/IT thinks it is special and faster, but a look at actual developments shows it is not. Now, a first lab demo is usually something like 10-200 years removed from a credible theory of how it could be done.

            I'm not sure where you are getting the 30 to 50 year claim. For many major physical techs, the time is much shorter. The Wright Flyer was 1903; eight years later aircraft were already being used in war, in the Italo-Turkish War. In particular, incentive structures matter a lot here. If there's an incentive to get a tech out of a lab, it goes a lot faster. And if we had AGI, the incentives for investors and governments to get it out quickly would be pretty clear. And yes, for CS it often is much faster, since the

    • Hence being able to perform at the level of an average human is in no way a sufficient benchmark for the presence of AGI.

      Actually, I think that's more or less how AGI is often defined [wikipedia.org].

      • by gweihir ( 88907 )

        There are definitions of AGI that ignore what General Intelligence means, yes. These just do not make sense.

    • The thing is, dumb humans (the average) may be dumber than an LLM in some respects,

      I'm dumber than a calculator in some respects.

      • by gweihir ( 88907 )

        You are not. Given what a calculator "knows", you could do all that as well, easily. What you are is "slower" than a calculator, in that the number of simplistic steps it takes to get to a result would take you a lot longer to perform.

    • by dvice ( 6309704 )

      AGI would be something that can do pretty much everything humans as a group can do. So it could move a robot body as well as humans can, come up with theories in physics like Einstein, solve math problems, invent new painting styles like Picasso, or write a good and entertaining book. Most likely there is some limit, like doing 500 different things we can measure, just so we can test and verify it, but I think that most people would be happy with the results, and if not, we could just add more tests.

      Ev

      • by gweihir ( 88907 )

        Pretty much.

        The key thing is that General Intelligence is universal. It can be applied to any question and be successful in understanding, modelling, exploring and then solving it, only constrained by complexity of the solution space. "General" and "low IQ" is possible and the smallest form of AGI would be well below an average human in IQ, but wayyyyy above in universal applicability. The thing is, most humans can use what intelligence they have only in narrow spaces and are unable to use it generally. Hum

    • by hey! ( 33014 )

      I see this as a rich-get-richer scenario. Smart people, the ones who can outthink a statistical parrot, will be able to use its speed at processing and digesting massive quantities of data to improve their productivity. People who can't outthink the things will have to use them *credulously*, and thus become functionally dumber than ever.

  • marks thinking it can - won't someone think of the Tech Bros?
  • by liqu1d ( 4349325 ) on Saturday January 24, 2026 @03:49PM (#65946868)
    CEOs that benefit the most from selling the dream believe in the dream, at least at face value anyway.
  • The human brain uses a fraction of the energy consumed by the current methods of mimicking intelligence. How about we press these over-hyped companies to fix their horribly inefficient use of energy? They should be restricted from expansion until they reduce energy waste by a factor of 10 for the same prompt, then get back to us.
  • by marcle ( 1575627 ) on Saturday January 24, 2026 @04:00PM (#65946894)

    If AI can eliminate that much work (a big if), then the massive layoffs will tank the economy, and their stock will go down.

    • If AI can eliminate that much work (a big if), then the massive layoffs will tank the economy, and their stock will go down.

      They will have control of the deployment, which means they will be able to easily make themselves the richest of the rich by correctly choosing and shorting rival companies that are about to be obliterated by their own deployments. They will use this power not simply to enrich themselves but to become the richest of the richest, the very top of the 0.1%. They can use this wealth to insulate themselves as they slowly take control of the economy and in turn the government.

      The question you should be asking is i

      • That's the whole point you seem to be missing. If money isn't being circulated it's worthless, and there is no economy. At all. It's a self-killing thing. It. Won't. Happen.

        It doesn't matter who has the biggest numbers of worthless stuff. There will be no economy for them to spend it in. There will be no more production. Of anything. It's not even like the rich would have physical things, mostly it's all just numbers in a digital bank account. Numbers on paper can't buy anything if nothing is for sale. Comp

        • There will be no economy for them to spend it in. There will be no more production. Of anything.

          Incorrect. The government will still be spending money, because they will need stuff and can print more money, which is why the company that makes true AI will gain control of the government. As such, they will have people doing what machines have yet to be able to master.

          The outcome will be like that of the short story Manna.

  • The real takeaway here is the same as it was last year, the year before that, and the 54 years before that - any public figure that is on a tax payroll has no fucking business at the private party of the sociopaths of the world in Davos.

    The job of our elected representatives is at their public office dealing with the problems of their country.

    Anyone is welcome to travel to Davos or similar on their own dime when they are no longer holding a government office.

  • by OrangeTide ( 124937 ) on Saturday January 24, 2026 @04:11PM (#65946918) Homepage Journal

    LLMs are a big matrix of coefficients and filter formulas. It's an elaborate multi-dimensional version of a Markov chain.
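
    To make the analogy concrete, here is a toy word-level Markov chain in Python. It is purely illustrative (raw counts and a one-word context, where a real LLM uses learned weights over a long context), but the "sample the next token, append, repeat" loop has the same shape:

        import random
        from collections import defaultdict

        def train(text):
            # Record which words follow each word in the training text.
            counts = defaultdict(list)
            words = text.split()
            for cur, nxt in zip(words, words[1:]):
                counts[cur].append(nxt)
            return counts

        def generate(counts, start, n=10):
            # Repeatedly sample a successor of the last word emitted.
            out = [start]
            for _ in range(n):
                followers = counts.get(out[-1])
                if not followers:
                    break
                out.append(random.choice(followers))
            return " ".join(out)

        model = train("the cat sat on the mat and the cat slept on the mat")
        print(generate(model, "the"))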

  • They are on crack (Score:4, Interesting)

    by kschendel ( 644489 ) on Saturday January 24, 2026 @04:28PM (#65946942) Homepage

    Altman and Amodei are either morons, crazy, or (most likely) doing the Big Lie thing. There's not the slightest bit of actual intelligence anywhere near today's LLMs, and I don't see it happening in my grandchildren's lifetimes.

    They are Language Models, not intelligent "agents".

    • I don't know anything about Amodei, but to me Altman has always come across as skating close to, and sometimes crossing, the genius-madman line.

      • by Junta ( 36770 )

        Altman strikes me more as one of those 'genius-adjacent' types, not a genius himself but someone who knows how to exploit and market others' genius.

      • Altman may be a madman, but he's in no way a genius.
      • I don't know Sam personally, but I have no doubt he's a genius-level marketer, just not a genius-level technologist. The guy sells visions better than Nostradamus did. You don't raise $64B at a $500B valuation on $18B in revenue while losing a multiple of that unless you are a certified guru-level Jedi master storyteller. The guy definitely warps reality with his narratives, and I give him a well-earned Marketing Genius stamp for that. So extremely smart, but not in an Albert Einstein way, more in a P.T

    • Re:They are on crack (Score:5, Interesting)

      by SpinyNorman ( 33776 ) on Saturday January 24, 2026 @04:50PM (#65946982)

      I wouldn't describe them as morons. Altman is more of a shifty used-car-salesman, compulsive-liar type, maybe of average intelligence. Amodei is quite smart, but the money/power lust seems to have got to him, and in the last year he has jumped the shark and will now seemingly say anything and everything he can to hype AI.

      There was an interesting interview of Hassabis at Davos by Alex Kantrowitz (quite underrated as an interviewer - asks deceptively simple questions, and gets the guests to talk), where Hassabis, obviously no fan of Altman, calls him out by asking why ChatGPT needs to add advertising (coming soon) if OpenAI really is on the cusp of massively valuable, world-altering AGI.

      Hassabis is the only sane voice in the entire AI field.

    • I agree with you, LLMs are not "intelligent"; they mimic intelligence, and that's a fundamental limitation of autoregressive generation. The more interesting approach to me is diffusion models, particularly video diffusion models. I don't have a Nobel prize, but to me, those exhibit patterns that I'd be more willing to associate with AGI, at least from the perspective of understanding causality and the world it's operating in. The problem is diffusion models are very slow and don't play the quantization ga
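
      The contrast being drawn here, as a toy sketch (illustrative loops only, nothing like real model code): autoregressive generation commits to one token at a time, left to right, while diffusion starts from noise and refines the whole sample over many steps:

        import numpy as np

        rng = np.random.default_rng(0)

        def autoregressive(steps=8):
            seq = []
            for _ in range(steps):
                nxt = rng.integers(0, 10)   # stand-in for "sample the next token"
                seq.append(int(nxt))        # earlier choices are frozen forever
            return seq

        def diffusion(size=8, steps=20):
            x = rng.normal(size=size)       # start from pure noise
            target = np.arange(size)        # stand-in for the learned data distribution
            for _ in range(steps):
                x = x + 0.2 * (target - x)  # each step refines every position at once
            return np.round(x)

        print(autoregressive())
        print(diffusion())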

  • Grifters are arguing about how far along into the grift they are.
  • by taustin ( 171655 ) on Saturday January 24, 2026 @04:48PM (#65946978) Homepage Journal

    Follow the money.

    Google, whose main business is, and has always been, advertising, views AI of any kind as a tool for that business. They have to adopt a realistic view of it, lest they run into trouble with shareholders.

    AI companies, like Anthropic and OpenAI, do nothing but AI, and they have to view it as the be-all and end-all of human accomplishment, or they won't have any investors.

    Both are simply promoting shareholder value in the best way they know how.

    Note that fact and truth do not enter into this equation in any way.

  • by ianbnet ( 214952 ) on Saturday January 24, 2026 @04:53PM (#65946990)

    These points of view are not in opposition. They're just using different definitions of general intelligence.

    Yann and Demis correctly point out that there is no apparent path with our current approach to self-awareness, proactive intelligence, or truly novel thinking. To curiosity.

    Dario and Sam correctly point out that the models are already at least as intelligent as most people, are rapidly improving, and are giving humans superpowers. But they can't and won't be able to operate truly independently - _someone_ is steering and overseeing them.

    The headline doesn't make sense - there's no clash here. They're just defining things differently.

    • by SpinyNorman ( 33776 ) on Saturday January 24, 2026 @05:33PM (#65947060)

      > They're just using different definitions of general intelligence

      True.

      > Dario and Sam correctly point out that the models are already at least as intelligent as most people

      But that's not true.

      LLMs are kind of like idiot savants - great at some things, and piss-poor at others. Even in the things that they are great at, showing flashes of human-level or expert intelligence, they are at the same time deeply flawed in ways that humans of average intelligence are not, continuing to hallucinate, and not understanding when their regurgitated human reasoning patterns actually apply or not. They give advice without thinking through the consequences, then just say "my bad" and move on if you are knowledgeable enough to call them out.

      There are also huge gaps in LLM capability that even the most stupid of humans don't have, such as on-the-job learning. If you show a stupid human how to do something simple, enough times, they will eventually get it and memorize the skill (flipping burgers, or bagging groceries), but the LLM never will. It may learn "in context" one day, but the next day, and the next, will be Groundhog Day, when you have to teach it all over again. For any reasonably skilled job that takes months or years to learn and master, having to train it every day is a non-starter.

      Altman's definition of AGI is something that can perform most "economically valuable jobs", such as his own used-car salesman job, but we are still far from that. About the only job that an LLM could do today, where it would be a viable replacement for a human, would be a call center job where it is doing something highly repetitive and non-creative, working in a narrow domain that it could have been trained to master, where it does not need to learn anything, and where the consequences of messing up are fairly low (and the chances of doing so maybe no worse than the humans it is replacing).

      • While LLMs aren't likely to be THE thing to finally crack AGI, they are a crucial part of future research into it. We will need an interface, and language is OUR interface. That being said, a lot of your thinking and reasoning is flawed.

        >they are at the same time deeply flawed in ways that humans of average intelligence are not

        I think you are vastly overestimating "average" human intelligence.

        >continuing to hallucinate, and not understanding when their regurgitated human reasoning patterns actually a

    • by Junta ( 36770 ) on Saturday January 24, 2026 @05:54PM (#65947092)

      "AI models would replace the work of all software developers within a year"

      Unless there's a secret thing beyond the Claude Opus they are selling, they are nowhere near being able to claim this. This is undeniably a claim of human-level intelligence, and just yesterday Opus wasn't able to understand how to properly invoke async functions in Python (confusing async for with await).
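
      For reference, the distinction it tripped over, as a minimal sketch: await suspends on a single awaitable, while async for iterates over an async generator (the function names here are made up for illustration):

        import asyncio

        async def fetch_one():
            await asyncio.sleep(0.1)        # await: suspend on one awaitable
            return 42

        async def fetch_many():
            for i in range(3):
                await asyncio.sleep(0.1)
                yield i                     # yield makes this an async generator

        async def main():
            print(await fetch_one())        # a coroutine is awaited
            async for x in fetch_many():    # an async generator is iterated
                print(x)

        asyncio.run(main())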

      "we are already beginning to slip past human-level AGI"

      Sam is claiming not only that it is as smart as humans, but that it is smarter.

      "at least as intelligent as most people,"

      It isn't though. It can pull a lot of words from its corpus in confident and convincing-looking combinations. If you give it a search engine and ask it to summarize the results, it can look very up on current events (though it still will say some really stupid stuff). This has utility, but it's kind of like saying a calculator is 'more intelligent' than most humans because it can do math quickly.

      • by martin-boundary ( 547041 ) on Saturday January 24, 2026 @10:42PM (#65947416)

        Sam is doing something more insidious. He's describing humans as human-level AGI. That's planting into the zeitgeist the notion that there's no distinction between human beings and LLMs, only a matter of degree of capability. When enough sam-bots repeat that notion and use it in daily conversation, it becomes the default position that must be overcome.

        It's like saying that current horses surpass human level strength on people benchmarks. It implies the horse is a kind of person and can already beat them in pure strength, so if you wait a few more years you might find the next generation of horses will be opening restaurants and investing in rare earth mining operations.

  • by PPH ( 736903 ) on Saturday January 24, 2026 @05:12PM (#65947016)

    How Close Human-Level Intelligence Really Is

    Nowhere in sight.

  • by Teckla ( 630646 ) on Saturday January 24, 2026 @05:19PM (#65947026)
    The airplanes that have captivated the world are not a path to avian flight, two bird experts asserted in separate remarks at the National Audubon Society.
    • You are trying to use a metaphor to make a point, but it is not at all clear what your point actually is.

      When you use a metaphor, follow it up with a simple explanatory sentence. Your writing will be more clear, and your thinking less muddy.
  • We won't have human-like intelligence until we can model the human brain with some degree of accuracy, and we don't know enough about the human brain to do that yet.

    There may be other kinds of 'intelligence' that can serve. If they were discovered it would most likely be from experimenting with artificial evolution, but I don't see that happening anytime soon.

    • We won't have human-like intelligence until we can model the human brain with some degree of accuracy, and we don't know enough about the human brain to do that yet.

      This doesn't necessarily follow. Airplanes don't fly by flapping their wings. One doesn't need to understand how birds fly in order to make airplanes. And even before airplanes, humans made lighter-than-air balloons and airships well before we understood how birds fly. In that case, it turned out that making mechanical objects that fly the way birds fly is pretty tough to do efficiently.

      • Airplanes don't fly by flapping their wings. One doesn't need to understand how birds fly in order to make airplanes.

        One does have to understand the same basic principles behind how birds can fly in order to make effective airplanes. They also use a curved wing, push against the air and so on.

        • The actual history of how we figured out flight only partially involves understanding birds. Otto Lilienthal https://en.wikipedia.org/wiki/Otto_Lilienthal [wikipedia.org] built the first gliders in the late 19th century, and did base those in large part on how birds glide. But airplanes took how birds glide without flapping their wings and used a completely different approach to move things forward. The Wright flyers were inspired in part by Lilienthal's work, but their actual understanding of bird flight
      • by shoor ( 33382 )

        This is a bit nit-picky on my part, but I specifically said 'human like intelligence', not intelligence in general, whatever that might be. Regarding your analogy, I would say that airplane flight is not 'bird-like flight'.

        Also, just to be even more nit-picky, analogies do not constitute proof.

        • To nitpick your nitpick, the analogy is not intended as proof. The point is that we have historical examples (quite a few actually, that's just one of them) where we made a technology that duplicated some aspect of something in nature without understanding the natural thing well at all. It may require understanding the human brain, it might not. But the point is that the track record for such claims is iffy at best.
  • At this point it's just semantics.
    LLM tech at the current level is incredibly useful and it's not slowing down.
    People like to obsess over whether the LLM "thinks" like a human, but then again, humans tend to see the world only as a human sees it.
    Truth is, in many areas a good LLM can easily outperform a human.
    I'm spending just hours on projects on which, before LLMs, I would have spent weeks just learning syntax and finding obscure bugs.
    Regardless of the nomenclature, LLMs are definitely a paradigm shift for humanity.

    • by Junta ( 36770 )

      LLMs have utility, but they aren't delivering the sort of utility that Anthropic and OpenAI are claiming.

      They have a number of executives *convinced* they are just a couple of months away from being able to just prompt their way to the software they want and lay off anyone who could actually fix the problems. They've been a few months away from doing this since 2024. I saw one of these 'non-coder' articles just last month, showing off a result... which didn't do what he asked it to make, and was glitchy

  • So some argued we'll have [insert type of AI software here] in 5-10 years? While others argued they are being overly optimistic and it will take much longer?
  • by ffkom ( 3519199 ) on Saturday January 24, 2026 @06:36PM (#65947146)
    ... much more than I would like to (by my current employer), I cannot see a path from LLM-based "Claude" to "human level intelligence", either. The experience is more like working with a hyperactive child that has read all the literature, but suffers from ADHD and Alzheimer's at the same time, and really is not able to follow through with even simple instructions, making the same stupid mistakes time and again.

    To me, LLMs are nice as a method of exploring documentation in a way that is fast but superficial, and Claude is OK for doing boring code re-formatting or test-case-writing stuff, but nothing much beyond that... at least when the quality and correctness of the result matters. Which also explains why so many pixel-pushers are happy with Claude - for them, correctness wasn't much of a concern even before the advent of LLMs.

    What I find really scary is the speed at which people atrophy their brains when habitually using LLMs, this seems to be no less dramatic than what happens when people stop walking and use a mobility scooter for every distance.
    • by ianbnet ( 214952 )

      The atrophy is real. This is a HUGE issue. Vivienne Ming has written on and studied this quite a lot - she's worth a follow

    • I use Llama - FB's LLM - a few times a day, but exclusively for "Explain how Euler's equation is useful in signal processing" type questions. I asked it a simple coding query once, and it came back worse than if I hadn't, so I stopped trying there. Besides, I like doing those things myself.

      Having said that, on a lark a few nights ago, I asked it how I would implement biphasic stimulation for a cochlear implant, and it pointed me to H-bridges, which was as surprising as it was useful,
  • by wickerprints ( 1094741 ) on Saturday January 24, 2026 @06:55PM (#65947176)

    Knowledge is an aspect of intelligence, but having it is not intelligence itself. And LLMs are "knowledgeable" in the sense that they have at their disposal vast datasets of human-compiled information. This is not in doubt.

    In the sense that a machine can, through various algorithms, look up that information and, through a statistical model, produce output that simulates understanding of the meaning of that knowledge, that can constitute "intelligent" behavior. Because its ability to retain and recall information is superior to humans' in both speed and breadth, it can do things with that information that, in comparison, humans cannot do. Again, this is not controversial. Prior to LLMs, such capabilities have been true of many specialized domains--mathematical computation, chess, search engines. LLMs simply constitute a new model for information synthesis and retrieval.

    Can these algorithms generate "new" truths (where by "new" we mean things that humans previously did not know)? Yes. But these truths are found by extensive analysis and synthesis of the knowledge present in the training data, through a computational process that is only in some ways more efficient than a human.

    Does this behavior constitute "intelligence"? In some limited respects, yes, but it cannot be said that it is anything like human intelligence. The latter is far more nuanced. Human intelligence is inextricably intertwined with understanding, emotion, creativity, imagination, passion, will, desire. It is a disservice to ourselves and to that which we create to believe that LLMs could ever achieve this kind of intelligence. We may be able to get it to simulate aspects of it in some fashion, but as long as we do not understand the origin of consciousness, it will never be truly capable of human intelligence.

    Whether we as a species should be pursuing the development of an artificial consciousness is a separate question.

    • "Can these algorithms generate "new" truths (where by "new" we mean things that humans previously did not know"

      Explicitly questioned, I have been assured by my DDG.ai that it cannot produce "new" truths. It can ( I imagine ) only repeat its training data or multiplexes thereof; depends on the prompt. Or perhaps DDG.ai runs cheap LLMs.

      I am ignoring the "trivial" case(s) of knowing how many "splines" dissect the 15th oak leaf fallen from the tree 30 yards north of your house.
  • by ET3D ( 1169851 ) on Sunday January 25, 2026 @10:13AM (#65947856)

    People are largely idiots. They spend their time consuming media and trying to do selfish things for their enjoyment, while serving as wage slaves to corporations, where most of them do nothing of actual importance. AI hallucinations are nothing compared to humans' stupid beliefs. Even academia is filled with bad research and some outright fraud.

    So it's not clear to me what this "human-level intelligence" that people want AI to aspire to actually is.

  • This is nascent thinking in need of further elaboration and development:

    Evolution selectively adapts biological organisms to fit well into environments over time
    This is a natural design process
    Natural design has created some amazing biological mechanisms with capabilities that outperform artificial mechanisms when all tradeoffs are considered (including *ilities and efficiencies)
    One of the things that natural design has created is a biological entity capable of artificial design
    Artificial design uses

  • And not a drop to drink?

  • It all depends on your definition of intelligence. I use LLMs daily, for various tasks. They have obvious limitations, but used for what they are good at, wow.

    My neighbor, Joe Sixpack, is also good at some things, and also has obvious limitations. Overall, honestly, the LLM is more useful.

    How do you want to define intelligence? Knowledge - LLMs win. Ability to discuss abstract philosophy - LLMs win. Ability to solve a quartic equation (did this yesterday) - LLMs win. Apparent sentience - LLMs win. Abili

  • Note, on this post, that the people saying we will have AGI in 2 years are people in C-suites. These people have a vested interest in keeping investment dollars flowing into their organizations. As soon as you have a few of them saying we can't do AGI, most investment in AI research will cease. There is money in AI and the CEOs know it, but it's not going to come in the form of AGI. These people are smart enough to realize that they can't come up with the AGI but they can come up with the way to sell AGI and

  • The debate over AGI may be somewhat academic for many business leaders. The more pressing question, says Cognizant CEO Ravi Kumar, is whether companies can capture the enormous value that AI already offers.

    I challenge someone to provide actual proof of this supposed enormous value. Thus far we've seen a gigantic amount of money slung at AI, for it to be able to sort of summarize reports, sometimes code things a little quicker than humans (but in ways that require humans to expend tremendous effort sorting out the chaff after the fact), and create tremendous hallucinations not only within itself, but within the C-suites of every company that makes more than a few thousand dollars a year

  • You realise that we already HAVE superintelligence, called a human being, the better. You're chasing the dragon, and it doesn't end well for you. But of course it doesn't matter to you, because money, right? Wrong.
