Follow Slashdot stories on Twitter

 



Forgot your password?
typodupeerror
×
AI Technology

AI 'Bubble' Will Burst 99% of Players, Says Baidu CEO (theregister.com) 48

Baidu CEO Robin Li has proclaimed that hallucinations produced by large language models are no longer a problem, and predicted a massive wipeout of AI startups when the "bubble" bursts. From a report: "The most significant change we're seeing over the past 18 to 20 months is the accuracy of those answers from the large language models," gushed the CEO at last week's Harvard Business Review Future of Business Conference. "I think over the past 18 months, that problem has pretty much been solved â" meaning when you talk to a chatbot, a frontier model-based chatbot, you can basically trust the answer," he added.

Li also described the AI sector as in an "inevitable bubble," similar to the dot-com bubble in the '90s. "Probably one percent of the companies will stand out and become huge and will create a lot of value or will create tremendous value for the people, for the society. And I think we are just going through this kind of process," stated Li. The CEO also guesstimated it will be another 10 to 30 years before human jobs are displaced by the technology. "Companies, organizations, governments and ordinary people all need to prepare for that kind of paradigm shift," he warned.

AI 'Bubble' Will Burst 99% of Players, Says Baidu CEO

Comments Filter:
  • by HBI ( 10338492 ) on Monday October 21, 2024 @02:52PM (#64881861)

    If you constrain the answers from the LLM, you can avoid the hallucinations. Of course, you're creating a jury-rigged system of input control which requires constant maintenance. Whether you've also constrained the usefulness of the answers is debatable.

    As far as 99% of the players going bust, that is easy to predict. Even Gartner essentially says that with their hype cycle.

    • by Hadlock ( 143607 )

      The current solution seems to be to have a second AI evaluate something like, "is this a valid request?" and then if true, pass it along

      • In other words, to understand the Ultimate Answer, you need to produce the Ultimate Question which requires much more sophisticated AI.
      • Why not have 3 AIs that must be in agreement? We can call them Melchior, Balthasar, and Caspar.

        • by GoTeam ( 5042081 )

          Why not have 3 AIs that must be in agreement? We can call them Melchior, Balthasar, and Caspar.

          Because they rarely ever report back to the king... I mean requestor...

      • by taustin ( 171655 )

        That amounts to "training an AI on AI generated sources," which has been pretty conclusively proven to provide garbage output.

      • by HiThere ( 15173 )

        Well, that's pretty much guaranteed to be a step towards getting the correct answer. It's one of the steps people follow. If you watch yourself thinking, first you generate a guess, and then you evaluate whether you think it's worth refining. And AFAIKT, the original guess if pretty much "what have I seen in this context before", so that's basically the first step of an LLM response. Now they need a step that looks as the answer and says "if this were true, what would it imply, and can I believe that?".

      • Sounds like a GAN would be perfect for this type of stuff.

    • If you constrain the answers from the LLM, you can avoid the hallucinations.

      What does that mean, "constrain the answers"? Constrain them to what?

      • by HBI ( 10338492 )

        There have been some interesting articles about this topic. Cross-referencing multiple LLMs is one concept. Another, more common one is based on pure input control, disallowing certain prompts. All have the net effect of limiting the answers given, aka output.

        Your opinion of this and its efficacy for your use case is intensely personal.

        • There have been some interesting articles about this topic. Cross-referencing multiple LLMs is one concept.

          i.e., you're saying to have one LLM filter the results of the first one, throwing out the bad answers? Not sure if I would have labeled that "constraint," but ok, sounds like an approach, as long as the two models have different training.

          Another, more common one is based on pure input control, disallowing certain prompts.

          How do you decide what inputs to disallow?

          All have the net effect of limiting the answers given, aka output. Your opinion of this and its efficacy for your use case is intensely personal.

          not sure what this means, but yes, I expect everybody's use case is going to be different.

      • Look up retrieval augmented generation.

        Basically you add specific context to your system and it saves it off. That is then used to constrain what is generated.

        Obviously if you provide crap context, youâ(TM)ll still get crap out put.

        Notebooklm from Google is a good example of this that you can use, without needing to roll your own solution

    • As far as 99% of the players going bust, that is easy to predict. Even Gartner essentially says that with their hype cycle.

      Just remember, during a gold rush, invest in shovels!

      • by AvitarX ( 172628 )

        I suspect it's already too late to invest in Nvidia.

      • by HiThere ( 15173 )

        Actually, during the California gold rush, the folks who profited sold groceries or clothes. Some sold real estate, but that was chancy. Perhaps it was different in the Klondike, but I wouldn't bet on it. If you're wealthy enough, invest in railroads, and lobby for a government subsidy.

        This doesn't yield as straightforwards a investment plan, but the one it does yield is probably sounder.

  • meaning when you talk to a chatbot, a frontier model-based chatbot, you can basically trust the answer

  • "I think over the past 18 months, that problem has pretty much been solved" meaning when you talk to a chatbot, a frontier model-based chatbot, you can basically trust the answer,"

    This is very silly, but it doesn't look like he actually believes that:

    The CEO also guesstimated it will be another 10 to 30 years before human jobs are displaced by the technology.

    It looks like the CEO has already been replaced by a chatbot...

    • by HiThere ( 15173 )

      Well, his comment that it will be decades before it replaces human jobs is clearly so wrong that he must know it's wrong. It's already replacing some jobs. Just today I read about a company that's planning (well, in the process of, but process not complete) to have a chatbot do the first round of handling customer requests for a refund (with limits on the size of the refund the chatbot would be authorized to grant.)

      • by narcc ( 412956 )

        well, in the process of, but process not complete

        We've seen quite a few of those stories. They all end the same way.

  • by Targon ( 17348 ) on Monday October 21, 2024 @03:07PM (#64881923)

    People don't seem to have an understanding between a boom and a bubble. A boom will be when there is a surge in companies getting into any given area. A bubble is when you have USELESS companies jumping into that sector, they don't have product, they don't have expertise, but they have venture capital money. When there is a large number of useless companies jumping into a booming sector, that becomes a bubble that will eventually pop.

    Hype...yea, let's take an area that has the potential for growth, and generate so much hype that useless companies start to pop up just to take advantage of the boom, and then, 2-3 years later, they collapse.

  • by RogueWarrior65 ( 678876 ) on Monday October 21, 2024 @03:10PM (#64881937)

    In the dotcom era, anyone could build a website and startups blew their funding on expensive offices, Herman-Miller chairs, Silicon Graphics workstations, and a powered paragliders (yes I saw all of this at a bankruptcy auction in Santa Monica). None of that stuff was required to build a product. LLMs require a lot of expensive infrastructure and you need the proverbial killer-app to pitch to investors who are a little bit smarter than they were 20+ years ago. That's going to increase the signal-to-noise ratio.

    • I wanted an SGI but all they'd give me was an Indy. At least it had the Nintendo 64 dev board installed in it.

  • When faced with a so-called solution, you really need to focus on what problem it's solving. I've seen generative AI that can create all kinds of clipart or thumbnail images which are good enough for a business presentation or a low-production-value YouTube video. That's solving a real problem that people have, and I see evidence of people using it (and digital artists looking to change careers). On the other hand, where are the legions of people who are using LLMs to solve real-world problems? High sch
    • The next generation will do all the useful things. We are on the next, you say? I mean the next next one of course. Now hand me that briefcase of VC cash.

    • I'm a huge AI skeptic. However...

      I have found uses for it. My family and I go on relatively long vacations to places we've never been, and the research to plan for it can sometimes outstrip the time we actually are on the vacation. ChatGPT does a great job of condensing a rather large amount of information into a condensed, reasonably well organized narrative that cuts down that research tremendously.

      I've also used it for creating marketing material at work and designing entire D&D campaigns.

      • These examples are good, and they have one thing in common. You either do not worry about the accuracy of the output, or you have enough expertise to easily correct and make use of it despite the errors.

        This is the huge issue with text generators. They're really good at assembling something which will look correct (if often a bit verbose) but which has subtle, and sometimes gross, faults.

        And this is the main issue with it being used to create actionable tasks, especially when that is automated and the tasks

    • You know when your boss asks you to put together a bullshit presentation that is a waste of everyone's time? LLMs are great for that!
      • Bingo. The problem is that there are way too many people employed creating B.S. presentations and doing other things that glorify their managers' existence. Among them are plenty who can't just switch to a trade or healthcare job due to physical limitations. Though I may not like it, I'd much rather see companies continue paying people to do B.S. jobs than to see them on welfare.

      • by RobinH ( 124750 )
        I'm well aware of the fact that there's a middle-management layer in corporations where this kind of waste happens, but it's important to realize that the money to pay these people comes out of real productivity (somewhere) in the organization. In the case of something like an automobile factory it's the people (or robots) attaching components onto the frame, or at the foundry where we need to use massive amounts of energy to heat a batch of steel to the right temperature, or in the case of Starbuck's it's
    • Seriously, if LLMs were as valuable a the hype claimed, you'd already be seeing a profound impact, and we're simply not seeing it.

      It depends where you are looking.

      If you have a Microsoft Copilot enterprise license, Microsoft Dynamics 365 (their flagship ERP system) and the extensive setup needed, you can do some very nice natural language queries of the database that would be a pain otherwise:

      How many jeans in style 501 that are black and size 35 can I have delivered to 123 Anystreet, SomeTown, SomeState by Friday?

      We have customers using that right now; their customer support call centers can use these natural language queries whils

    • by gweihir ( 88907 )

      I found two uses so far: Better lies ("how do I say xyz in a positive way") and better search, but with very strong limitations as asking for references often fails or produces very little. I had one instance of something I was searching but did not know the term for. ChatCPT provided several candidates for the search term and conventional DuckDuckGo got me what I was looking for. That is basically it. Not that impressive, more an incremental improvement. Useful, yes, but not a game-changer.

      To be sure, this

    • by HiThere ( 15173 )

      Actually I've been finding it quite useful for picking up how to use wxpython. Ask about a control and it pops up an example of how to use it. Basically this is glorified search, as the example is out there, but it's also extremely useful.
      OTOH, I wouldn't ask it for an answer that wasn't just standard. But it's great for an example of "this is how you specify a checkbox control", where at the wxpython site you need to already know the exact name of the widget before you can usefully search.

  • by VeryFluffyBunny ( 5037285 ) on Monday October 21, 2024 @03:22PM (#64881973)
    He read about it in a magazine once. Apparently, people react positively to the term so he's going to use it.
  • by zkiwi34 ( 974563 ) on Monday October 21, 2024 @03:35PM (#64882025)
    Was that his prediction, or an AI prediction?
    • by HiThere ( 15173 )

      Not really. It's a PR emission of "this is what people want to hear that will reassure both my investors and the common folk". I doubt that he's stupid enough to believe it. For example, AI is already taking some people's jobs, despite what he said about not for decades yet.

  • What usually happens is something new and hot comes out and everyone things it will solve all society's ill and and bring world peace. So everyone jumps on board.
    After a while, when it becomes clear to most what this new thing is actually capable of and where it is actually useful, then most people jump off it onto the next new shiny thing.
    examples include bluetooth, agile, self-driving cars, EVs
  • I wonder what happens when you string together AI-bots to play telephone? Do the responses converge to a single response or do they gyrate wildly?

  • Hallucinations are alive and well. So is overlooking of critical details. You just need to ask something that you could not have readily looked up in Wikipedia.

    I do agree to 10-30 years before significant job loss. I do expect specialized models will make that likely. It will mostly affect "bullshit" jobs, but there are many of those.

    • by HiThere ( 15173 )

      Depends on what you mean by "significant". If it's your job being automated today, it's already significant, and for some people it's already their job. More frequently it will be a modification of the job so that fewer people need to be hired. For a lot of things, you don't really need to understand what you're doing if you can manipulate the symbols properly.

      For one example, I expect a real drop in the number of employed paralegals. And that's often the first job a lawyer get at a firm. It doesn't re

      • by gweihir ( 88907 )

        Personally, I expect most current job losses will get rolled back as LLMs cannot perform. In a mindless hype like this one it always takes a few years for the adopters to find out they have screwed themselves.

  • The technology is that good but it won't start displacing jobs for another 10-30 years? I guess he doesn't want to cause panic but imagine how powerful the tech will have become in another five years? AI is advancing ata geometric rate, we can probably expect self-awarenesss by August next year!
    • by HiThere ( 15173 )

      FWIW, I'm still predicting AGI by 2035, plus or minus 5 years. And I'm not talking about "godlike AI", merely one that is unrestricted in it's learning capabilities. I expect it to (probably) develop out of an AI that's used as a human interface for controlling some kind of robot body. I see controlling the body as necessary to allow it to understand (as opposed to just talking about) physical reality.

  • He should lay off that pipe.

Center meeting at 4pm in 2C-543.

Working...