AI 'Bubble' Will Burst 99% of Players, Says Baidu CEO (theregister.com) 75

Baidu CEO Robin Li has proclaimed that hallucinations produced by large language models are no longer a problem, and predicted a massive wipeout of AI startups when the "bubble" bursts. From a report: "The most significant change we're seeing over the past 18 to 20 months is the accuracy of those answers from the large language models," gushed the CEO at last week's Harvard Business Review Future of Business Conference. "I think over the past 18 months, that problem has pretty much been solved -- meaning when you talk to a chatbot, a frontier model-based chatbot, you can basically trust the answer," he added.

Li also described the AI sector as in an "inevitable bubble," similar to the dot-com bubble in the '90s. "Probably one percent of the companies will stand out and become huge and will create a lot of value or will create tremendous value for the people, for the society. And I think we are just going through this kind of process," stated Li. The CEO also guesstimated it will be another 10 to 30 years before human jobs are displaced by the technology. "Companies, organizations, governments and ordinary people all need to prepare for that kind of paradigm shift," he warned.

  • by HBI ( 10338492 ) on Monday October 21, 2024 @01:52PM (#64881861)

    If you constrain the answers from the LLM, you can avoid the hallucinations. Of course, you're creating a jury-rigged system of input control which requires constant maintenance. Whether you've also constrained the usefulness of the answers is debatable.

    As far as 99% of the players going bust, that is easy to predict. Even Gartner essentially says that with their hype cycle.

    • by Hadlock ( 143607 )

      The current solution seems to be to have a second AI evaluate something like, "is this a valid request?" and then, if true, pass it along.
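      A minimal sketch of that judge pattern (the `toy_llm` function is a hypothetical stand-in for a real model call, and the validation rule is invented purely for illustration):

```python
# Two-model "judge" sketch: one call validates the request,
# and a second call answers it only if the verdict is YES.

def toy_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real chat-completion call.
    if prompt.startswith("VALIDATE:"):
        return "YES" if prompt.rstrip().endswith("?") else "NO"
    return f"answer to: {prompt}"

def answer_with_judge(request: str, llm=toy_llm) -> str:
    verdict = llm(f"VALIDATE: {request}")
    if verdict.strip().upper().startswith("YES"):
        return llm(request)
    return "request rejected by validator"
```

      In a real deployment both calls would hit actual models (ideally with different training, per the discussion below), and the validator prompt itself becomes another thing to maintain.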

      • In other words, to understand the Ultimate Answer, you need to produce the Ultimate Question which requires much more sophisticated AI.
      • Why not have 3 AIs that must be in agreement? We can call them Melchior, Balthasar, and Caspar.

        • by GoTeam ( 5042081 )

          Why not have 3 AIs that must be in agreement? We can call them Melchior, Balthasar, and Caspar.

          Because they rarely ever report back to the king... I mean requestor...

      • by taustin ( 171655 )

        That amounts to "training an AI on AI generated sources," which has been pretty conclusively proven to provide garbage output.

        • I'm not sure how filtering input to receive good output has anything to do with the completely separate training step, which happens wholly before the "using the model" step.

        • by Rei ( 128717 )

          This is a common internet myth (including some misunderstanding of papers). The reality is that synthetic data is a growing portion of models with every new generation, and the models just keep getting better. What it's not is "random synthetic data".

      • by HiThere ( 15173 )

        Well, that's pretty much guaranteed to be a step toward getting the correct answer. It's one of the steps people follow. If you watch yourself thinking, first you generate a guess, and then you evaluate whether you think it's worth refining. And AFAICT, the original guess is pretty much "what have I seen in this context before?", so that's basically the first step of an LLM response. Now they need a step that looks at the answer and asks "if this were true, what would it imply, and can I believe that?".

      • Sounds like a GAN would be perfect for this type of stuff.

    • If you constrain the answers from the LLM, you can avoid the hallucinations.

      What does that mean, "constrain the answers"? Constrain them to what?

      • by HBI ( 10338492 )

        There have been some interesting articles about this topic. Cross-referencing multiple LLMs is one concept. Another, more common one is based on pure input control, disallowing certain prompts. All have the net effect of limiting the answers given, aka output.

        Your opinion of this and its efficacy for your use case is intensely personal.
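        A toy sketch of the input-control approach (the blocklist patterns here are invented examples; a real deployment would have far more and would tune them constantly, which is exactly the maintenance burden mentioned above):

```python
import re

# Hypothetical blocklist patterns, invented for illustration.
BLOCKED_PATTERNS = [
    re.compile(r"\bmedical advice\b", re.IGNORECASE),
    re.compile(r"\bignore previous instructions\b", re.IGNORECASE),
]

def allow_prompt(prompt: str) -> bool:
    # Reject the prompt outright if any blocked pattern matches;
    # disallowed prompts never reach the model at all.
    return not any(p.search(prompt) for p in BLOCKED_PATTERNS)
```

        The net effect is as described: the set of answers the system can give shrinks, because whole classes of prompts are refused before generation.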

        • There have been some interesting articles about this topic. Cross-referencing multiple LLMs is one concept.

          i.e., you're saying to have one LLM filter the results of the first one, throwing out the bad answers? Not sure if I would have labeled that "constraint," but ok, sounds like an approach, as long as the two models have different training.

          Another, more common one is based on pure input control, disallowing certain prompts.

          How do you decide what inputs to disallow?

          All have the net effect of limiting the answers given, aka output. Your opinion of this and its efficacy for your use case is intensely personal.

          Not sure what this means, but yes, I expect everybody's use case is going to be different.

      • Look up retrieval augmented generation.

        Basically you add specific context to your system and it saves it off. That is then used to constrain what is generated.

        Obviously if you provide crap context, you'll still get crap output.

        NotebookLM from Google is a good example of this that you can use without needing to roll your own solution.
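        A toy retrieval sketch of the idea (word-overlap scoring stands in for the embedding search a real RAG system would use, and the passages are invented):

```python
# Toy RAG sketch: pick the stored passage that best overlaps the
# question, then prepend it so generation is constrained to context.

def retrieve(question: str, passages: list[str]) -> str:
    q_words = set(question.lower().split())
    return max(passages, key=lambda p: len(q_words & set(p.lower().split())))

def build_prompt(question: str, passages: list[str]) -> str:
    context = retrieve(question, passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

        The final prompt the model sees is grounded in whatever you uploaded, which is why crap context produces crap output.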

        • by Rei ( 128717 )

          RAG is not "constraining the answers". If anything, it's more unconstrained, since you have basically the entire internet's worth of data as a potential input, and you're teaching it to partially or completely trust it, rather than to teach the model to not give an answer if it's unlikely to be certain about it.

          RAG has pros and cons. To make the "cons" clear, you know how people are constantly complaining about Google's "AI Overview" getting things wrong? That's a tiny pure-RAG summarization model. It im

          • I think you need to do some more research.

            Crap in, crap out. If you are using all of the Internet, you will get crap.

            If you constrain the input of the RAG model, and the questions asked, it will generally do a great, sometimes amazing, job.

            Upload the original Wizard of Oz from Gutenberg to a RAG system.

            Then ask about Dorothy's ruby shoes.

            Vs. ask that question to a general LLM that is not constrained by the actual story.

            • by Rei ( 128717 )

              Searching the internet is not "searching a book to ask questions about that book". The internet is awash in trolling, scams, parody, and misinformation about every topic. It is essential, if you want to have a good RAG product, to train your model to not simply summarize the provided sources, but to assess them first.

      • by Rei ( 128717 )

        Exactly. The person has no clue what they're talking about.

    • As far as 99% of the players going bust, that is easy to predict. Even Gartner essentially says that with their hype cycle.

      Just remember, during a gold rush, invest in shovels!

      • by AvitarX ( 172628 )

        I suspect it's already too late to invest in Nvidia.

        • When the pandemic crash happened, I looked at the three major chip makers and chose INTC and AMD... erm, uh, had I chosen NVDA, that small bit of money would now be 50x instead of 1/2x and 3x... Oh look, it's time for my daily self-flogging.
          • by AvitarX ( 172628 )

            I feel the same way about Bitcoin.

            Long transactions with high fees, this is a fun novelty and maybe something will iterate the technology and be good, but Bitcoin is useless.

            Oh look, it's gone from 500 to 5,000, obviously it's way stupid now, but somehow it's at 50,000

            • by Rei ( 128717 )

              I briefly considered mining Bitcoin in the early days (I think after I first heard about it here [slashdot.org]), but then thought... meh, this is stupid Libertarian nonsense, why should I waste my CPU time and power bill on it?

              It didn't occur to me how obsessed Libertarians would get with it, and how much money they have.

              • by AvitarX ( 172628 )

                It's objectively so bad for its purported main use.

                It has its uses, though, but I'd never use it as a store of value or for easy purchases.

                I do use it to buy monero periodically, which I then use to buy some consumables.

                The quality and convenience of those consumables vs local cash market is well worth the crypto annoyance.

      • by HiThere ( 15173 )

        Actually, during the California gold rush, the folks who profited sold groceries or clothes. Some sold real estate, but that was chancy. Perhaps it was different in the Klondike, but I wouldn't bet on it. If you're wealthy enough, invest in railroads, and lobby for a government subsidy.

        This doesn't yield as straightforward an investment plan, but the one it does yield is probably sounder.

    • by Rei ( 128717 )

      Finetune datasets are vastly smaller than foundation datasets. You cannot have a finetune dataset (curated question/answer example pairs) that covers everything the user might possibly ask about.

  • by crunchy_one ( 1047426 ) on Monday October 21, 2024 @01:55PM (#64881875)

    meaning when you talk to a chatbot, a frontier model-based chatbot, you can basically trust the answer

    • Re: (Score:3, Insightful)

      You can trust the answer as much as you can trust the first hit on Google search results. In other words - YMMV.
      • And Baidu wants to be there after the wreckage they predict, believing their own magic potion will win.

        Which it won't. AI is still very much in its infancy, and its tantrums don't stop until after adolescence.

        If he actually believes frontier LLM results, then eating his own dogfood will bankrupt him. It's an arms race, and it's only slowly gaining coherence and traction. But the baby is SO CUTE.

  • by narcc ( 412956 ) on Monday October 21, 2024 @02:00PM (#64881893) Journal

    "I think over the past 18 months, that problem has pretty much been solved" meaning when you talk to a chatbot, a frontier model-based chatbot, you can basically trust the answer,"

    This is very silly, but it doesn't look like he actually believes that:

    The CEO also guesstimated it will be another 10 to 30 years before human jobs are displaced by the technology.

    It looks like the CEO has already been replaced by a chatbot...

    • by HiThere ( 15173 )

      Well, his comment that it will be decades before it replaces human jobs is clearly so wrong that he must know it's wrong. It's already replacing some jobs. Just today I read about a company that's planning (well, in the process of, but process not complete) to have a chatbot do the first round of handling customer requests for a refund (with limits on the size of the refund the chatbot would be authorized to grant.)

      • by narcc ( 412956 )

        well, in the process of, but process not complete

        We've seen quite a few of those stories. They all end the same way.

  • by Targon ( 17348 ) on Monday October 21, 2024 @02:07PM (#64881923)

    People don't seem to understand the difference between a boom and a bubble. A boom is when there is a surge of companies getting into a given area. A bubble is when you have USELESS companies jumping into that sector: they don't have a product, they don't have expertise, but they have venture capital money. When a large number of useless companies jump into a booming sector, that becomes a bubble that will eventually pop.

    Hype... yeah, let's take an area that has the potential for growth and generate so much hype that useless companies start popping up just to take advantage of the boom, and then, 2-3 years later, they collapse.

  • by RogueWarrior65 ( 678876 ) on Monday October 21, 2024 @02:10PM (#64881937)

    In the dotcom era, anyone could build a website, and startups blew their funding on expensive offices, Herman-Miller chairs, Silicon Graphics workstations, and powered paragliders (yes, I saw all of this at a bankruptcy auction in Santa Monica). None of that stuff was required to build a product. LLMs require a lot of expensive infrastructure, and you need the proverbial killer app to pitch to investors who are a little bit smarter than they were 20+ years ago. That's going to increase the signal-to-noise ratio.

    • I wanted an SGI but all they'd give me was an Indy. At least it had the Nintendo 64 dev board installed in it.

    • Hmm. Maybe. You could be right.

      Or maybe in the 90s investors could understand a website that sold dogfood, how it worked and how it could make money. A lot of those sites that went under are essentially back now in a different form.

      But now investors see ai and they think

      I have no idea how this works.
      I have no idea how people will actually use this.
      I have no idea how it will make profit.
      I have no idea how the tech will change in the future.

      But I want to 1000x every dollar I invest in a few years, and how els

  • When faced with a so-called solution, you really need to focus on what problem it's solving. I've seen generative AI that can create all kinds of clipart or thumbnail images which are good enough for a business presentation or a low-production-value YouTube video. That's solving a real problem that people have, and I see evidence of people using it (and digital artists looking to change careers). On the other hand, where are the legions of people who are using LLMs to solve real-world problems? High sch
    • by serviscope_minor ( 664417 ) on Monday October 21, 2024 @02:35PM (#64882027) Journal

      The next generation will do all the useful things. We are on the next, you say? I mean the next next one of course. Now hand me that briefcase of VC cash.

    • by Whateverthisis ( 7004192 ) on Monday October 21, 2024 @02:42PM (#64882069)
      I'm a huge AI skeptic. However...

      I have found uses for it. My family and I go on relatively long vacations to places we've never been, and the research to plan for them can sometimes outstrip the time we actually spend on the vacation. ChatGPT does a great job of distilling a rather large amount of information into a condensed, reasonably well-organized narrative that cuts down that research tremendously.

      I've also used it for creating marketing material at work and designing entire D&D campaigns. Again, because it's condensing a lot of known information on the internet into a format I can then get the gist of what I want, and from there edit and expand to what I actually want.

      So using my own anecdotal experiences, when you're starting from a blank sheet of paper, that can be very difficult for most humans. But what humans are really excellent at is taking something in front of them and determining if it's good enough or not, and if not how to edit it to make good enough. So LLMs do overcome the "blank-sheet-of-paper", or "zero to one" phase of developing something. Can they produce a good finished product? No. Will they ever? Maybe but doubtful. Do they operate well in the edge cases of human language like politics and opinions and subjectivity? No and probably never will. But there is a value in giving you a starting point on a project and then working from there.

      It remains to be seen however if we're willing to pay for that and how much, and how much of a business that becomes to justify the investment. In that sense, the CEO of Baidu is 100% correct, many companies will crash.

      But in the recent article discussing Microsoft's role with OpenAI, I think Microsoft is doing the right thing. There is no value in the work to research and reach a business insight; there is only value in the insight itself. So if an AI agent can assess language in emails, CRMs, financials, or what have you, and create a correct, actionable task without the person needing to do the research to figure that out, then there's a place for it in the enterprise domain.

      Artificial General Intelligence? Yeah, no. I created my own Intelligences thank you very much. They're about to hit puberty and have very strong and ridiculous opinions about all sorts of things (and require constant power generation in the form of dinner), and I question the sources of data feeding them all the time (at school), and they rarely function the way I want them to. Why would I want an artificial one causing me the same problems? No thank you, no value there.

      • These examples are good, and they have one thing in common. You either do not worry about the accuracy of the output, or you have enough expertise to easily correct and make use of it despite the errors.

        This is the huge issue with text generators. They're really good at assembling something which will look correct (if often a bit verbose) but which has subtle, and sometimes gross, faults.

        And this is the main issue with it being used to create actionable tasks, especially when that is automated and the tasks

        • These examples are good, and they have one thing in common. You either do not worry about the accuracy of the output, or you have enough expertise to easily correct and make use of it despite the errors.

          Google... Stackoverflow... You need to know how to do your job, duh.
          I'm not an AI investor. I have no money riding on this game but I use LLM daily, I am a technical professional.

          Next time someone complains about LLM I want you to replace it in your head with cordless drill and see how dumb it sounds.

          It will replace jobs... no. Well maybe in a very roundabout way possibly? Am I worked up about it, no, we survived calculators and power tools, we'll be fine.
          Power tools are only for "very, very beginner carp

          • The whole point of the latest AI craze with agents is that they'll work autonomously and create tasks. There's a whole slew of startups doing that.

            And AI is being sold as allowing anyone to instantly have any expertise at their fingertips. That's what my comment is based on.

    • You know when your boss asks you to put together a bullshit presentation that is a waste of everyone's time? LLMs are great for that!
      • Bingo. The problem is that there are way too many people employed creating B.S. presentations and doing other things that glorify their managers' existence. Among them are plenty who can't just switch to a trade or healthcare job due to physical limitations. Though I may not like it, I'd much rather see companies continue paying people to do B.S. jobs than to see them on welfare.

      • by RobinH ( 124750 )
        I'm well aware of the fact that there's a middle-management layer in corporations where this kind of waste happens, but it's important to realize that the money to pay these people comes out of real productivity (somewhere) in the organization. In the case of something like an automobile factory it's the people (or robots) attaching components onto the frame, or at the foundry where we need to use massive amounts of energy to heat a batch of steel to the right temperature, or in the case of Starbuck's it's
    • Seriously, if LLMs were as valuable as the hype claimed, you'd already be seeing a profound impact, and we're simply not seeing it.

      It depends where you are looking.

      If you have a Microsoft Copilot enterprise license, Microsoft Dynamics 365 (their flagship ERP system) and the extensive setup needed, you can do some very nice natural language queries of the database that would be a pain otherwise:

      How many jeans in style 501 that are black and size 35 can I have delivered to 123 Anystreet, SomeTown, SomeState by Friday?

      We have customers using that right now; their customer support call centers can use these natural language queries whils

    • by gweihir ( 88907 )

      I found two uses so far: better lies ("how do I say xyz in a positive way") and better search, though with very strong limitations, as asking for references often fails or produces very little. I had one instance of something I was searching for but did not know the term for. ChatGPT provided several candidates for the search term, and conventional DuckDuckGo got me what I was looking for. That is basically it. Not that impressive; more an incremental improvement. Useful, yes, but not a game-changer.

      To be sure, this

    • by HiThere ( 15173 )

      Actually I've been finding it quite useful for picking up how to use wxpython. Ask about a control and it pops up an example of how to use it. Basically this is glorified search, as the example is out there, but it's also extremely useful.
      OTOH, I wouldn't ask it for an answer that wasn't just standard. But it's great for an example of "this is how you specify a checkbox control", where at the wxpython site you need to already know the exact name of the widget before you can usefully search.

  • by VeryFluffyBunny ( 5037285 ) on Monday October 21, 2024 @02:22PM (#64881973)
    He read about it in a magazine once. Apparently, people react positively to the term so he's going to use it.
  • by zkiwi34 ( 974563 ) on Monday October 21, 2024 @02:35PM (#64882025)
    Was that his prediction, or an AI prediction?
    • by HiThere ( 15173 )

      Not really. It's a PR emission of "this is what people want to hear that will reassure both my investors and the common folk". I doubt that he's stupid enough to believe it. For example, AI is already taking some people's jobs, despite his claim that it won't happen for decades.

  • What usually happens is something new and hot comes out and everyone thinks it will solve all of society's ills and bring world peace. So everyone jumps on board.
    After a while, when it becomes clear to most what this new thing is actually capable of and where it is actually useful, most people jump off it onto the next new shiny thing.
    Examples include Bluetooth, agile, self-driving cars, and EVs.
  • I wonder what happens when you string together AI-bots to play telephone? Do the responses converge to a single response or do they gyrate wildly?

  • Hallucinations are alive and well. So is overlooking of critical details. You just need to ask something that you could not have readily looked up in Wikipedia.

    I do agree it will be 10-30 years before significant job loss. I expect specialized models will make that likely. It will mostly affect "bullshit" jobs, but there are many of those.

    • by HiThere ( 15173 )

      Depends on what you mean by "significant". If it's your job being automated today, it's already significant, and for some people it's already their job. More frequently it will be a modification of the job so that fewer people need to be hired. For a lot of things, you don't really need to understand what you're doing if you can manipulate the symbols properly.

      For one example, I expect a real drop in the number of employed paralegals. And that's often the first job a lawyer gets at a firm. It doesn't re

      • by gweihir ( 88907 )

        Personally, I expect most current job losses will get rolled back as LLMs cannot perform. In a mindless hype like this one it always takes a few years for the adopters to find out they have screwed themselves.

  • The technology is that good, but it won't start displacing jobs for another 10-30 years? I guess he doesn't want to cause panic, but imagine how powerful the tech will have become in another five years. AI is advancing at a geometric rate; we can probably expect self-awareness by August next year!
    • by HiThere ( 15173 )

      FWIW, I'm still predicting AGI by 2035, plus or minus 5 years. And I'm not talking about "godlike AI", merely one that is unrestricted in its learning capabilities. I expect it to (probably) develop out of an AI that's used as a human interface for controlling some kind of robot body. I see controlling the body as necessary to allow it to understand (as opposed to just talk about) physical reality.

      • by narcc ( 412956 )

        FWIW, I'm still predicting AGI by 2035, plus or minus 5 years.

        On what basis?

        I see controlling the body as necessary to allow it to understand

        Why? Meaningless numbers externally associated with text tokens aren't any different from the meaningless numbers externally associated with sensor input.

        • It would be pretty easy to assign ideal ranges for those sensors, so that the inputs aren't just 'meaningless'. For a robot, an optimal temperature is different from a human's, but there is definitely a concept of better/worse. Same with light levels and noise. Is that a useful way of looking at it? I imagine there is a generally agreed test or definition of AGI, but one of the fundamentals would seem to be awareness of surroundings and the ability to express preferences.
          • by narcc ( 412956 )

            It would be pretty easy to assign ideal ranges for those sensors, so that the inputs aren't just 'meaningless'. [...] Same with light levels and noise. Is that a useful way of looking at it?

            The inputs are still just numbers. Bounding an input doesn't change that. A number that, to us, represents a temperature is indistinguishable from a number representing a smell or a number representing a letter. The actual number doesn't carry with it whatever meaning we impose on it, it's just a number. The problem of deriving meaning from symbols is unsolved. A lot of the nonsense you've seen about the need for 'bodies' actually comes from early attempts to avoid addressing that particular question.

            I imagine there is a generally agreed test or definition of AGI

            Th

  • He should lay off that pipe.

  • All the CEO means is that the answer can be trusted to be true to the training data, which brings up the old adage 'garbage in, garbage out'.

    It will be fake shit true to the underlying fake input, probably riddled with advertisements and political agendas. Doesn't mean it is actually any better than what we have now.
  • He's hallucinating again.
  • So hallucinations are solved, huh?

    I think I have yet to get a correct "answer" from an LLM. People sometimes tell me it's great for search. I sometimes try to use it to find titles of movies, books etc. from descriptions or some facts. Not easy blockbusters, of course, but still, things that are sufficiently documented on IMDb and the likes. Still, the answers are always fake titles and authors and facts fabricated by the model.

    Meanwhile, a local newspaper's "more about this story" bot freely assembles fact
