AI Technology

Jensen Huang Says Even Free AI Chips From Competitors Can't Beat Nvidia's GPUs

An anonymous reader shares a report: Nvidia CEO Jensen Huang recently took to the stage to claim that Nvidia's GPUs are "so good that even when the competitor's chips are free, it's not cheap enough." Huang further explained that Nvidia GPU pricing isn't really significant in terms of an AI data center's total cost of ownership (TCO). The scale of Nvidia's achievements in powering the booming AI industry is hard to deny: the company recently became the world's third most valuable, thanks largely to its AI-accelerating GPUs. But Huang's comments are sure to be controversial, as he dismisses a whole constellation of rivals, including AMD, Intel, and a range of companies building ASICs and other types of custom AI silicon.

Starting at 22:32 of the YouTube recording, John Shoven, Former Trione Director of SIEPR and the Charles R. Schwab Professor Emeritus of Economics, Stanford University, asks, "You make completely state-of-the-art chips. Is it possible that you'll face competition that claims to be good enough -- not as good as Nvidia -- but good enough and much cheaper? Is that a threat?" Jensen Huang begins his response by unpacking his tiny violin. "We have more competition than anyone on the planet," claimed the CEO. He told Shoven that even Nvidia's customers are its competitors, in some cases. Huang also pointed out that Nvidia actively helps customers who are designing alternative AI processors, going so far as to reveal what upcoming Nvidia chips are on its roadmap.

  • Secret Sauce (Score:4, Insightful)

    by JBMcB ( 73720 ) on Monday March 11, 2024 @10:48AM (#64306591)

    The secret sauce is CUDA. AMD and Intel have been chasing nVidia for 15 years. AMD has completely changed its software stack roughly four times. Intel has changed its entire compute strategy four or five times (remember Phi?).

    If you are going to sink money into AI development, are you going to go with a new company, a company that has changed its compute strategy every two years for the last ten, or a company that offers a stable, mature, forwards- and backwards-compatible platform?

    • The secret sauce isn't "stability", it's Nvidia's anticompetitive behavior. It's easy to establish a monopoly when you use market dominance to bribe people into crippling your competitors' products at the software level.

      • Source (Score:4, Insightful)

        by JBMcB ( 73720 ) on Monday March 11, 2024 @12:09PM (#64306821)

        The secret sauce isn't "stability", it's Nvidia's anticompetitive behavior. It's easy to establish a monopoly when you use market dominance to bribe people into crippling your competitors' products at the software level.

        Are you talking about games or compute? Because we're talking about compute here. Did nVidia force AMD to change their software stack from CtM, to Stream, to HSAIL, to OpenCL, to ROCm?

        • by Luckyo ( 1726890 )

          He's talking about the software side. Specifically, software that interacts with the software stack.

          CUDA doesn't do jack shit on its own. It's only relevant when integrated into software that actually does something useful.

          AMD's problem is that it's a hardware vendor, not a software one. But if you're OpenAI, Google, Meta, etc., you actually own the "thing that actually produces value" side of the software stack, rather than the "thing that enables things that produce value" side that is CUDA.

          So it will take some work, b

          • That may be true, but you're missing the key point the GP made, that NVIDIA was being "anticompetitive". That just isn't borne out anywhere in the history of CUDA development. Just because you're dominant and others are falling over themselves to catch up doesn't mean anything untoward is going on.

            • by Luckyo ( 1726890 )

              Except that of course we have reports from everyone, ranging from nvidia OEMs to LLM makers, that nvidia has routinely abused its dominant position as a GPU maker for at least half a decade to stifle competition of any kind.

              The problem is that, with nvidia being the only game in town for specific uses, victims cannot really challenge it. While an investigation is ongoing, you're getting zero shipments of the nvidia products you need for your company to function, and you lose ground to your competitors.

      • I'll take "Talking out of my ass" for $1000, Alex.
    • Nvidia got an early lead and then used the money from that to hire all the engineers. Not all the best ones, all of them. AMD & Intel have both been struggling since, because they don't have the money to compete on salary, and nvidia will pay people to do basically nada if it means keeping an engineer out of AMD & Intel's hands.

      It's actually backfired a bit. They paid a bunch of their top guys so much they're having a hard time getting 60-80hr work weeks out of them since they're all multi-mult
      • by JBMcB ( 73720 )

        Nvidia got an early lead and then used the money from that to hire all the engineers.

        What time frame are you talking about? Because when nVidia started working on CUDA in 2004, their market cap was $4 billion, AMD's was $8 billion, and Intel's was $146 billion.

    • All this computing power and still people can't tell it's from its. It's means it is.

    • Eyeroll.
      It's more than CUDA.
      The middleware layers are fine with swapping APIs to accommodate Intel's and AMD's attempts at catching up. The problem is that training a 70B-parameter ML model is a multi-million dollar job, and what are you going to buy to do your multi-million dollar job? Something that simply doesn't perform as well, and isn't commensurately cheaper?
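
      To put rough numbers on that "multi-million dollar job" claim, here is a hedged back-of-envelope sketch using the common 6 x parameters x tokens rule of thumb for training compute; the throughput and price figures are illustrative assumptions, not quotes from any vendor:

        # Back-of-envelope cost of training a 70B-parameter model.
        # Every figure below is an illustrative assumption.
        params = 70e9                        # 70B parameters, per the comment above
        tokens = 20 * params                 # Chinchilla-style ~20 tokens per parameter
        train_flops = 6 * params * tokens    # common 6*N*D estimate of training compute

        gpu_flops = 300e12                   # assumed sustained ~300 TFLOP/s per GPU
        usd_per_gpu_hour = 2.50              # assumed cloud rental price per GPU-hour

        gpu_hours = train_flops / gpu_flops / 3600
        cost = gpu_hours * usd_per_gpu_hour
        print(f"{gpu_hours:,.0f} GPU-hours, roughly ${cost:,.0f} in raw compute")

      Under these generous assumptions the raw compute alone comes out to well over a million dollars; real runs, with lower utilization, retries, data work, and surrounding infrastructure, push the bill into the multi-million range the comment describes.
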
  • by HBI ( 10338492 ) on Monday March 11, 2024 @10:48AM (#64306593)

    Live by the asset bubble, die by the asset bubble. It's just a fact of life. His task is to avoid the inevitable cratering of Nvidia's value when the others get close enough on price/performance to erode its profits. Or at least to stave that off for as long as he can.

    • Whether or not it should is a different question -- but do you see attempts at AI going away anytime soon? Or any other LLM tasks for that matter?
      I think his biggest task will be to avoid running afoul of antitrust/monopoly regulators.
      If growth in AI matches current expectations, and nvidia is the only game in town, yeah that's going to be a problem.

      • They're not the only game in town. Several other companies are moving into AI chips. Some with very deep pockets, and some with extensive experience in such development.

        NVidia doesn't even make its own chips. It's going to have serious trouble maintaining margin and market share when things start moving. It's currently the Altavista of search engines.

        • NV is the only game in town for ML training right now.
          No competition even comes close right now.
          That will of course change over time, but calling NV Altavista is laughably fucking stupid.
          They're the top dog in this market because nothing else comes close to competing with their parts for the multi-million-dollar job of training large ML models.
          • Altavista was the top dog in their market, because nothing else came close to competing with them. But they couldn't scale.

            Neither can NVidia. They don't even make chips.

            Sure, they're leading now. Like Altavista did. And they'll have huge problems sticking to that lead.

            • by HBI ( 10338492 )

              As usual with fabless entities, they'll have difficulty getting the parts they need out to market in a timely fashion. Intel will crush them, ultimately, if someone else doesn't do it first.

              I wondered a little why this didn't happen during the crypto and blockchain thing, but perhaps it wasn't lucrative enough at that point.

              • As usual with fabless entities, they'll have difficulty getting the parts they need out to market in a timely fashion. Intel will crush them, ultimately, if someone else doesn't do it first.

                What in the fuck are you talking about? lol.
                AMD, NV, and Apple are all fabless.
                Their businesses are doing great.
                Fabless just means you have another supplier in your supply chain. Nothing more, nothing less.

                • by HBI ( 10338492 )

                  Another supplier who can choke off your business if they get a better offer, or geopolitics gets in the way. The future will demonstrate the issue.

                  • Another supplier who can choke off your business if they get a better offer, or geopolitics gets in the way. The future will demonstrate the issue.

                    No business is completely vertically integrated, period.
                    Having a fab is a larger liability than being able to shop for one.
                    Having a fab means you are dependent on very specific qualities of raw resources.

                    With the exception of 5nm and below (TSMC), there are many geopolitically diverse fabs available to a fabless semiconductor company.

            • Altavista was the top dog in their market, because nothing else came close to competing with them. But they couldn't scale.

              Altavista was top dog due to lack of competition.
              NV has scaled, and keeps scaling, to stay on top of very real competition.

              The fact that they are fabless is meaningless.
              So is AMD. So are most semiconductor companies.

              • You don't seem to understand the concept of "scale". That hasn't even begun yet. Generative AI use has just started. We're looking at version 0.1 at this point. The first curve hasn't even been reached yet.

                And that most semiconductor companies are fabless is what will drag them back. That doesn't go for only NVidia. It's very far from meaningless when the competition starts.

                • You don't seem to understand the concept of "scale". That hasn't even begun yet. Generative AI use has just started. We're looking at version 0.1 at this point. The first curve hasn't even been reached yet.

                  Complete fucking nonsense.
                  NV's datacenter (AI) sales breached $1B USD 4 fucking years ago.
                  We're at $18B now.
                  That's scaling.

                  And that most semiconductor companies are fabless is what will drag them back. That doesn't go for only NVidia. It's very far from meaningless when the competition starts.

                  Also complete fucking nonsense.
                  Fab-less semiconductor companies are highly competitive. That is an objective fact, not speculation.

                  You seem to be talking out of your ass.

        • Eventually all companies will go the way of Altavista, as long as the market remains free of government interference and regulation.

          The problem is not just the hardware, it's open source software and engineering support as well. Intel and AMD seem to forget that. Talk to nVIDIA sales and you'll be talking to at least an engineer, if not a PhD, who understands what it is you need. AMD and Intel have "sales engineers" who are basically no better than your average shoe salesmen.

    • by gweihir ( 88907 )

      Indeed. Or when enough people have finally realized how incapable and unfixable LLMs really are. Nice party trick, but it in no way justifies the effort currently invested.

      The reality is that current LLMs are a gradual improvement on 70 years of targeted research. They are an end result and they are in a final state. Sure, you can throw a bit more hardware at the problem and make the models a bit larger (though not much larger due to model collapse), but you cannot fix the fundamental problems current LLMs have.

      • And yet they keep getting better.
        I was just playing with a 70B parameter model last weekend. Trying to pretend like LLMs are a dead end is certainly something you can do, but the economy has made it clear you've just got your head in the sand.
        • by gweihir ( 88907 )

          "Better"? Sure. Good? No way. Yes, they _are_ a dead end. And "the economy" has absolutely no impact on that. Idiots throwing money at a hype does not make that hype into anything good.

          • Good? No way.

            Yup.

            Yes, they _are_ a dead end.

            Nope.

            And "the economy" has absolutely no impact on that.

            The economy is the signal, not a factor. Read better.

            Idiots throwing money at a hype does not make that hype into anything good.

            Pointless argument. LLMs are objectively not hype at this juncture.
            This isn't a "money thrown at..." thing. There are real products with real deployment.

            Come on. Your entire comment was a pile of dumb ass bullshit. You're not that dumb.

            • by gweihir ( 88907 )

              Sorry, but LLMs are an extremely frenzied mindless hype at this point and objectively so.

              I have seen LLMs "perform". I am not impressed at all. IBM Watson could do that 10 years ago. Just not sounding as good.

              • Sorry, but LLMs are an extremely frenzied mindless hype at this point and objectively so.

                That is an incorrect usage of the word objectively.
                As mentioned, there are 180 million users of ChatGPT. Paid products are in mass deployment.
                That precludes it from being hype. Insisting on this line of reasoning makes you delusional.

                I have seen LLMs "perform". I am not impressed at all. IBM Watson could do that 10 years ago. Just not sounding as good.

                No, not even close.
                Watson could retrieve information from formatted data that was ingested using NLP. Highly advanced, but still nowhere near as versatile as even a small LLM.

  • When Steve Jobs boasted that he bought a Cray Supercomputer to design the next Macintosh, Seymour Cray said "that's funny, we're designing the next Cray on a Mac."

    For you Gen Z'ers and millennials, a Cray supercomputer cost millions of dollars and was used by top defense and academic institutions. It was the hot thing from the '70s up to the early '90s.

  • I wonder at what point they stop being GPUs and start being AI chips. They have made it clear that the GPU market is the least of their concerns.
    • I wonder at what point they stop being GPUs and start being AI chips. They have made it clear that the GPU market is the least of their concerns.

      AIPU?

  • by SuperKendall ( 25149 ) on Monday March 11, 2024 @11:31AM (#64306687)

    There is already an example of a company making AI chips that are significantly (like an order of magnitude) faster than NVidia GPUs: Groq [hindustantimes.com] (not to be confused with the X LLM Grok).

    The chips they are shipping today will run any of the current LLM models and they seem to be ramping up production nicely...

    How long before chips like that start eating into NVidia sales?

    • You're confusing inference with training.
      Anything can do inference. Groqs are currently some really badass inference engines, but you can't use them to train, which is the monopoly that NV holds.
      Anyone seriously training billion-parameter models is using NV, period.
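
      For anyone following along, the training/inference split being drawn here looks roughly like the sketch below; this is a minimal illustration assuming PyTorch is available, with a toy linear model and random data standing in for a real network:

        # Minimal sketch of training vs. inference (PyTorch assumed).
        import torch
        import torch.nn as nn

        model = nn.Linear(128, 10)                     # stand-in for a real network
        opt = torch.optim.SGD(model.parameters(), lr=0.01)
        loss_fn = nn.CrossEntropyLoss()

        # Training: forward pass, loss, backward pass, weight update.
        # This is the cluster-scale workload the thread is arguing about.
        x, y = torch.randn(32, 128), torch.randint(0, 10, (32,))
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

        # Inference: forward pass only, no gradients -- the much lighter
        # workload that accelerators like Groq's target.
        model.eval()
        with torch.no_grad():
            preds = model(torch.randn(1, 128)).argmax(dim=-1)
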
      • You're confusing inference with training.

        Not at all.

        Anything can do inference. Groqs are currently some really badass inference engines, but you can't use them to train

        That is irrelevant, since most of what people want to do is actually use models, which means inference.

        And not just anything can do inference FAST, which is what Groq does. Neither CPUs nor GPUs are nearly as fast as Groq chips.

        Training itself is a big task yes, but tiny compared to the scale of using trained models. I'm sure Nvidia will m

        • Not at all.

          Uh, lol.
            You mentioned Groq as an example of an "AI chip that is faster than NV."
            Groq makes inference chips. NV's lead is not inference; they're not really in the inference market.

          That is irrelevant, since most of what people want to do is actually use models, which means inference.

          This is a stupid fucking statement.
          People do inference on their phones. It's done all over the place. It doesn't take any particularly special hardware to run a quantized model.
          Jimbob Billy can run Stable Diffusion on their fucking iPhone.
          Nothing about this discussion was about inference.

          And not just anything can do inference FAST, which is what Groq does. Neither CPUs nor GPUs are nearly as fast as Groq chips.

          There are tons of fast inference devices. Th
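
            On the earlier point that phones can run quantized models: the trick is shrinking weights to low-precision integers, which is mostly a memory and bandwidth win rather than exotic hardware. A minimal per-tensor int8 sketch (the matrix size is a placeholder):

              # Per-tensor symmetric int8 quantization of a float32 weight matrix.
              import numpy as np

              w = np.random.randn(4096, 4096).astype(np.float32)  # placeholder layer weights

              scale = np.abs(w).max() / 127.0                      # one scale for the whole tensor
              q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
              w_hat = q.astype(np.float32) * scale                 # approximate reconstruction at run time

              print(f"float32: {w.nbytes / 1e6:.0f} MB, int8: {q.nbytes / 1e6:.0f} MB")  # 4x smaller
              print(f"max abs error: {np.abs(w - w_hat).max():.4f}")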

          • Simple question: If what you say is true how is Groq managing to sell its chips into data centers? How is it that they are running models faster than other solutions currently (since Groq online smokes ChatGPT in response time)?

            Do you deny that companies are using Nvidia hardware today, to run inference in data centers?

            Ponder those questions and maybe think more deeply about what you are saying.

            • Simple question: If what you say is true how is Groq managing to sell its chips into data centers?

              Groq has sold $4M in parts.
              That's about what NV sells in 6 hours.

              How did they do it? Because people are curious. That's how markets work.

              How is it that they are running models faster than other solutions currently (since Groq online smokes ChatGPT in response time)?

              You can't compare what some individual gets from their cloud LLM with what a company serving 180 million users is getting, because we have no idea what their resource distribution is.
              Flatly speaking, a single NV A100 can "smoke" Groq's inference times using its TPUs with sparse matrices. Though that's throwing money into a fire if you're buying an A100 for low-p

        • That is irrelevant, since most of what people want to do is actually use models, which means inference.

          *sigh* Tell us you don't know the topic being discussed without saying you don't know the topic being discussed. No, NVIDIA is not talking about inference to its investors, and no, your mom and dad are not buying A100s, which are precisely what NVIDIA is talking about.

          NVIDIA doesn't give a shit about inference, and neither do its shareholders. This discussion is entirely about training. And despite your assertion that it's a "tiny" task, it seems to be one that has caused a complete backlog in producti

  • Well, there are tons of people that will believe his crap. In actual reality, all Nvidia can do is ensure the artificial idiot is dumb a little faster. Great achievement.

  • bullshit!!!
  • by iAmWaySmarterThanYou ( 10095012 ) on Monday March 11, 2024 @12:34PM (#64306917)

    Capex is a large but mostly invariant expense in data center costs over the long term.

    By invariant, I mean it's rarely worth cutting corners on hardware speed/quality/etc. to save a few bucks. For example, if I'm buying a 40-server rack and put in CPUs that are 20% slower, the money saved is rarely worth it vs. spending 50% more for the faster CPUs. If you're CPU bound in any way you'll make use of it and require fewer servers. Also, your faster servers will be useful for longer. So if, say, each box costs an extra grand, that's only 40k once for that rack. But buying servers sooner, or more servers, will cost a lot more than that 40k. Plus the administrative hassle of more servers, extra rack space, etc.

    The same thing is true for GPUs for AI, storage (all available storage will always find or create data to be stored on it), etc., etc.

    Opex is where costs stack up. If I need an extra rack because I cheaped out on hardware, that rack will very quickly cost more than having fewer servers that perform better.

    Ymmv, there are always edge cases etc, but generally this is how data center costs work.
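
    To put rough numbers on the rack example above, here is a quick sketch; every figure is an assumption for illustration, not a real quote:

      # Rough rack math for the comment above; all numbers are assumptions.
      servers_per_rack = 40
      savings_per_server = 1_000             # saved per box by buying ~20% slower CPUs
      capex_saved = servers_per_rack * savings_per_server         # one-time saving: $40k

      # If the slower CPUs mean ~20% more boxes for the same throughput:
      extra_servers = round(0.20 * servers_per_rack)
      server_cost = 8_000                    # assumed all-in cost per extra server
      opex_per_server_year = 1_500           # assumed power/space/admin per server per year

      extra_capex = extra_servers * server_cost
      extra_opex = extra_servers * opex_per_server_year * 3       # over a 3-year life
      print(f"saved ${capex_saved:,}, spent ${extra_capex + extra_opex:,} extra")

    Under those assumed numbers, the $40k of "savings" ends up costing roughly $100k over the life of the rack, which is exactly the capex-versus-opex point being made.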
