
Nvidia CEO Reveals GPU and Software Moat in AI Chips

Nvidia is banking on its software expertise and broad GPU ecosystem to stay ahead in the fiercely competitive AI chip market, CEO Jensen Huang said in an interview with Goldman Sachs Wednesday. Huang pointed to Nvidia's large base of installed GPUs and their software compatibility as key strengths.

Huang highlighted three key elements of Nvidia's competitive moat: a large installed base of GPUs across multiple platforms, the ability to enhance hardware with software like domain-specific libraries, and expertise in building rack-level systems. The CEO also touted Nvidia's chip design prowess, noting the company has developed seven different chips for its upcoming Blackwell platform.

These comments come as Nvidia faces increasing competition from rivals. Addressing supply chain concerns, Huang said Nvidia has sufficient in-house intellectual property to shift manufacturing if necessary without significant disruption. The company plans to begin shipping Blackwell-based products in the fourth quarter of fiscal 2025, with volume production ramping up in fiscal 2026, according to Huang.

From the note that Goldman Sachs sent to its clients: 1) Accelerated Computing: Mr. Huang highlighted his long-held view that Moore's Law was no longer delivering the rate of innovation it had in the past and, as such, was driving computation inflation in Data Centers. Further, he noted that the densification and acceleration of the $1 trillion data center infrastructure installed base alone would drive growth over the next 10 years, as it would deliver material performance improvement and/or cost savings.

2) Customer ROI: Mr. Huang noted that we have hit the end of the transistor scaling that enabled better utilization rates and cost reductions in the previous virtualization and cloud computing cycles. He explained that, while using a GPU to augment a CPU will drive an increase in cost in absolute terms (~2x) in the case of Spark (distributed processing system and analytics engine for big data), the net cost benefit could be as large as ~10x for an application like Spark given the speed-up of ~20x. From a revenue generation perspective, Mr. Huang shared that hyperscale customers can generate $5 in rental revenue for every $1 spent on Nvidia's infrastructure, given sustained strength in the demand for accelerated computing.
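The cost arithmetic behind that claim is simple enough to check directly. A minimal sketch, using only the normalized ~2x cost and ~20x speed-up figures quoted above (the hourly rates are made-up placeholders, not real prices):

```python
# Illustration of the ~10x net cost benefit claim: a GPU-augmented
# cluster costs ~2x per hour but finishes the same Spark-style job
# ~20x faster, so the cost *per job* drops by ~10x.

def cost_per_job(hourly_rate, job_hours):
    """Total infrastructure cost to complete one job."""
    return hourly_rate * job_hours

cpu_rate = 1.0                        # normalized hourly cost, CPU-only
cpu_job_hours = 20.0                  # normalized runtime on CPUs

gpu_rate = cpu_rate * 2.0             # ~2x absolute cost with GPUs added
gpu_job_hours = cpu_job_hours / 20.0  # ~20x speed-up

cpu_cost = cost_per_job(cpu_rate, cpu_job_hours)
gpu_cost = cost_per_job(gpu_rate, gpu_job_hours)

print(cpu_cost / gpu_cost)  # → 10.0, the claimed ~10x net benefit
```

The $5-of-rental-revenue-per-$1-spent figure is a separate, demand-side claim and isn't modeled here.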
  • I remember when the custom ASICs for Bitcoin and Ethereum hit and all of a sudden I could pick up an RX 580 for under a hundred bucks. Nvidia's in a dangerous position where if LLMs switch to custom hardware they're going to see the bottom completely drop out of their market.

    I suspect that's why AMD doesn't just borrow a ton of money to hire engineers and go all in on pushing their graphics cards for AI. It's the kind of bet that would destroy a company.
    • They've been around for years. Google has custom ASIC AI chips, as does Amazon, and Apple is working on their own. A few startups are making similar designs for purchase. Huggingface already supports running notebooks on AWS, and work is almost done getting their libraries to work on Amazon's native hardware.

      • Having competition "sort of" is probably the best place they could be. They're not quite a monopoly, they're just making about 90% as much money as if they were. Moving the whole world off CUDA would be really, really hard at this point.
      • Are the startups like ChatGPT just not ready with their own ASICs? At least for Bitcoin/Ethereum the difference in performance per watt was insane.

        What's keeping nvidia stock from crashing? Are there some things ASICs can't do? Or have investors just not caught up with the reality of the situation?
        • by silentbozo ( 542534 ) on Wednesday September 11, 2024 @03:25PM (#64781289) Journal

          CUDA for one. There's a large supported ecosystem for using CUDA for GPU compute. AMD just announced a competing effort ( https://winbuzzer.com/2024/09/... [winbuzzer.com] ) but yeah... it takes time and effort to port libraries to work with those.

          Then there's Nvidia's existing supplier/distributor network - they can mass manufacture in volume because they know they can sell all of their units. This gives them a cost advantage in terms of packaging.

          The other one (this is my guess) is the same one that's bedeviling everybody. Lack of fab capacity. You want to do a special run of high density large wafer silicon, it's gonna cost you.

          For those reasons, the new kids on the block are going to have to either get serious money behind them to prove that they are a less risky option, or there has to be a hard constraint on delivery or power availability in datacenters that would dictate having to pivot to an alternate supplier. In the meantime Nvidia is cognizant of this threat and is moving to deliver better performance per watt...

          As an FYI: judging by the number of husked 4090 parts on sale on eBay (where the VRAM and the GPU have been harvested for reuse on custom boards in China to bypass the US trade restriction on high-performance computing - I've heard reports that they take this opportunity to double the VRAM from 24 GB to 48 GB as well...), there's both plenty of demand and apparently sufficiently ingenious suppliers meeting said demand, at least for now.

          I think eventually yes, there will be tensor processing units (TPUs) available to the public, and not just in small lots to enterprise customers. In the meantime, there's enough demand that I think Nvidia will continue to maximize their sales...

          If you want to read up more on their competition, here's an article from earlier this year:

          https://www.forbes.com/sites/k... [forbes.com]

          Finally, if we look back at crypto... the people who initially produced mining ASICs took advantage of the fact that they had them and other people didn't, to mine the shit out of them before selling them to the general public, exploiting the supply differential. I anticipate something similar, where the people who produce these new TPUs will lease time on them instead of selling them in order to exploit their compute availability/cost vs. Nvidia-based datacenter compute. It won't be until the bubble is popped that they'll finally start selling TPUs instead of leasing time on them, at which point the market will completely collapse. At that point, Nvidia will still have a market for video cards, while the TPU prices will drop quickly. That's my prediction... but I have no idea how long it will take to reach that point.

          • by JBMcB ( 73720 )

            CUDA for one. There's a large supported ecosystem for using CUDA for GPU compute. AMD just announced a competing effort ( https://winbuzzer.com/2024/09/... [winbuzzer.com] ) but yeah... it takes time and effort to port libraries to work with those.

            IMHO it's 90% CUDA. AMD's and Intel's compute strategies have been disasters. AMD cycled through three or four frameworks over the last ten or fifteen years. Intel has gone through as many entire architectures. It looks like things have stabilized on ROCm and oneAPI with Intel's GPUs and Xeons, which basically wrap up all their previous efforts into single, bloated interfaces. They aren't great, but it's much better than what they've had.

            An equalizing factor going forward is that development is moving up th

      • They've been around for years. Google has custom ASIC AI chips, as does Amazon, and Apple is working on their own. A few startups are making similar designs for purchase. Huggingface already supports running notebooks on AWS, and work is almost done getting their libraries to work on Amazon's native hardware.

        The current competitive situation is optimal for Nvidia. There are a huge number of players trying to compete with Nvidia, and this huge interest reflects the still increasing total addressable market for AI accelerators. At the same time, these competitors didn't just start trying to compete in the last year. The big players have been trying to compete for many years. Some like Google have been doing this for a decade, with many generations of chips. This suggests the obvious, which is that it's super

        • Within a few years a clear winner in the LLM market will most likely rise up. They probably won't be as good as several of their competitors but they'll have market advantages like more money and maybe a clear win in one market they can leverage to make wins in another market.

          If we happen to have strong antitrust law enforcement that won't happen because they won't get away with leveraging their market dominance but I'm skeptical whether we're going to have that in the coming years.

          So if that happen
    • by gweihir ( 88907 )

      It is. But Nvidia was never into sound business practices or solid engineering. Not unlike Intel.

    • by Luckyo ( 1726890 )

      The problem with highly targeted hardware accelerators is that when software makes the next developmental step, they are all collectively left behind. So it only makes sense to target them at relatively static workloads: static targets like Bitcoin and Ethereum mining, yes; rapidly developing AI models, not so much.

      There's a place for dedicated hardware accelerators for AI too, but that's mostly for specific models that are mostly done, and where limiting development to "this specific accelerator" isn't that big of a

  • The tech industry needs more Xopolies like a hole in the chip.

  • Soooo deliberately anticompetitive behavior? That's hilarious. They basically have to tell their own shareholders that it's impossible for folks to use their products outside of their walled garden while they're simultaneously fighting off an antitrust probe based on this kind of practice.

    I also want to point out that marketing is a Ponzi scheme that seems to be failing at the highest levels. The concept of a "brand moat" is my favorite marketing scam because it's blatantly obvious how bad of an idea it is.
    • Soooo deliberately anticompetitive behavior? That's hilarious. They basically have to tell their own shareholders that it's impossible for folks to use their products outside of their walled garden while they're simultaneously fighting off an antitrust probe based on this kind of practice.

      No, that's Apple, who goes out of their way to create an artificial incompatibility moat when one doesn't naturally exist. In contrast, Nvidia's moat arises from Nvidia ignoring competitors and optimizing for their own systems. The moat exists because the design and optimization is complex and hard to replicate. It's possible to use Nvidia hardware with non-Nvidia software, but it likely won't be as performant and might require some tweaks. It remains to be seen whether the courts will view this as ille

      • by gtall ( 79522 )

        "performant"...dead giveaway you are some sort of marketdroid.

      • I think of it this way. Nvidia was cultivating the field for AI for years before the current AI craze.

        AI is finally a thing (well, it's the current generation of AI is finally a thing), and they're maximally poised to take advantage of it because they spent years investing in building an ecosystem to use what they are producing, plus the ecosystem to build and distribute said product. And they have a pipeline full of better products in the making.

        It's like all the people calling SpaceX a monopoly because

    • by CAIMLAS ( 41445 )

      It's not explicitly anti-competitive practice. It's their own technology, which is open and accessible. You don't need to pay a license to make your CUDA applications compatible.

      It's also not a ponzi scheme. You're simply using that term wrong.

      In this case, explain how AMD is able to run CUDA directly on their own chips? That's not anti-competitive at all. In fact, Nvidia just announced they're supporting it directly.

      https://www.reddit.com/r/Amd/comments/1e5d66p/nvidia_cuda_can_now_directly_run_on_amd_gpus/
