AI Technology

AI's Future and Nvidia's Fortunes Ride on the Race To Pack More Chips Into One Place (yahoo.com)

Leading technology companies are dramatically expanding their AI capabilities by building multibillion-dollar "super clusters" packed with unprecedented numbers of Nvidia's AI processors. Elon Musk's xAI recently constructed Colossus, a supercomputer containing 100,000 Nvidia Hopper chips, while Meta CEO Mark Zuckerberg claims his company operates an even larger system for training advanced AI models. The push toward massive chip clusters has helped drive Nvidia's quarterly revenue from $7 billion to over $35 billion in two years, making it the world's most valuable public company.

WSJ adds: Nvidia Chief Executive Jensen Huang said in a call with analysts following its earnings Wednesday that there was still plenty of room for so-called AI foundation models to improve with larger-scale computing setups. He predicted continued investment as the company transitions to its next-generation AI chips, called Blackwell, which are several times as powerful as its current chips.

Huang said that while the biggest clusters for training for giant AI models now top out at around 100,000 of Nvidia's current chips, "the next generation starts at around 100,000 Blackwells. And so that gives you a sense of where the industry is moving."
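
For a rough sense of scale, here is a back-of-envelope sketch in Java built from the figures quoted above. The per-chip throughput, the Blackwell speedup factor, the utilization rate, and the training budget are illustrative assumptions, not vendor specifications.

    // Back-of-envelope arithmetic for the cluster sizes quoted in the story.
    // Per-chip throughput, the Blackwell multiplier, utilization, and the
    // training budget are ILLUSTRATIVE assumptions, not vendor figures.
    public class ClusterEnvelope {
        public static void main(String[] args) {
            long chips = 100_000L;            // cluster size cited for Colossus
            double hopperFlops = 1.0e15;      // assume ~1 petaFLOP/s per Hopper-class chip (dense, low precision)
            double blackwellFactor = 3.0;     // "several times as powerful" per the article; 3x is a guess

            double hopperCluster = chips * hopperFlops;
            double blackwellCluster = hopperCluster * blackwellFactor;
            System.out.printf("Hopper cluster:    ~%.1e FLOP/s%n", hopperCluster);
            System.out.printf("Blackwell cluster: ~%.1e FLOP/s%n", blackwellCluster);

            // Time to spend an assumed 1e25 FLOP training budget at 40% sustained utilization.
            double utilization = 0.4;
            double trainingFlop = 1.0e25;
            double days = trainingFlop / (hopperCluster * utilization) / 86_400.0;
            System.out.printf("Days for 1e25 FLOP on the Hopper cluster at 40%%: ~%.0f%n", days);
        }
    }

The absolute numbers matter less than the ratio: the quote above is about moving the baseline cluster from 100,000 Hopper-class chips to 100,000 chips that are each several times faster.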



Comments:
  • by gweihir ( 88907 ) on Monday November 25, 2024 @10:30AM (#64970591)

    At least the LLM variant does not go beyond somewhat better search, generation of crappy text and images, and crappy code, with the occasional hallucination thrown in. I seriously doubt that will be enough to justify the cost. Sure, eventually, the tech may become cheap enough and then it may play a minor role, but that time has not arrived.

    • I disagree in that I don't think AI will go away - and yet I still don't think Nvidia can continue like this. It's like the 1990s fiber buildout for the Internet. The Internet didn't fade away, but it turns out only so much infrastructure was needed to make it go.

      See also: Sun Microsystems - "The network is the computer!" Turns out they were right. Didn't help them though.

      Self-driving cars might be a wildcard since it has to be done locally (inference, not training) but I still doubt there will be m

    • At least the LLM variant does not go beyond somewhat better search, generation of crappy text and images, and crappy code, with the occasional hallucination thrown in. I seriously doubt that will be enough to justify the cost. Sure, eventually, the tech may become cheap enough and then it may play a minor role, but that time has not arrived.

      The research in AI is still evolving. Future progress will depend most on (1) the invention of new and different AI models and architectures and (2) the discovery of new use cases. Sure, simply scaling out the number of chips and memory with the same architectures is likely to yield, at best, incremental gains. Transformers were invented a few years ago. Why do we think that transformers are the very last possible significant invention? And all this talk about AGI. That's what journalists, scifi writers, and non-in

      • by gweihir ( 88907 )

        You are aware that AI research has been going on for about 70 years, right? There is _no_ low-hanging fruit left.

      • Do you know exactly how old artificial neural networks are? Artificial neural networks were first theorized in the early 40s. The first practical applications (image and speech recognition) were implemented in the 60s (yes...more than 60 years ago, let that sink in). Look up SHRDLU to get an idea of what those ANNs were able to do with computers from 60 years ago. What OpenAI and other corps are doing now is just deploying extremely large models on hardware that has enormous quantities of memory and CPU cor
        • by gweihir ( 88907 )

          That nicely sums it up. We are currently seeing a straw fire from scaling up the hardware to an incredible degree, with some tricks to make computation more efficient. The actual mechanisms are the very old ones and subject to the same fundamental limitations. Not even "hallucinations" are new. IBM Watson had them 13 years ago, which is why a project to have it design medical treatment plans ultimately failed.

    • Costs are 20% of what they were a year ago while the models are getting smarter. It's already worth it for me to use every day. I only hear your perspective from luddites who don't actually use LLMs and don't understand the capabilities.
      • by gweihir ( 88907 )

        I only hear your perspective from luddites who don't actually use LLMs and don't understand the capabilities.

        Then you are not listening. No surprise, really.

  • by Somervillain ( 4719341 ) on Monday November 25, 2024 @10:38AM (#64970609)
    Is lack of computing capability keeping the GenAIs from achieving the next level of progress?...or has their potential been greatly oversold? (sincere question for the group) I am personally a GenAI skeptic, but open minded that maybe the future will prove me wrong. It seems like Generative AI is a complete crapshoot as to whether or not it can give a correct answer...even on the most popular models and most popular questions.

    Logically, if only time and computing power were limiting it, you'd assume Generative AI would be ROCK SOLID in a few simple and common areas and more scattershot everywhere else. For everything I've tried it on, both easy and hard, I saw at least one failure per question.

    I basically don't feel like I can trust any AI I've seen. Even the IntelliJ one...often suggests code that doesn't even compile...or resemble working code...in things like the core IO libs. Not that JetBrains is the industry gold standard, but you'd think if your domain was a single core JDK API, all you do is train on Java, and there's a fuckton of open source Java in the world using that core API...you'd at least be able to figure out a String can't be passed where it's expecting an int or a long. (A minimal sketch of that kind of mismatch follows this thread.)
    • If AI can't advance to something magical soon, at least sufficient to recoup the investments, there could be some big impairments. AI is bringing in some cash, but not at the speed it's consuming it.
    • By itself I don't think it will solve key gaps even if more horsepower is added, but find a way to integrate it with logic engines and knowledge bases, say the Cyc Project, and then it can attack problems from multiple angles that pattern-matching alone just can't.
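
    A minimal, hypothetical Java sketch of the kind of mismatch described above: an assistant-style suggestion that passes a String where a core java.io method expects a numeric argument. The file name and the suggested call are invented for illustration.

        import java.io.FileInputStream;
        import java.io.IOException;

        // Hypothetical illustration only: the commented-out line is the sort of
        // suggestion that fails to compile because InputStream.skip(long) takes a long.
        public class SkipExample {
            public static void main(String[] args) throws IOException {
                try (FileInputStream in = new FileInputStream("data.bin")) { // "data.bin" is a made-up file name
                    // in.skip("1024");  // error: incompatible types: String cannot be converted to long
                    in.skip(1024L);      // the compiling version: pass a long, not a String
                    System.out.println("next byte: " + in.read());
                }
            }
        }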

  • ...CEOs and investors understand that AI is a long-term research project with a significant chance of failure and little chance of short-term profit

  • Not bad for something that plays Quake without the software renderer.

  • In the early days of the computer industry, there were the big computers and the minicomputers. Everything was centralized, and you had dumb terminals to allow people to make use of the computers. Then the age of the PC hit, and suddenly you had things that would work on PCs, and then you had networking to let individual PCs talk to each other or to a server. The idea of client-server has always made sense for those workloads that can't be handled by less powerful machines. We also have clusters,

  • I was lured out of retirement to build these monsters.

    Current capabilities are not yet defined. Until folks know the ultimate performance envelope, any application or lateral move outside of core AI can be obsoleted almost instantly.

    You'll see this continue until that leveling happens, or the AI starts to improve itself and there's a clear correlation between capabilities and power consumption.

    Until then, it's wide open throttle. Most people either are having a very hard time accepting they're going to be r

    • ... any application or lateral move outside of core AI can be obsoleted almost instantly.

      I'm considering the context of that claim. Right now, LLMs support very good search and summary. They are useful for creating prototypes, but the prototypes need careful review. They are currently untrustworthy for production-quality data/code/information.

      I guess the only forward path is for them to improve in quality until they are good at production. (The economic incentives for companies are too great NOT to i

  • Packing more chips is how I got obese. [slashdot.org]
