
Microsoft Acquires Twice as Many Nvidia AI Chips as Tech Rivals (ft.com)

Microsoft bought twice as many of Nvidia's flagship chips as any of its largest rivals in the US and China this year, as OpenAI's biggest investor accelerated its investment in artificial intelligence infrastructure. From a report: Analysts at Omdia, a technology consultancy, estimate that Microsoft bought 485,000 of Nvidia's "Hopper" chips this year. That put Microsoft far ahead of Nvidia's next-biggest US customer, Meta, which bought 224,000 Hopper chips, as well as its cloud computing rivals Amazon and Google.

With demand outstripping supply of Nvidia's most advanced graphics processing units for much of the past two years, Microsoft's chip hoard has given it an edge in the race to build the next generation of AI systems. This year, Big Tech companies have spent tens of billions of dollars on data centres running Nvidia's latest chips, which have become the hottest commodity in Silicon Valley since the debut of ChatGPT two years ago kick-started an unprecedented surge of investment in AI.


Comments Filter:
  • by alvinrod ( 889928 ) on Wednesday December 18, 2024 @10:18AM (#65022225)
    Are these chips doing anything useful, or are they just building a larger language model that will still completely flub any question it hasn't seen before, even if it's something a ten-year-old could work through?

    If we're not already at the point of diminishing returns for the current approach, we'll be there soon. Though I will admit the generative AI programs that let anyone shitpost on Twitter, and not just people with rudimentary Photoshop skills, are certainly nice. Perhaps not worth the tens of billions of dollars of investment, but appreciated all the same.
    • Current multi-modal models are pretty fucking incredible.
      You should give em a shot.
      If you've got a GPU that can do it- LLaVa 1.6 32B 8b and Qwen 2.5 Coder 32B FP32 are available on Hugging Face. With a little prompt engineering, you can get them to write code to draw a picture you give it in any language you can imagine. A picture of my boss rendered in pure HTML is pretty amusing and impressive. (A rough sketch of the idea is at the end of this comment.)

      As for the flubbing of questions- I think that's probably a misuse of the technology, really. Don't ask these things for f
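
      A minimal sketch of the code-generation half of that workflow, assuming the Hugging Face transformers package (the picture-input side needs a multimodal model like LLaVA and is omitted; the 7B model ID below is a lighter stand-in for the 32B one named above):

        # Hedged sketch: ask a local code model to write drawing code.
        # Assumes transformers + torch installed and enough VRAM for the weights.
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "Qwen/Qwen2.5-Coder-7B-Instruct"  # smaller sibling of the 32B model
        tok = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(
            model_id, torch_dtype="auto", device_map="auto")

        messages = [{"role": "user",
                     "content": "Write one self-contained HTML file that draws "
                                "a red five-pointed star on a <canvas>."}]
        prompt = tok.apply_chat_template(
            messages, tokenize=False, add_generation_prompt=True)
        inputs = tok(prompt, return_tensors="pt").to(model.device)
        out = model.generate(**inputs, max_new_tokens=1024)
        # Decode only the newly generated tokens, not the echoed prompt.
        print(tok.decode(out[0][inputs["input_ids"].shape[1]:],
                         skip_special_tokens=True))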
      • With a little prompt engineering, you can get them to write code to draw a picture you give it in any language you can imagine.

        There's a market for this? No wonder I never understood AI.

        • That exactly? Of course not- that was an example of the first thing I accomplished with local LLMs that convinced me these things had serious utility beyond just being things to ask questions of that give you answers with randomly distributed anti-facts in them.

          You could also give it a picture of a graph and ask it to produce a spreadsheet with its values. And it will. And they'll be about as correct as you could guess from that same image. (A rough sketch of this is at the end of this comment.)

          The possibilities are pretty endless when it comes to data processing.

          One neat one I
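
          A minimal sketch of that chart-to-spreadsheet idea, using OpenAI's hosted vision API rather than a local model for brevity. Assumes the openai Python package, an OPENAI_API_KEY in the environment, and "chart.png" as a placeholder filename:

            # Hedged sketch: ask a vision model to transcribe a chart into CSV.
            import base64
            from openai import OpenAI

            client = OpenAI()  # reads OPENAI_API_KEY from the environment
            with open("chart.png", "rb") as f:
                b64 = base64.b64encode(f.read()).decode()

            resp = client.chat.completions.create(
                model="gpt-4o",
                messages=[{"role": "user", "content": [
                    {"type": "text",
                     "text": "Read the data points off this chart and output "
                             "them as CSV with a header row. CSV only."},
                    {"type": "image_url",
                     "image_url": {"url": "data:image/png;base64," + b64}},
                ]}],
            )
            print(resp.choices[0].message.content)  # verify against the chart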
          • Sorry! I was poking fun, both at the AI hype and my own ignorance.

            I've seen some really interesting applications for AI, like translation and more natural-sounding speech synthesis.
            There is also a lot of stuff going on that smells like a solution searching for a problem.

            For data processing, I'm not sure I trust it unless I can verify the results or the methodology. Since AI systems are a black box, I can't really examine the methodology used to arrive at the result.

              Sorry! I was poking fun, both at the AI hype and my own ignorance.

              lol- fair.

              There is also a lot of stuff going on that smells like a solution searching for a problem.

              I'd say my own personal use of it is exactly that....
              However, in that toying around, I have found problems to which it was a solution.
              Code translation, for example.
              Or one case where I wanted it to make me a certain shape on an HTML5 <canvas> in JavaScript... stupid things like that. It did legitimately save time.
              And once, I did feed it a chart that it converted into a CSV that I was able to process. I re-generated a chart from the resulting CSV just to make sure it didn't feed me bullsh
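
              A tiny sanity-check sketch of that round trip, assuming pandas and matplotlib ("extracted.csv" is a placeholder name):

                # Re-plot the model-produced CSV and eyeball it against the original.
                import pandas as pd
                import matplotlib.pyplot as plt

                df = pd.read_csv("extracted.csv")
                df.plot(x=df.columns[0])   # first column as the x axis
                plt.savefig("replot.png")  # compare this image with the source chart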

      • I tried to get Perplexity to write an XMMS plugin. It gave code, but it didn't work; it was close but took some debugging. Every time I asked it, it gave a completely different approach (good ol' non-determinism). Once it made extensive use of static variables, holding 1's and 0's for if-statement boolean expressions, which I thought was kind of amateur (not that I'm any kind of programming expert). In all fairness it did give a good starting point. IMO it's good for experts; anyone else, not so much, as you h
        • Ya- they're non-deterministic. They're fed random numbers.
          I get why it's done, but I wish they'd give you the seed and allow you to pin it. (A sketch of that is at the end of this comment.)

          Which LLM were you using via Perplexity? In-house? GPT3.5?
          GPT4o and Llama 3.3 are vastly better at creating code, both in "doing what you want" and in "not having mistakes".

          Also, ya, amateur-ish code is common when you're using code generation.
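
          A minimal sketch of pinning the seed with a local model, assuming the llama-cpp-python package and a local GGUF file (the path below is a placeholder):

            # With a fixed seed, the same prompt and sampling settings
            # reproduce the same completion run after run.
            from llama_cpp import Llama

            llm = Llama(model_path="models/qwen2.5-coder-7b-instruct.gguf",
                        seed=42)
            out = llm("Write a C function that reverses a string in place.",
                      max_tokens=256, temperature=0.8)
            print(out["choices"][0]["text"])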
  • by Rosco P. Coltrane ( 209368 ) on Wednesday December 18, 2024 @11:55AM (#65022475)

    when the AI bubble finally bursts.

  • You have to understand where MS is coming from.
    They completely underestimated the impact of the internet on PCs. It was nothing but a curiosity to them at the time, and they discounted it for years, putting them behind.

    With AI, they don't want that to happen again. At almost any cost.
    Of course, there is a huge chance that it's all hype. If so, they are pretty well screwed.

    'World War 3.0' is a good book about that period.

  • And they (Google's TPUs) compare on par with Nvidia according to this article (graph halfway down the page): https://www.datacamp.com/blog/... [datacamp.com] The only drawback is you have to purchase cloud compute to access them. I never understood why Google does not sell them direct to consumers - guessing they are probably a pain to configure.
