Microsoft AI Technology

Microsoft Readies AI Chip as Machine Learning Costs Surge (theinformation.com)

After placing an early bet on OpenAI, the creator of ChatGPT, Microsoft has another secret weapon in its arsenal: its own artificial intelligence chip for powering the large language models responsible for understanding and generating humanlike language. The Information: The software giant has been developing the chip, internally code-named Athena, since as early as 2019, according to two people with direct knowledge of the project. The chips are already available to a small group of Microsoft and OpenAI employees, who are testing the technology, one of them said. Microsoft is hoping the chip will perform better than what it currently buys from other vendors, saving it time and money on its costly AI efforts. Other prominent tech companies, including Amazon, Google and Facebook, also make their own in-house chips for AI. The chips -- which are designed for training software such as large language models, as well as for inference, when the models use the intelligence they acquire in training to respond to new data -- could also relieve a shortage of the specialized computers that can handle the processing needed for AI software. That shortage, which reflects the fact that essentially just one company, Nvidia, makes such chips, is felt across the tech industry. It has forced Microsoft to ration its computers for some internal teams, The Information has reported.
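As a rough illustration of the training/inference distinction the summary draws, here is a minimal, hypothetical sketch in plain NumPy (a toy linear model, nothing to do with Athena or OpenAI's actual stack): training loops over the data many times, computing gradients and updating weights, while inference is a single forward pass over new data.

    # Minimal sketch (not Microsoft's or OpenAI's stack): the same model weights
    # are used in two very different regimes. Training repeatedly runs forward
    # AND backward passes and updates the weights; inference is a single forward
    # pass on new data. The heavy, chip-hungry part is training.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 8))             # toy "training data"
    y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=256)

    w = np.zeros(8)                           # model weights

    # --- Training: many passes, gradient computation, weight updates ---
    for _ in range(500):
        pred = X @ w                          # forward pass
        grad = X.T @ (pred - y) / len(y)      # backward pass (gradient)
        w -= 0.1 * grad                       # weight update

    # --- Inference: one forward pass on unseen data, no gradients ---
    x_new = rng.normal(size=8)
    print("prediction:", x_new @ w)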
Comments Filter:
  • Perhaps someone a little closer to the AI chip/software industry could comment on this. Are Nvidia's chips too general purpose and therefore not as efficient for AI? Or are these AI training systems demanding a chip specific to the particular algorithms of that training program? In other words, will all of these chips be custom moving forward, at least for any big players?

    Certainly, Nvidia's margins provide reason enough for anyone purchasing enough of their chips to at least consider making their own.

    • https://en.m.wikipedia.org/wik... [wikipedia.org] It's just optimized for a different use case.
    • I am betting it's just about cost; the A100 and H100 are going to be so damn expensive when you are buying thousands and thousands of racks of them. MS is depending on Nvidia for so much of its business going forward, so this is almost certainly a well-timed cost play.
    • Perhaps someone a little closer to the AI chip/software industry could comment on this. Are Nvidia's chips too general purpose and therefore not as efficient for AI? Or are these AI training systems demanding a chip specific to the particular algorithms of that training program? In other words, will all of these chips be custom moving forward, at least for any big players?

      Certainly, Nvidia's margins provide reason enough for anyone purchasing enough of their chips to at least consider making their own.

      As a general rule, ASICs are more efficient than general-purpose processors. So, why are general-purpose processors so universal? The answer is that ASICs are great when the task never changes, like with an embedded system. However, in a moderately evolving software environment, ASICs quickly become either inefficient or just incapable of new functionality. And that's one of the main challenges with AI algorithms, which are evolving much faster than most software. Trying to design an AI processor with ASIC-level efficiency that can still keep up with those changes is the hard part.

    • by ceoyoyo ( 59147 )

      GPUs are pretty good at doing multiply-add operations, but they do a lot of other things too. You can make smaller, more energy-efficient, cheaper chips if you make them for a specific purpose. (See the sketch after the comments.)

  • ...language for AI chips? It's probably best to settle on a standard early if we wish to propel the industry.

  • The fact that they built a huge cluster of Nvidia GPUs for OpenAI indicates that they're nowhere near ready to deploy this. This transition will likely happen over a few years, and they'll have to keep cranking out faster parts to keep up with Nvidia to justify the spending.
  • by awwshit ( 6214476 ) on Tuesday April 18, 2023 @02:22PM (#63459722)

    Seems like custom silicon would only be needed at scale, for cost savings. Why not use an FPGA?

  • Can someone post the full article? It's behind a paywall.
  • After putting a computer in every home, Microsoft missed:

    - The Internet big bang
    - The mobile big bang
    - The cloud big bang (well, here it seems to be faring a bit better)

    So... this is it... the third time's a charm... I mean, the fourth...
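
To make the multiply-add point from the GPU comment above concrete, here is a hypothetical back-of-the-envelope sketch in NumPy (the layer sizes are made up, not taken from the article or the comments): a single dense layer is nothing but an enormous pile of multiply-accumulate operations, which is precisely the primitive a purpose-built AI chip can hard-wire instead of carrying all of a GPU's general-purpose machinery.

    # Back-of-the-envelope sketch: a dense layer y = x @ W reduces to
    # multiply-accumulate (MAC) operations, the workload GPUs and custom AI
    # chips are built around. Sizes below are illustrative only.
    import numpy as np

    batch, d_in, d_out = 64, 4096, 4096   # hypothetical transformer-sized layer
    x = np.random.rand(batch, d_in).astype(np.float32)
    W = np.random.rand(d_in, d_out).astype(np.float32)

    y = x @ W                              # what the hardware actually executes
    macs = batch * d_in * d_out            # one multiply + one add per (i, k, j)
    print(f"{macs:,} multiply-adds for a single layer, single forward pass")

    # The inner loop a specialized chip hard-wires, written out explicitly:
    def matmul_naive(a, b):
        out = np.zeros((a.shape[0], b.shape[1]), dtype=a.dtype)
        for i in range(a.shape[0]):
            for j in range(b.shape[1]):
                acc = 0.0
                for k in range(a.shape[1]):
                    acc += a[i, k] * b[k, j]   # the multiply-add
                out[i, j] = acc
        return out

    # Sanity check on small matrices (the big sizes above would be far too
    # slow for the pure-Python loop):
    a, b = np.random.rand(4, 3), np.random.rand(3, 5)
    assert np.allclose(matmul_naive(a, b), a @ b)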

