Microsoft Readies AI Chip as Machine Learning Costs Surge (theinformation.com)
After placing an early bet on OpenAI, the creator of ChatGPT, Microsoft has another secret weapon in its arsenal: its own artificial intelligence chip for powering the large language models that understand and generate humanlike language. The Information: The software giant has been developing the chip, internally code-named Athena, since as early as 2019, according to two people with direct knowledge of the project. The chips are already available to a small group of Microsoft and OpenAI employees, who are testing the technology, one of them said. Microsoft hopes the chip will perform better than what it currently buys from other vendors, saving time and money on its costly AI efforts. Other prominent tech companies, including Amazon, Google and Facebook, also make their own in-house AI chips.

The chips are designed primarily for training software such as large language models, while also supporting inference, the phase in which a trained model applies what it learned in training to respond to new data. They could also relieve a shortage of the specialized computers that can handle the processing AI software needs. That shortage, which reflects the fact that essentially one company, Nvidia, makes such chips, is felt across the tech industry; it has forced Microsoft to ration its computers for some internal teams, The Information has reported.
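(For readers newer to the training/inference distinction the summary draws, here is a minimal sketch in plain NumPy. The toy linear model and all names are hypothetical, purely to show which phase chips like the reported Athena would mostly accelerate; it is not a description of Microsoft's actual workloads.)

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(64, 8))           # toy training inputs
    y = X @ rng.normal(size=8)             # toy targets from a hidden linear rule
    w = np.zeros(8)                        # model weights to be learned

    # Training: many gradient-descent passes over the data. This repeated,
    # compute-heavy phase is what dedicated AI training chips target.
    for _ in range(200):
        grad = X.T @ (X @ w - y) / len(X)  # gradient of mean squared error
        w -= 0.1 * grad                    # one update step

    # Inference: a single forward pass applying the frozen weights to new data.
    x_new = rng.normal(size=8)
    print("prediction:", float(x_new @ w))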
Will AI chips all be custom? (Score:2)
Perhaps someone a little closer to the AI chip/software industry could comment on this. Are Nvidia's chips too general purpose and therefore not as efficient for AI? Or are these AI training systems demanding a chip specific to the particular algorithms of that training program? In other words, will all of these chips be custom moving forward, at least for any big players?
Certainly, Nvidia's margins provide reason enough for anyone purchasing enough of their chips to at least consider making their own.
General or flexible? (Score:3)
As a general rule, ASICs are more efficient than general-purpose processors. So why are general-purpose processors so universal? Because ASICs are great when the task never changes, as in an embedded system; in a moderately evolving software environment, they quickly become either inefficient or simply incapable of new functionality. And that's one of the main challenges with AI algorithms, which are evolving much faster than most software. Trying to design an AI processor with ASIC-level efficiency that can still keep pace with those changes is the hard part.
Re: (Score:2)
GPUs are pretty good at doing multiply-add operations, but they do a lot of other things too. You can make smaller, more energy-efficient, cheaper chips if you build them for a specific purpose.
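(To make the parent's point concrete, here is a minimal sketch in plain NumPy, illustrative only: a dense neural-network layer reduces entirely to multiply-accumulate operations, the one primitive that both GPUs and dedicated AI chips optimize. The function and variable names are hypothetical.)

    import numpy as np

    def dense_layer(x, W, b):
        """y[j] = b[j] + sum_i x[i] * W[i][j] -- nothing but multiply-adds."""
        out = b.copy()
        for j in range(W.shape[1]):
            acc = 0.0
            for i in range(W.shape[0]):
                acc += x[i] * W[i, j]      # one multiply-add per weight
            out[j] += acc
        return out

    rng = np.random.default_rng(0)
    x, W, b = rng.normal(size=4), rng.normal(size=(4, 3)), np.zeros(3)
    assert np.allclose(dense_layer(x, W, b), x @ W + b)  # matches the vectorized form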
Is there the equivalent of x86 & ARM machine (Score:1)
...language for AI chips? It's probably best to settle on a standard early if we wish to propel the industry.
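(A hedged aside, not from the comment: there is no agreed ISA-level standard yet, but one existing candidate for that kind of portability layer is an exchange format such as ONNX, which vendor toolchains then compile for their own silicon. A minimal sketch, assuming PyTorch and its bundled ONNX exporter, with a hypothetical toy model:)

    import torch

    model = torch.nn.Linear(8, 2)          # toy model standing in for a real network
    example = torch.randn(1, 8)            # example input to trace the graph

    # Export a portable ONNX graph; vendor runtimes and compilers
    # (TensorRT, OpenVINO, etc.) consume this same file for their hardware.
    torch.onnx.export(model, (example,), "toy_model.onnx")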
Not ready (Score:1)
custom silicon? (Score:3)
Seems like custom silicon would only be needed at scale, for cost savings. Why not use an FPGA?
Full article? This is behind a paywall (Score:2)
Microsoft does not want to miss this one (Score:1)
- The Internet big bang
- The mobile big bang
- The cloud big bang (well, there it seems to have fared better)
So... this is it... the third time's a charm... I mean, the fourth...