Sundar Pichai Says Google and Nvidia Will Still Be Working Together 10 Years From Now (cnbc.com)
Sundar Pichai said Google's longstanding relationship with chipmaker Nvidia isn't going to change any time soon -- in fact, he expects it to continue over the next 10 years. From a report: In an interview Wired published Monday, the Google CEO said the company has worked "deeply" with Nvidia on Android and other initiatives for over a decade, adding that Nvidia has a "strong track record" of AI innovation. "Look, the semiconductor industry is a very dynamic, cooperative industry," Pichai said. "It's an industry that needs deep, long-term R&D and investments. I feel comfortable about our relationship with Nvidia and that we are going to be working closely with them 10 years from now."
Soooooo (Score:3)
Very affordable GPUs (Score:3)
A system with 8x H100s is only $200K.
Re: (Score:2)
But can it run Crysis?
Two peas in a pod (Score:2)
Sure, why not. Both companies treat their user base the same. Their goals are the same: greed.
Translation: (Score:5, Interesting)
Here's the translation of all this chummy good will: Google learned that making a processor that's worth a damn while still having great power efficiency isn't as easy as they were led to believe. Don't expect their "Tensor" to overtake Nvidia in literally anything, any time soon.
Re: (Score:2)
1. The fabric matters: quad-channel LPDDR5 memory on a Cloudripper versus eight-channel LPDDR5X on a GH200, plus NVLink to connect many of them together at a low level.
2. Eight cores in an unusual big/medium/little configuration, compared to 72 cores all from the same family as the aforementioned "big" cores.
3. 480 GB of RAM shared by GPU and CPU in a GH200. I couldn't find Cloudripper's max, but it's more than 12 GiB (probably 64, and less than 256).
4. Ready-to-run products like the DGX GH200.
Re: (Score:2)
The shared memory doesn't do a whole lot for you except make programming a little easier. Off-GPU bandwidth is so tiny that shared memory and message passing are equivalent performance-wise.
Large models are handled through pipelining.
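To make the pipelining point concrete, here's a toy sketch in plain Python (all names hypothetical, nothing GPU-specific): a model too big for one device is split into stages, and micro-batches are streamed through the stages so each device only ever holds its own slice of the weights.

```python
def stage_a(x):
    # First half of the model, e.g. layers 0..N/2, living on device 0.
    return [v * 2 for v in x]

def stage_b(x):
    # Second half, e.g. layers N/2..N, living on device 1.
    return [v + 1 for v in x]

def pipeline(micro_batches, stages):
    # Stream each micro-batch through the stages in order. On real
    # hardware the stages run concurrently on *different* micro-batches,
    # overlapping compute with the (slow) off-device transfers; here we
    # only show the dataflow, not the scheduling.
    outputs = []
    for batch in micro_batches:
        for stage in stages:
            batch = stage(batch)  # hand off activations to the next stage
        outputs.append(batch)
    return outputs

print(pipeline([[1, 2], [3, 4]], [stage_a, stage_b]))
# → [[3, 5], [7, 9]]
```

The key property is that only the activations cross the device boundary between stages, which is why the parent's point about limited off-GPU bandwidth applies equally whether that hand-off is a shared-memory read or an explicit message.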
Re: (Score:2)
Message passing between CUDA kernels? Yeah, that's not how it's done.
Re: (Score:2)
One extra on top of that, "The moment we hire enough of their engineers or figure out how to do it ourselves, we're dumping those mother fuckers! We are embarrassed and humiliated we can't already build our own processors and that we have to publicly kiss ass. This is sooooo wrong!"
Re: (Score:2)
Or just maybe ... (Score:2)
Or not (Score:2)
This statement and $1 (Score:2)