
Behind the Plot To Break Nvidia's Grip on AI By Targeting Software (reuters.com)

An anonymous reader shares a report: Nvidia earned its $2.2 trillion market cap by producing AI chips that have become the lifeblood powering the new era of generative AI for developers from startups to Microsoft, OpenAI and Google parent Alphabet. Almost as important to its hardware is the company's nearly 20 years' worth of computer code, which helps make competition with the company nearly impossible. More than 4 million global developers rely on Nvidia's CUDA software platform to build AI and other apps. Now a coalition of tech companies that includes Qualcomm, Google and Intel plans to loosen Nvidia's chokehold by going after the chip giant's secret weapon: the software that keeps developers tied to Nvidia chips.

They are part of an expanding group of financiers and companies hacking away at Nvidia's dominance in AI. "We're actually showing developers how you migrate out from an Nvidia platform," Vinesh Sukumar, Qualcomm's head of AI and machine learning, said in an interview with Reuters. Starting with a piece of technology developed by Intel called OneAPI, the UXL Foundation, a consortium of tech companies, plans to build a suite of software and tools that will be able to power multiple types of AI accelerator chips, executives involved with the group told Reuters. The open-source project aims to make computer code run on any machine, regardless of what chip and hardware powers it.

"It's about specifically - in the context of machine learning frameworks - how do we create an open ecosystem, and promote productivity and choice in hardware," Google's director and chief technologist of high-performance computing, Bill Hugo, told Reuters in an interview. Google is one of the founding members of UXL and helps determine the technical direction of the project, Hugo said. UXL's technical steering committee is preparing to nail down technical specifications in the first half of this year. Engineers plan to refine the technical details to a "mature" state by the end of the year, executives said. These executives stressed the need to build a solid foundation to include contributions from multiple companies that can also be deployed on any chip or hardware.

  • Google's TPUs are amazing but they run every decision through their DEI directorate, which means that these engineers are eminently poachable.
    • Re: (Score:2, Flamebait)

      by Eunomion ( 8640039 )
      "The beating of a Russian spambot. Here's an idea for a game show: We pile all the Russian spambots spamming irrelevant bullshit about 'DEI' into a pit full of starving dogs..."
      • The spambot's not wrong in this case. At some point somebody did an image search for a gorilla and got a Black person, and everybody at Google shit their pants.

        People speak of how Sears was ideally positioned to become what Amazon is today, and I think the same analogy applies here. Google should have become what OpenAI is now, but they held themselves back out of a combination of irrational fear and plain old laziness.

        • LOL! Your claim is that the company that made a point of publicly removing the "Don't be evil" motto from their offices, for no reason whatsoever and with no possible gain, was so afraid of public opinion that...

          Ugh, forget it. You're at very best a deluded racist idiot, and probably the version who thinks everyone else is even more of one than yourself.
          • You almost had to complete a thought but luckily were able to avoid doing so by using your own trigger word "racist". So far all you've done is claim russian bots and racists to avoid cognitive dissonance and signal your tribal allegiance. You haven't actually formed a coherent argument.
            • Your story is that the word "racist" is too simplistic to describe racists? Maybe we need to add some euphemisms to protect their feelings or elaborate the complexity of their being? I.e., the thing that drives them batshit if literally anyone else says about themselves?

              You turd.
              • Again you just use your own trigger word to soothe your own cognitive dissonance and signal tribal allegiance. You seem to want to only shut down conversations that make you uncomfortable in your own beliefs.
              • I'll help you with a leading question. How does being against the institutionalization of racism make someone a racist exactly?
                • When your ilk get around to admitting to yourselves that slavery was bad, I'll get around to judging myself by Star Trek standards. You schizoid fuck.
                  • You can't coherently address the topic and just want to insult anyone you perceive as having a differing opinion from you to signal your holy in-group status. What's funny is the hypocrisy of your insult is obvious to any reasonable person not a part of your fringe religion and only serves to elevate those you are trying to bring down. Keep it up!
              • You're not helping.

                • Yes, I am helping. The world is damn tired of tiptoeing around the sensitivities of childish minds whose entire worldview consists of mocking the very concept of sensitivity only when the subject is other people.

                  The moment the shoe's on the other foot, they become a throbbing forehead vein of insecurities and special pleading for whatever they imagine makes them a victim. Meanwhile believing every hate-fueled conspiracy theory against every powerless group who dares to enjoy their sunlight without ask
        • At some point somebody did an image search for a gorilla and got a Black person, and everybody at Google shit their pants.

          That was Microsoft, not Google: https://money.cnn.com/1999/06/... [cnn.com]

      • Was that satire or do you actually think every opinion different from yours is big bad Russia trying to brainwash you?
        • I don't think a piece of shit exists to offend my nose. But that hardly means I'll pretend it's something other than a piece of shit.
          • To complete the analogy if you smelled shit on your fingers after wiping your own ass you'd seemingly blame it on Russia.
    • I'm sure Intel has some great software people too, but it's a cost center generally. At Google they're a profit center.

      And Google has been focusing on AI for a LONG time now. They invented the "transformer" structure. If I had to bet on anyone non-NVIDIA to come up with an effective ML API it'd be Google.

      Though I will admit to having high hopes that AMD would eventually figure something out, so I'm 0/1 there. And Intel has been working REALLY hard to get into the GPU and accelerator markets.

      Either way,

      • by Calibax ( 151875 )

        Over the last four years we have interviewed half a dozen Intel software people for a job in my group. Not one has lasted to the last interview of the day (that's me as hiring manager). Every one of the candidates has been rejected early because of a lack of knowledge/understanding of their professed area of expertise, generally UEFI or low-level drivers/Linux internals. Post-interview comment forms completed by interviewers have generally been withering.

        It's possible that only the less competent software engine

  • by rsilvergun ( 571051 ) on Monday March 25, 2024 @01:09PM (#64343483)
    seriously, I get that CUDA is a tough nut to crack, but it's a $2 trillion market.

    AMD should be getting flooded with capital right now. But I suspect that the investors aren't all that interested since they already own Nvidia stock, and why would you compete with yourself?
    • AMD has been an also-ran for decades for good reason: their software isn't as good.

      When I was poor I bought AMD because it was cheap and I had high tolerance for buggy shit due to thin wallet.

      Once I could afford Nvidia the experience was night n day. Their shit just works.

      When AMD finally learns how to write solid software to go with matching solid hardware, they'll stand a chance and not a moment sooner.

      There is no conspiracy among the 1%. Nvidia is just better.

      • NVIDIA has a ton of simulation behind their parts. This lets them build just the right combination of stuff to power the games they know about.

        As I heard it, AMD took a more brute-force approach. Throw a lot of powerful hardware at the problem and hope it helps. Which has its costs... less profit margin. You're selling more chip area for equal or less money (depending on how you guessed).

        Which was nice for the crypto miners. They ended up benefiting from those AMD chips for a while. I stopped chasing

        • by Saffaya ( 702234 )

          Stop sucking nVidia's cock. It's not buggy games giving a black screen, it's your Samsung monitor that can't fking respect the specs.
          FreeSync is a VESA standard.
          My Samsung monitor can't even wake up from power saver mode reliably, no FreeSync involved.

        • But also AMD software and the general environment sucks massive donkey balls.

          Compare. Let's say I'm a n00b starting out in ML/AI, and not already a software engineer. Say a student, or someone working in a discipline which isn't software in general.

          Here's the process for NVidia:
          1. Get a card. Literally any card will get you started, even a 1050.
          2. pip install torch
          3. Call .cuda() on your tensors/modules and go to town.
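
          (For illustration, a minimal sketch of what step 3 looks like in practice, assuming a CUDA build of torch is installed; the toy model and batch here are made up:)

            import torch
            import torch.nn as nn

            model = nn.Linear(128, 10)        # stand-in for whatever you're actually training
            batch = torch.randn(32, 128)      # toy input batch

            if torch.cuda.is_available():     # steps 1-2 done: card present + CUDA build of torch
                model = model.cuda()          # step 3: move the module to the GPU
                batch = batch.cuda()          # ...and the tensors

            out = model(batch)                # runs on the GPU if the .cuda() calls happened
            print(out.shape, out.device)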

          What's the process for AMD?

          And the final benefit is if you do get it up and running, buck

      • NVidia had more staff dedicated to AI than all of AMD at a time when it appears AMD's focus was on CPU supremacy and beating their old rival Intel. They did succeed where they were focusing in many ways. Maybe now with a shifting focus to AI we can get some real competition. My wallet would really like to see a price war.
        • I think AMD did a good job at the same time Intel stalled out. The i14 has put Intel on top again for most loads for people who are not price sensitive which is by definition the top tiers of the market. At the same time, AMD has never been a top tier competitor to Nvidia. They compete on "good enough and a great price" in the middle and lower ends of the GPU market. The 7900xtx is a very high performing part but was never the top, is pretty thoroughly stomped by the 4090 and in most loads loses to the

          • Fair enough, AMD has never been a top-tier competitor. ATI was before AMD bought them, but had lost the battle by the time of the sale. I remember long ago having to pick between ATI and NVidia when they were at similar price points and ATI was the superior bitcoin mining card. That's around the time a friend pointed out the staffing disparity and that CUDA would be the machine learning winner in the future. He was right and not long after I was ditching my ATI cards for NVidia on the next upgrade cycle.
            • ATI... good times. Last week I was going through old cartons and tossed some ancient ATI and Matrox cards, too, along with some other junk not compatible with modern computing. There's no need for 200 watt power supplies now. Was all good stuff at the time but I needed space for my wrench set.

    • seriously, I get that CUDA is a tough nut to crack, but it's a $2 trillion market.

      AMD should be getting flooded with capital right now. But I suspect that the investors aren't all that interested since they already own Nvidia stock, and why would you compete with yourself?

      AMD has been busy changing their "CUDA Killer" architecture (and the Accelerated Graphics one too) every few years. Close To Metal, Mantle, Stream, GPUOpen, HIP, ROCm... Does any one of those ring a bell?

      AMD has been busy having knee-jerk reactions, developing new CUDA-killer software frameworks without rhyme or reason and then discarding them, without stopping to take a long hard look at what needs to be done and then doing it.

      Hope Intel has better luck with their unifying FOSS effort, an

      • NVidia's been hiring all the engineers. I gather it's a problem for NVidia because they've been paying them so much that a lot of 'em are millionaires and not interested in 90 hour work weeks anymore, but either way AMD couldn't compete.

        But like I said, AMD should be getting a *ton* of capital right now and they're just not.
    • They don't even need to crack CUDA. A tiny minority of AI developers are writing CUDA kernels, and they're getting fewer as ML stabilizes a bit (new layer types needing custom CUDA are somewhat rare now). This is doubly the case now that PyTorch does a pretty good job of that for you in a bunch of cases.

      If AMD just made their shit work well with the top few ML frameworks, i.e. PyTorch and TensorFlow, maybe JAX if they're feeling hipster, that would be good.

      Except, OMFG check out their support compared to NVidia. For
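
      (To make the framework-level point concrete: PyTorch's ROCm builds reuse the torch.cuda namespace for AMD GPUs, so in principle the same device-agnostic script runs on either vendor; how reliably it actually runs is exactly the support gap being complained about. A minimal sketch, with a made-up toy model:)

        import torch
        import torch.nn as nn

        # "cuda" here also covers AMD GPUs on a ROCm build of PyTorch
        device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

        model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1)).to(device)
        x = torch.randn(16, 64, device=device)

        name = torch.cuda.get_device_name(0) if device.type == "cuda" else "CPU fallback"
        print(name, model(x).shape)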

    • OpenML or whatever AMD's answer to CUDA is just doesn't have the market penetration and the history for people to invest.

      It took me at least twice as much work and reengineering of various AI libraries and their dependencies to get them to work on my AMD GPU. There's not a day I work on it on which I don't consider just getting a high-end NVidia card and telling AMD to F-Off.

      But AMD is so much cheaper it still makes sense at some level.
    • by ndykman ( 659315 )

      CUDA is a very thin abstraction over NVIDIA's hardware. So, getting CUDA ported over to any other architecture is possible, but it won't be as efficient. And, CUDA isn't open source, so there's no way any tweaks or fixes that AMD (or Intel) would need to work better with their GPUs would be accepted.

      There's just a lot of inertia around CUDA. It's been used for over 10 years and it just works. It's only now that there is such demand for this kind of hardware outside supercomputing that the issue of
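
      (A sketch of the "thin abstraction" point: a hand-written CUDA kernel launched from Python via CuPy's RawKernel. The grid/block/thread indexing maps almost directly onto how NVIDIA hardware schedules work, which is why a straight port to other architectures tends to lose efficiency. The kernel and sizes here are made up:)

        import cupy as cp

        saxpy_src = r"""
        extern "C" __global__
        void saxpy(const float a, const float* x, float* y, const int n) {
            // one thread per element, index derived from the launch geometry
            int i = blockDim.x * blockIdx.x + threadIdx.x;
            if (i < n) y[i] = a * x[i] + y[i];
        }
        """
        saxpy = cp.RawKernel(saxpy_src, "saxpy")

        n = 1 << 20
        x = cp.random.rand(n, dtype=cp.float32)
        y = cp.random.rand(n, dtype=cp.float32)

        threads = 256
        blocks = (n + threads - 1) // threads
        saxpy((blocks,), (threads,), (cp.float32(2.0), x, y, cp.int32(n)))  # explicit launch geometry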

  • by williamyf ( 227051 ) on Monday March 25, 2024 @01:28PM (#64343545)

    Just for example, think about the shitshow that is OpenCL v3. OpenCL was SUPER-slow but at least overall rationally developed until OpenCL 2.2... and then it went apeshit with OpenCL 3.0, where only 1.2 is mandatory (a regression) and ABSOLUTELY EVERYTHING beyond that is optional.

    Who could develop interoperable SW on top of OpenCL like that?!
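
    (To make that concrete: since OpenCL 3.0 only guarantees the 1.2 core, portable code has to probe every device at runtime for anything beyond it. A rough sketch with pyopencl; the property names are written from memory, so treat them as an assumption:)

      import pyopencl as cl

      WANTED = ["cl_khr_fp64", "cl_khr_il_program"]   # examples of optional features

      for platform in cl.get_platforms():
          for dev in platform.get_devices():
              print(dev.name, dev.version)
              exts = dev.extensions.split()
              for feat in WANTED:
                  status = "yes" if feat in exts else "NO - need a fallback path"
                  print("   ", feat, status)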

    Similar crap with other efforts like SYCL...

    So, history does not repeat itself, but it surely rhymes; this effort will fumble too.

    And the worst part is that I wish there was a viable alternative to CUDA...

    • I still had "OpenACC" stuck in my head for the name, but yeah, OpenCL seems to be what everyone was writing to for non-NVIDIA cards. And yeah, it wasn't quite as polished since it had to support "everyone"... But so will this, right?

      I guess it can be focused on what ML, and whatever "AI" turns into next, needs. Which is more about lots of cards than squeezing every ounce of perf out of 1 card. Or putting up graphics, versus making lots of low accuracy calculations that all build somehow.
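
      (The "lots of cards" bit, in its simplest form: the framework just shards each batch across whatever GPUs are visible. A minimal PyTorch sketch; DataParallel is the lazy single-node version, real training jobs use DistributedDataParallel, but the idea is the same. Toy model and sizes:)

        import torch
        import torch.nn as nn

        model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

        if torch.cuda.device_count() > 1:
            model = nn.DataParallel(model)   # replicate the model, split each batch across GPUs
        model = model.to("cuda" if torch.cuda.is_available() else "cpu")

        x = torch.randn(256, 512, device=next(model.parameters()).device)
        print(torch.cuda.device_count(), "GPU(s) visible;", model(x).shape)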

  • Oooo, the PLOT to BREAK someone's GRIP! Sounds excitingly violent!

    For an announcement that some companies are collaborating on a new toolkit or API.

    A bit too sensationalistic for my taste.

  • by ceoyoyo ( 59147 ) on Monday March 25, 2024 @02:37PM (#64343797)

    It'll totally work this time. Not like last time the industry formed a group to make an open API for heterogeneous computing.

  • AMD is not even mentioned in an article about challenging NVidia? AMD can run CUDA apps using ZLUDA today, but somehow it's news to talk about plans to address NVidia some day with Intel technology? Companies such as Microsoft, Meta and OpenAI are directly supporting AMD now with its new MI300X chips and software capabilities. Seems to me this rocket is launched and that's where the game is. It's not clear to me who cares about committees.
    • Re:Where is AMD? (Score:4, Informative)

      by Calibax ( 151875 ) on Monday March 25, 2024 @06:06PM (#64344503)

      The problem is that ZLUDA merely makes AMD hardware appear to CUDA as Nvidia hardware. CUDA is only licensed to run with Nvidia hardware, and any company that tries to ship ZLUDA or a similar product will get sued into the ground. That's why nobody will touch it commercially. I guess some low-level manager at AMD funded it because he didn't understand that, and AMD Legal shut it down as soon as they found out.

      Check out what happened to Psystar Corporation when they tried an end-around run on their Mac clones in 2008/2009 - they created Rebel EFI to make their products look to Mac OS X like Apple hardware, and quickly ended up bankrupt.

  • by guardiangod ( 880192 ) on Monday March 25, 2024 @03:22PM (#64343955)

    >Qualcomm

    Treats its SoC customers like they are out to steal Q's trade secrets and sets extraordinarily high licensing fees.

    >Google

    Mandatory abandonment of its products and APIs after the 5-year mark.

    >Intel

    An electronics company with a terrible history of writing software/APIs.

    Good luck

  • The real problem (Score:5, Interesting)

    by Photo_Nut ( 676334 ) on Monday March 25, 2024 @03:33PM (#64343995)

    Back in 2010, NVIDIA supported OpenCL -- Apple's new API to unify compute architectures. At the same time, NVIDIA started its CUDA initiative.
    Soon, the value of OpenCL - bridging the gap between CPU and GPU for one piece of code - was touted, and NVIDIA decided to kill all their efforts there.
    Instead, they focused on buying up anyone and everyone making GPU accelerated anything, and making those libraries work better on CUDA than on any competing hardware. They bought up people writing compilers and made all their optimizations part of CUDA's compilers. They bought up the people writing and optimizing the best linear algebra solvers and made their libraries accessible to CUDA users.

    So if you're looking at AMD or Intel CPUs or GPUs, the problem isn't a hardware one -- the gaming CPUs and GPUs can technically be competitive in operations per watt (except for the DGX series of computers, which are massive and have massive custom interconnects), but the best experts in the field of software doing all the work were acquired by NVIDIA long, long ago. The same is true of AI software. Sure, you can find people at Microsoft and Google and elsewhere working on this, but the vast majority of the experts get to play with massive GPU hardware on next generation devices in Santa Clara.

    NVIDIA's AI research team gets to do their work on DGX machines. Since NVIDIA distributors sell existing DGX models at $300K - $400K per unit, it's a pretty big perk of working for NVIDIA to get to work on next year's generation early for AI models. I once worked at Google and was able to run datacenter-sized problems on sufficiently quota-ed accounts. It was a privilege. However, it's nowhere near as big a privilege as the NVIDIA AI researchers get.
