Nvidia in Talks To Acquire AI Infrastructure Platform Run:ai (calcalistech.com)

Israeli outlet Calcalist: Nvidia is in advanced negotiations to acquire AI infrastructure orchestration and management platform Run:ai, Calcalist has learned. The deal is estimated to be worth many hundreds of millions of dollars and could reach as high as $1 billion. The companies did not respond to Calcalist's request for comment.

Run:ai raised $75 million in a Series C round in March 2022 led by Tiger Global Management and Insight Partners, which also led the previous Series B round. Existing investors TLV Partners and S Capital VC also participated, bringing the company's total funding to date to $118 million.


Comments:
  • Can someone please translate the phrase "AI infrastructure orchestration and management platform" from business gobbledygook into plain English?

    The article finally gets to the point 5-6 paragraphs in, describing a "virtualization software layer" that "efficiently pools and shares GPUs by automatically assigning the necessary amount of computing power".

    So it's divvying up workloads across shared computing resources? I thought that was a largely solved problem. Oh, it's for AI! Then definitely worth a billion. (A toy sketch of that pooling idea follows this thread.)
    • They make using a lot of GPU-based software easier and they have customers on that basis. The latter part is what's important. Nvidia wants those customers to be their customers, so they can cut out the middleman and sell to them directly for the same price, and collect more of the profit.

    • The point is that it *wasn't* a solved problem for machine learning / deep learning workloads on GPUs, which are very different from conventional workloads on CPUs.

      • by jabuzz ( 182671 )

        Fundamentally it would not take much effort to make the likes of Slurm work well with GPU loads. The issue is the idiot users who have no history with shared-user systems and expect to treat them like their personal workstations.

    • by jabuzz ( 182671 )

      Yes, that is a solved problem for traditional compute workloads; running an HPC system is my day job. For AI it is a freaking nightmare. The first problem is that the researchers have no history with shared computing resources, so they come bitching that they can't install the latest wacky framework using sudo, I kid you not. They seem to think they can treat a shared multimillion-pound GPU cluster like their personal workstation. Secondly, existing job schedulers like Slurm are not great with GPUs. You can make it work (a minimal Slurm GPU job sketch also follows this thread).
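To make the "pools and shares GPUs" idea quoted above concrete, here is a toy Python sketch of fractional GPU allocation: small jobs share a device instead of each claiming a whole one. This is an illustration of the general technique only, not Run:ai's actual implementation; every name in it (Gpu, GpuPool, the job names) is invented.

```python
# Toy illustration of fractional GPU pooling: pack fractional requests onto
# shared devices instead of giving every job a whole GPU.
# All names are invented; this is NOT Run:ai's actual code.
from dataclasses import dataclass, field

@dataclass
class Gpu:
    name: str
    free: float = 1.0                        # fraction of the device unallocated
    jobs: list = field(default_factory=list)

class GpuPool:
    def __init__(self, gpus):
        self.gpus = gpus

    def allocate(self, job: str, fraction: float) -> str:
        # First-fit packing: two 0.5 jobs end up sharing one device,
        # leaving the other device free for a full-GPU job.
        for gpu in self.gpus:
            if gpu.free >= fraction:
                gpu.free -= fraction
                gpu.jobs.append((job, fraction))
                return gpu.name
        raise RuntimeError(f"no GPU has {fraction:.2f} free for {job!r}")

pool = GpuPool([Gpu("gpu0"), Gpu("gpu1")])
print(pool.allocate("notebook-a", 0.5))      # gpu0
print(pool.allocate("notebook-b", 0.5))      # gpu0 (shared)
print(pool.allocate("training-c", 1.0))      # gpu1
```

Real orchestration layers add preemption, quotas, and GPU-memory isolation on top of this basic packing step, but the bin-packing of fractional requests is the core of the "pooling" claim.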
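On the Slurm point: Slurm exposes GPUs as "generic resources", requested with --gres in a batch script. Below is a minimal sketch that submits such a job from Python. The #SBATCH directives are standard Slurm syntax, and sbatch reads the script from stdin when no file is given; the job name, resource sizes, and train.py are placeholders.

```python
# Minimal sketch of submitting a GPU job to Slurm. The #SBATCH directives are
# standard Slurm; the job name, sizes, and train.py are placeholders.
import subprocess
import textwrap

script = textwrap.dedent("""\
    #!/bin/bash
    #SBATCH --job-name=train-demo
    #SBATCH --gres=gpu:2              # request two GPUs on one node
    #SBATCH --cpus-per-task=8
    #SBATCH --mem=32G
    #SBATCH --time=04:00:00
    srun python train.py
    """)

# sbatch accepts the batch script on stdin when no filename is supplied.
subprocess.run(["sbatch"], input=script, text=True, check=True)
```

This works in the sense the comment describes: the scheduler will reserve the GPUs, but it does not by itself give users the workstation-like interactivity or fractional sharing they often expect.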
