
Inflection AI Develops Supercomputer Equipped With 22,000 Nvidia H100 AI GPUs

Inflection AI, an AI startup, has built a cutting-edge supercomputer equipped with 22,000 NVIDIA H100 GPUs. Wccftech reports: For those unfamiliar with Inflection AI, it is a business that aims to create "personal AI for everyone." The company is best known for its recently introduced Inflection-1 AI model, which powers the Pi chatbot. Although the model hasn't yet reached the level of ChatGPT or Google's LaMDA, reports suggest that Inflection-1 performs well on "common sense" tasks, making it well suited to applications such as personal assistance.
Inflection announced that it is building one of the world's largest AI supercomputers, and we now have a glimpse of what it will look like. The machine is reported to pack 22,000 H100 GPUs, and based on analysis it would span almost 700 four-node racks of Intel Xeon CPUs. The supercomputer will draw an astounding 31 megawatts of power.
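
As a rough sanity check of those figures, here is a back-of-envelope sketch. It assumes 8 H100s per node (as in NVIDIA's HGX H100 platform) and roughly 700 W peak draw per GPU; neither number appears in the report.

    # Back-of-envelope check of the reported figures.
    # ASSUMPTIONS (not from the article): 8 GPUs per node, as in NVIDIA's
    # HGX H100 platform, and ~700 W peak draw per H100 SXM module.
    TOTAL_GPUS = 22_000
    GPUS_PER_NODE = 8        # assumed HGX H100 configuration
    NODES_PER_RACK = 4       # "four-node racks", per the report

    nodes = TOTAL_GPUS // GPUS_PER_NODE    # 2,750 nodes
    racks = nodes / NODES_PER_RACK         # 687.5 racks -- "almost 700"

    gpu_power_mw = TOTAL_GPUS * 700 / 1e6  # ~15.4 MW for the GPUs alone
    # CPUs, networking, storage, and cooling would plausibly account for
    # much of the remaining reported 31 MW facility draw.

    print(f"{nodes} nodes, {racks:.0f} racks, ~{gpu_power_mw:.1f} MW of GPU power")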

The most surprising fact about the supercomputer is the acquisition of 22,000 NVIDIA H100 GPUs. It has recently been challenging to acquire even a single H100, since the cards are in immense demand and NVIDIA cannot keep up with the influx of orders. In Inflection AI's case, NVIDIA is considering investing in the company, which makes it much easier for the startup to get its hands on such a massive number of GPUs.
  • Aass! (Score:5, Funny)

    by thegarbz ( 1787294 ) on Friday July 07, 2023 @05:56AM (#63664736)

    Now you too can subscribe to our Ai-as-a-Service. Sorry if it is a bit shitty, butt it's literally in the name.

  • It reminds me of a smart watch. Bling that isn't even blingy. They just seem so dorky.

    • Someone had to come along and bridge the price gap between a Casio G-Shock and a Rolex :-)
    • by Junta ( 36770 )

      Yeah, I'm kind of shocked they have the resources to get that much gear. We're talking about a pretty trivial chatbot in a crowded field of platforms doing both chatbots and broader LLM applications.

      I suppose investors may be imagining them to grow up to do bigger and better things, but it's an awfully crowded field with competitors well ahead of them.

    • by Excelcia ( 906188 )

      I originally read the headline as "Infection AI". I'm not sure it isn't more correct that way, though.

    • My watch nags me whenever I don't do at least 250 steps every hour. It's smart enough to say, "Get off your ass, you lazy slob!"
      • My watch nags me whenever I don't do at least 250 steps every hour. It's smart enough to say, "Get off your ass, you lazy slob!"

        My watch congratulated me for completing my exercise for the day about 20 minutes after driving on the freeway. As if I am a cheetah who completed a 10K....

        • Pro tip: wear your Fitbit on the hand that you use to masturbate, you'll be AMAZED how many "steps" you put in!
    • There's nothing gimmicky about this. You can see from the previous generation of AI breakthroughs (image processing) that there are huge numbers of practical applications for this kind of tech. That generation got us things like huge improvements in the quality of photography; massive leaps in various scientific fields, like astronomy and particle physics; big improvements in medical imaging…

      LLMs have already got us things like much better automated translation, tutoring systems fo

  • Their text generator is even more infuriating than ChatGPT. It's good they use so much money, natural resources, and energy to do this. LLMs are revolutionary.
  • How many concurrent queries can this thing handle? Without knowing that, this is just bragging. Oh, look at us, we're NVIDIA, we're so good we can't make enough shit. Everyone wants our shit, so we're prioritising the customers who least need it, but did you see their numbers?
  • Forgive my ignorance, but is that 31 megawatts per day? Per hour? Per query?
    • Is what 31 megawatts?
    • by Saffaya ( 702234 )

      Your kitchen oven may have a power usage of 2,000 W. That means it uses 2,000 joules every second.
      That AI center has a power usage of 31,000,000 W, hence uses 15,500 times more energy per second than your oven.
      Clear enough?

    • by Junta ( 36770 )

      Watts are a unit of power, so it's not per day or per hour; it just is. Watt-hours are a measure of energy, so hypothetically this datacenter running flat out would consume 31 MWh per hour.

    • It's 31 megawatts any time it is in full use. Energy consumption is usually reckoned in watt-hours.

      Furthermore, a common rule of thumb is that a power plant producing 1 megawatt can serve about 1,000 homes, because some homes will be using more power and others less, with the average draw working out to roughly 1 kW per home.

      That means that, going full tilt, this supercomputer draws as much power as a small-to-medium-sized city of 31,000 homes, hour after hour. Do note, that is 31K HOMES, not people; the average is multiple people per home. (The arithmetic is sketched after this thread.)
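
A minimal sketch of the power-versus-energy arithmetic walked through in the thread above (the 1 kW-per-home figure is the commenter's rule of thumb, not a number from the article):

    # Power (watts) is a rate; energy (watt-hours) is power times time.
    POWER_MW = 31                         # reported facility draw

    energy_per_hour_mwh = POWER_MW * 1    # 31 MWh consumed every hour at full load
    energy_per_day_mwh = POWER_MW * 24    # 744 MWh over a full day

    # Commenter's rule of thumb: ~1 kW average draw per home,
    # so 1 MW of generation serves roughly 1,000 homes.
    homes_equivalent = POWER_MW * 1_000   # ~31,000 homes

    print(energy_per_hour_mwh, "MWh/hour,", energy_per_day_mwh, "MWh/day,",
          homes_equivalent, "homes")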

  • ... oooh, infLection. Whew. Thought they were going with a too on-the-nose name. Like Skynet Murderallhumans AI.
  • I'm sick and tired of being marketed to by every single thing I own: my TV (even when I'm watching streaming media), my phone, my car*, my refrigerator*, my washer and dryer*. Hell, I'm surprised my lamps and/or light bulbs don't shit out advertisements at me at seemingly random times of the day. I want a personal AI instance running on my own server(s) that will not log, evaluate, and sell my information to third-party advertisers trying to market more shit to me that I have no desire for and have to ignore

  • Reminds me of supercomputer budgets in the 2000s. What did they run? Besides racking up insane electric bills, mostly just dick-measuring benchmarks. It would be interesting to know how they arrived at that system size and whether they have done anything remotely big with it.

  • The GPUs alone cost roughly $880M. Having raised $1.5B, it will be interesting to see how long they can stay in the red before operational costs threaten to defund them. I would love a flood of second-hand H100s to enter the market! (Rough arithmetic sketched below.)
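
A quick sketch of that arithmetic (the ~$40,000 unit price is what the $880M total implies, not a quoted figure, and operating costs are ignored):

    # Implied unit price and rough runway from the figures above.
    GPU_COUNT = 22_000
    TOTAL_GPU_COST = 880e6                     # $880M, as stated above
    RAISED = 1.5e9                             # $1.5B raised

    unit_price = TOTAL_GPU_COST / GPU_COUNT    # $40,000 per H100
    remaining = RAISED - TOTAL_GPU_COST        # ~$620M left after the GPU bill

    print(f"${unit_price:,.0f} per GPU, ${remaining / 1e9:.2f}B remaining")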

  • This headline just reminds me of ENIAC's proud boast of using 18,000 vacuum tubes.
