Facebook Technology

Meta's Next Llama AI Models Are Training on a GPU Cluster 'Bigger Than Anything' Else (wired.com)

Meta CEO Mark Zuckerberg laid down the newest marker in generative AI training on Wednesday, saying that the next major release of the company's Llama model is being trained on a cluster of GPUs that's "bigger than anything" else that's been reported. From a report: Llama 4 development is well underway, Zuckerberg told investors and analysts on an earnings call, with an initial launch expected early next year. "We're training the Llama 4 models on a cluster that is bigger than 100,000 H100s, or bigger than anything that I've seen reported for what others are doing," Zuckerberg said, referring to the Nvidia chips popular for training AI systems. "I expect that the smaller Llama 4 models will be ready first."

Increasing the scale of AI training with more computing power and data is widely believed to be key to developing significantly more capable AI models. While Meta appears to have the lead now, most of the big players in the field are likely working toward using compute clusters with more than 100,000 advanced chips. In March, Meta and Nvidia shared details about clusters of about 25,000 H100s that were used to develop Llama 3.


Comments Filter:
  • by Anonymous Coward
    I might be growing too old and crotchety, but I just don't see a reason to ever touch AI. As a human being I am perfectly capable of generating verbal bullshit at the speed of typing. Sure, AI is faster than that, but then it doesn't get context and is vastly inferior in delivering underhanded insults. Likewise, if I wanted shitty code fast, I would just copy-paste substack examples.

    Someone please explain to me why all these tech companies are setting all this perfectly good cash on fire via AI?
    • by Bumbul ( 7920730 )

      AI is faster than that, but then it doesn't get context and is vastly inferior in delivering underhanded insults.

      Your use case requires decensored models - I'm sure they will match your capabilities in delivering insults.

    • by ceoyoyo ( 59147 )

      Chatbots are AI. AI is not chatbots.

  • Doesn't make a difference how many GPUs you use.
    • but a supercluster does process a lot more garbage than a regular cluster
    • Doesn't make a difference how many GPUs you use.

      This is incorrect. These models are all in the research stage, which requires lots of trial and error. Training times for these huge models run to days and weeks, so the size of your cluster significantly determines the turnaround time for that trial and error. That's why the hyperscalers are willing to spend tens of billions.

      OpenAI just announced their first search product. Google cannot afford to not be first in developing generative AI-based search. It's an existential problem for them.
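
To see why cluster size dominates turnaround time, here is a back-of-envelope sketch using the common FLOPs ≈ 6 × parameters × tokens heuristic. The parameter count, token count, and utilization figure below are illustrative assumptions, not Meta's reported numbers:

```python
# Back-of-envelope: training FLOPs ~= 6 * parameters * tokens.
# All figures below are illustrative assumptions, not reported numbers.
PARAMS = 400e9        # hypothetical 400B-parameter model
TOKENS = 15e12        # hypothetical 15T training tokens
FLOPS_NEEDED = 6 * PARAMS * TOKENS

H100_PEAK = 989e12    # ~989 TFLOP/s peak BF16 tensor throughput per H100
MFU = 0.4             # assumed 40% sustained model-FLOPs utilization

def training_days(num_gpus: int) -> float:
    """Estimated wall-clock days for one full training run."""
    sustained = num_gpus * H100_PEAK * MFU
    return FLOPS_NEEDED / sustained / 86_400

for n in (25_000, 100_000):
    print(f"{n:>7} GPUs: ~{training_days(n):.0f} days per run")
```

Under these assumptions, quadrupling the cluster cuts each trial from roughly six weeks to under two, which is the whole point when development is trial and error.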

  • by ZipNada ( 10152669 ) on Thursday October 31, 2024 @12:40PM (#64909497)

    Apparently it can require huge compute resources to train models, but not nearly so much to run the model. Apparently the Meta Llama models are available for free. I don't see how this is economically feasible for Meta but I'm not complaining.

    https://huggingface.co/blog/ll... [huggingface.co]
    "Llama 3.2 Vision comes in two sizes: 11B for efficient deployment and development on consumer-size GPU, and 90B for large-scale applications."

    "These models are designed for on-device use cases, such as prompt rewriting, multilingual knowledge retrieval, summarization tasks, tool usage, and locally running assistants. They outperform many of the available open-access models at these sizes and compete with models that are many times larger."

    I'm tempted to experiment with it. If you are in the EU you are out of luck.

    "any individual domiciled in, or a company with a principal place of business in, the European Union is not being granted the license rights to use multimodal models included in Llama 3.2"

    • NVIDIA's not complaining either. H100s are still ~$40k each. If there are 100k+ H100s, or "bigger than 100,000," that's more than $4,000,000,000 in hardware.

      • Or they're just upping the existing 25,000-GPU cluster to 100k+, which is more likely.

      • The Blackwell devices are said to be considerably more cost-effective.

        "build and run real-time generative AI on trillion-parameter large language models at up to 25x less cost and energy consumption than its predecessor"
        https://nvidianews.nvidia.com/... [nvidia.com]

        If that's the case, anyone who spent billions on H100s will feel a little miffed.
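
The ~$4B figure upthread is just unit price times count; a quick sketch (the ~$40k H100 price is a commenter's estimate, not NVIDIA list pricing):

```python
UNIT_PRICE = 40_000     # assumed ~$40k per H100 (commenter's figure)
GPU_COUNT = 100_000     # "bigger than 100,000 H100s"

# Raw accelerator spend, before networking, power, and cooling.
hardware_cost = UNIT_PRICE * GPU_COUNT
print(f"${hardware_cost:,}")
```

And that excludes interconnect, facilities, and power, which typically add a large fraction on top.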

    • by Anonymous Coward

      "Apparently it can require huge compute resources to train models, but not nearly so much to run the model. Apparently the Meta Llama models are available for free. I don't see how this is economically feasible for Meta but I'm not complaining."

      Llama3 8B runs on a mid-range smartphone.

      "I'm tempted to experiment with it."
      Do it, you will learn a few things.

      "If you are in the EU you are out of luck."
      Nope. As normal user you want quantized versions anyway and they are neither country-walled nor do they require
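
A rough sense of why quantized models fit on consumer hardware: weight storage is approximately parameters times bits per weight, ignoring KV cache and activation overhead, so real usage runs somewhat higher than these figures. The bit widths below are the common quantization levels:

```python
def weight_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate weight-storage footprint in GB (weights only)."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# Llama 3 8B and Llama 3.2 90B at common precisions:
for size in (8, 90):
    for bits, name in ((16, "fp16"), (8, "int8"), (4, "4-bit")):
        print(f"{size:>2}B @ {name:>5}: ~{weight_gb(size, bits):.0f} GB")
```

At 4-bit, the 8B model's weights come to about 4 GB, which is why it can run on a phone or a mid-range consumer GPU, while the 90B model still needs ~45 GB even quantized.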

    • by ink ( 4325 ) on Thursday October 31, 2024 @01:40PM (#64909689) Homepage

      "I don't see how this is economically feasible"

      We're in the pre-enshittification phase of AI. Once one or two companies dominate the technology, they will start "monetizing" it. Also, the more AI shit you can spout on earnings calls, the more horny Wall Street gets for your sweet sweet stock.

  • by Spinlock_1977 ( 777598 ) <Spinlock_1977.yahoo@com> on Thursday October 31, 2024 @01:24PM (#64909633) Journal

    GPU farm... today's tech-bro bragging rights.

  • So? (Score:4, Funny)

    by gweihir ( 88907 ) on Thursday October 31, 2024 @01:39PM (#64909685)

    If trained on crap data, it will still be a crap LLM. Like all of them, because only crap data is available for training and LLMs are pretty crappy even on good data.

  • ... It's how you use it.

    On the plus side, there will be a whole lot of nice lightly used GPU servers for sale in a few years. But these machines have mostly moved away from the PCIe card format that consumer GPUs use, so it will be harder to sell them off individually.

    • Yup, and we'll have a few spare nuclear power plants. These tech bubbles are getting pretty predictable. Let's hope I am wrong.
  • Almost two months ago, xAI announced its supercomputer was online sporting 100K H100s, with another 50K H100s and H200s scheduled to be added.
  • Anonymous [slashdot.org]: “Someone please explain to me why all these tech companies are setting all this perfectly good cash on fire via AI?”

    It's to do with embrace, extend and extinguish free discourse on the Web. As in the future, all access to information will be funneled through ClippyAI.
    --

    Already I've been reported to the mothership for upsetting ChatGPT.
  • No one else is imagining welcoming a beowulf cluster of AI overlords?

  • by LostMyBeaver ( 1226054 ) on Friday November 01, 2024 @12:33AM (#64910989)
    Massive ingest of trillions of pieces of data is greatly flawed. It will never perform well since inference will be based purely on observation of non-interactive datasets. The training system could never verify its dataset, at least not on scale.
    Now, take a swarm of robots in three sizes, child, pre-teen and young adult. Place them in schools, malls, airports, etc. Link them all to a single common training system. Make them interact with their environments. When a new piece of data that has a large impact on the model is encountered, make a group of the robots fact find and even post on forums asking for explanations.
    I expect any AI trained in a way that allows it to confirm its studies will perform far better than any AI simply blasted with data.
  • Alpacas are very gentle and shy. Llama is a family of autoregressive large language models that spit on you! Getting cheeky!

"The most important thing in a man is not what he knows, but what he is." -- Narciso Yepes
