Meta's Next Llama AI Models Are Training on a GPU Cluster 'Bigger Than Anything' Else (wired.com) 15
Meta CEO Mark Zuckerberg laid down the newest marker in generative AI training on Wednesday, saying that the next major release of the company's Llama model is being trained on a cluster of GPUs that's "bigger than anything" else that's been reported. From a report: Llama 4 development is well underway, Zuckerberg told investors and analysts on an earnings call, with an initial launch expected early next year. "We're training the Llama 4 models on a cluster that is bigger than 100,000 H100s, or bigger than anything that I've seen reported for what others are doing," Zuckerberg said, referring to the Nvidia chips popular for training AI systems. "I expect that the smaller Llama 4 models will be ready first."
Increasing the scale of AI training with more computing power and data is widely believed to be key to developing significantly more capable AI models. While Meta appears to have the lead now, most of the big players in the field are likely working toward using compute clusters with more than 100,000 advanced chips. In March, Meta and Nvidia shared details about clusters of about 25,000 H100s that were used to develop Llama 3.
Get off my lawn with this AI shit (Score:1)
Someone please explain to me why all these tech companies are setting all this perfectly good cash on fire via AI?
Garbage in, garbage out (Score:2)
Re: (Score:3)
Re: (Score:3)
training models versus running them (Score:3)
Apparently it can require huge compute resources to train a model, but not nearly so much to run it. The Meta Llama models are also apparently available for free. I don't see how this is economically feasible for Meta, but I'm not complaining.
https://huggingface.co/blog/ll... [huggingface.co]
"Llama 3.2 Vision comes in two sizes: 11B for efficient deployment and development on consumer-size GPU, and 90B for large-scale applications."
"These models are designed for on-device use cases, such as prompt rewriting, multilingual knowledge retrieval, summarization tasks, tool usage, and locally running assistants. They outperform many of the available open-access models at these sizes and compete with models that are many times larger."
I'm tempted to experiment with it. If you are in the EU you are out of luck.
"any individual domiciled in, or a company with a principal place of business in, the European Union is not being granted the license rights to use multimodal models included in Llama 3.2"
Re: (Score:2)
NVIDIA's not complaining either. H100s are still ~$40k each. If there are 100k+ H100s, or a cluster "bigger than 100k," that will cost more than $4,000,000,000.
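A quick sanity check of the back-of-the-envelope math above, using the ~$40k per-H100 price quoted in the comment (the actual price Meta pays at volume is unknown):

```python
# Rough cluster cost estimate from the figures in the parent comment.
H100_UNIT_PRICE_USD = 40_000   # ~$40k per H100, as quoted above
CLUSTER_SIZE = 100_000         # "bigger than 100,000 H100s" per Zuckerberg

total_cost = H100_UNIT_PRICE_USD * CLUSTER_SIZE
print(f"${total_cost:,}")  # → $4,000,000,000
```

That's $4 billion in GPUs alone, before networking, power, and cooling.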
Re: (Score:2)
Or they're just expanding the existing 25k-GPU cluster to 100k+, which is more likely.
Re: (Score:3)
"I don't see how this is economically feasible"
We're in the pre-enshittification phase of AI. Once one or two players dominate the technology, they'll start "monetizing" it. Also, the more AI shit you can spout on earnings calls, the hornier Wall Street gets for your sweet, sweet stock.
How Big Is Your ... (Score:4, Insightful)
GPU Farm... today's tech bro bragging right.
So? (Score:2)
If it's trained on crap data, it will still be a crap LLM. Like all of them, because only crap data is available for training, and LLMs are pretty crappy even on good data.