Anthropic Reveals $30 Billion Run Rate, Plans To Use 3.5GW of New Google AI Chips (theregister.com) 36
Anthropic says its annualized revenue run rate has surpassed $30 billion and disclosed plans to secure roughly 3.5 gigawatts of next-generation Google TPU compute starting in 2027. Broadcom will supply the key chips and networking gear for the effort, the company announced. The Register reports: News of the two deals emerged today in a Broadcom regulatory filing that opens with two items of news. One is a "Long Term Agreement for Broadcom to develop and supply custom Tensor Processing Units ("TPUs") for Google's future generations of TPUs." Google and Broadcom have collaborated to produce custom TPUs. Broadcom CEO Hock Tan recently shared his opinion that hyperscalers don't have the skill to create custom accelerators and predicted Broadcom's chip business will therefore win over $100 billion of revenue from AI chips in 2027 alone.
Working on next-gen TPUs for Google will presumably help to make that prediction a reality. So will the second part of Broadcom's announcement: a "Supply Assurance Agreement for Broadcom to supply networking and other components to be used in Google's next-generation AI racks through up to 2031." Broadcom's filing also revealed one user of Google's next-gen TPU will be Anthropic, which starting in 2027, "will access through Broadcom approximately 3.5 gigawatts as part of the multiple gigawatts of next generation TPU-based AI compute capacity committed by Anthropic."
3.5 Gigawatts? (Score:2)
So, by Doc Brown's units of power, that would be just shy of three lightning strikes, but continuous. Great Scott!
Re: (Score:2)
Re: (Score:2)
Can somebody please travel back in time and make sure Dario Amodei's parents never meet? Thanks, I'd appreciate it.
Motion seconded. But, you know, while on the mission, Sam Altman's parents, Mark Zuckerberg's, Jeff Bezos', you know what, we'll compile a list and get it to you before mission launch.
Re: (Score:2)
This means you should NOT, under any circumstance, run Claude at 88mph. Unless you really want to.
Re: (Score:2)
It's honestly pretty interesting that we've gone from units of computational performance to units of electricity in measuring AI compute.
Essentially, this tells us that the bottleneck has moved from silicon's computational performance to being able to supply said silicon with enough power to perform said computational tasks.
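To make the comment's point concrete, here's a back-of-envelope sketch of what a gigawatt budget buys in accelerators. The per-chip power draw and overhead factor below are illustrative assumptions, not Google or Anthropic figures:

```python
# Back-of-envelope: how many accelerators fit in a 3.5 GW power budget?
# Per-chip draw and overhead are assumed values for illustration only.
TOTAL_POWER_W = 3.5e9   # 3.5 GW of committed capacity
CHIP_POWER_W = 700      # assumed draw per accelerator package
OVERHEAD = 1.3          # assumed facility overhead (cooling, networking)

effective_w_per_chip = CHIP_POWER_W * OVERHEAD
chips = TOTAL_POWER_W / effective_w_per_chip
print(f"~{chips / 1e6:.1f} million accelerators")  # ~3.8 million
```

Under those (made-up) numbers, 3.5 GW works out to a few million accelerators, which is why power, not chip count, has become the headline unit.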
Wait... (Score:5, Funny)
Re: (Score:2)
And the really annoying thing about this is that when the AI bubble inevitably crashes, it's going to be difficult to repurpose all of these specialized AI processors into something useful.
This won't be like the 2018-2023 crypto bubble, where we ended up with a ton of cheap used GPUs and power supplies available for resale. This stuff will mostly end up in the landfill, scavenged for its raw materials.
Re: (Score:1)
Re: (Score:2)
Freeman Dyson entered the chat (Score:2)
Wasn't Freeman Dyson at least skeptical of ever bigger and better particle accelerators as reaching diminishing returns on the amount of physics knowledge gleaned per dollar spent?
Forget Freeman Dyson. Hasn't anyone taken ECON 101 as a college freshman and remembered "diminishing marginal returns"? And forget the harm to the environment, isn't the outrageous electric bill a sign of more and more resources thrown at something to "scale it up" without considering where the scaling law levels off?
Re: (Score:2)
And the really annoying thing about this is that when the AI bubble inevitably crashes, it's going to be difficult to repurpose all of these specialized AI processors into something useful.
This won't be like the 2018-2023 crypto bubble, where we ended up with a ton of cheap used GPUs and power supplies available for resale. This stuff will mostly end up in the landfill, scavenged for its raw materials.
"Inevitably crashes"? And how exactly do you think is THAT going to happen? All those people using claude are all of a sudden going to abandon it? At worst some stock market bubbles will burst, but your fantasy of everyone of their users going "oh geee, I've been using claude for 6 months but I just read this random guy on slashdot saying it can't count 'r' in 'strawberry', and now I see the light and I'm dropping it THIS INSTANT" is not going to happen. And even speaking of the stock market, I doubt you're actually putting your money where your mouth is and are shorting Anthropic, are you? No of course not, bet you have some excuse about "market staying irrational longer than you can stay solvent" or something.
The truth lies somewhere between "all AI will crash out demand" and "there's a need for all these new datacenters being built and the demand will continue to outpace build-out." I'd say there are approximately zero chances that all the datacenters we're currently contemplating building out will remain useful once the overall AI market begins the course correction that's bound to happen when some of the more nebulous fantasyland nonsense doesn't come to pass quickly enough to serve the business sectors that
Enstuffification of AI? (Score:2)
What is the revenue model? Selling what you disclose to the AI?
Or will anything beyond the most brain-dead AI be a big monthly subscription?
Will your employer insist that you not use their paid-for AI for personal use in the way of Cyber Monday that you weren't supposed to use your work Internet to purchase your Christmas presents but people did this anyway?
Or will AI gradually become useless owing to who pays the most coin to train the neural networks a certain way, becoming useless like Web search?
Re: (Score:2)
No, it's basically impossible. AI doesn't need high-precision math: 16-bit floating point, 8-bit floating point, even 4-bit floating point is all that's required.
There aren't many uses for such low-precision ALUs other than AI.
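The precision point above is easy to demonstrate. A sketch using NumPy's float16 (the 16-bit format the comment mentions; its 10-bit mantissa means integers above 2048 can't be represented exactly):

```python
import numpy as np

# float16 has a 10-bit mantissa: the spacing between representable
# values at 2048 is 2, so adding 1 is lost to rounding.
x = np.float16(2048) + np.float16(1)
print(x)  # 2048.0 -- the +1 vanishes

# Relative spacing at 1.0 is 2**-10, i.e. ~0.1% precision.
print(np.finfo(np.float16).eps)  # 0.000977
```

That kind of precision is fine for neural-network weights and activations but useless for, say, scientific simulation or financial ledgers, which is the commenter's repurposing problem in a nutshell.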
Hold up, friend. (Score:2)
That's the most disgusting dick size measuring contest ever!
Well... if you've seen a dick size measuring contest and said, "those are really nice dicks" then you have likely just found out something new about yourself. ;)
Re: (Score:2)
Re: (Score:2)
no matter what you vote, you get dick size measuring contest!
Re: (Score:2)
We're measuring CPUs in gigawatts, not megabytes or operations per second now? Dudes, the goal isn't to waste as much energy as possible! That's the most disgusting dick size measuring contest ever!
You clearly didn't get the new dick measuring spec sheets. We clearly ARE trying to waste as much energy as possible. Along with all other resources available. That's what AI is. An outward manifestation of the greed we have worshipped for forty years or more in the United States. Even the framing of it is based on greed. "We have to, or someone else might." It's as tribal and greedy as anything we've ever done. Gotta climb aboard, or you'll get run over by it. Or so we keep getting told.
Re: Wait... (Score:2)
Yes, when it comes to AI, that's how we measure it, because that's the important metric. They have a certain amount of power that they are able to consume. They're buying from Google "however many CPUs maximise our compute given a power budget of 3.5GW."
Re: (Score:1)
Hehe... the only thing that measurement defines is how much power it needs to run. What does that 3.5 jiggawatts help with? More AI slop? Writing your kid's essay for them? Goody!
Maybe, I dunno... dedicate the excess compute power to cancer research (because not every server is busy every compute cycle across every Anthropic location... do something like idle machines kick over to something like BOINC for 5 minutes or something).
If an Antminer can draw 1400W for a thing that'll fit in a shoebox, I could
Billionaires Using Our Resources to Replace People (Score:3)
Re: (Score:1)
Re: (Score:2)
I've designed a few machines - some rather more insane than others - in meticulous detail using AI. What I have not done, so far, is get an engineer to review the designs to see if any of them can be turned into something that would be usable. My suspicion is that a few might be made workable, but that has to be verified.
Having said that, producing the design probably took a significant amount of compute power and a significant amount of water. If I'd fermented that same quantity of water and provided wine
Re: (Score:2)
welp, i disagree. i find it quite useful for now, using it sensibly and sparingly.
but the whole circus and freakshow around it is also about to make a huge bunch of rich clueless motherfuckers lose huuuuge piles of money. think of the spectacle and have some romanticism, life is short, ffs!
other than that, yeah, like any other tool it's going to be abused, and not in people's favor, but that's sort of an inevitability, the history of our species ...
"Revenue run rate" ? (Score:3)
"AI Overview
Revenue Run Rate
(RRR) is a financial metric that projects a company’s future annual revenue based on current, short-term performance (usually monthly or quarterly). It helps startups and rapidly growing companies estimate annual revenue by assuming current performance trends will remain consistent for a full year."
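The definition above is just multiplication: take the latest period's revenue and extrapolate it across a year. A minimal sketch, with a hypothetical monthly figure chosen to land on the headline number:

```python
# Annualized revenue run rate: extrapolate one period's revenue to a
# full year, assuming the current trend holds. The monthly revenue
# below is a hypothetical figure, not a disclosed Anthropic number.
def run_rate(period_revenue: float, periods_per_year: int) -> float:
    return period_revenue * periods_per_year

monthly = 2.5e9  # hypothetical $2.5B month
print(f"${run_rate(monthly, 12) / 1e9:.0f}B annualized")  # $30B
```

Which is also why run rate draws skepticism: it bakes in the assumption that the best recent month repeats twelve times.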
Re: (Score:2)
Probably. I mean they all are keen on obscuring how very far too low their revenue is to even break even.
Re: (Score:2)
they're announcing they're about to win a few bucks and thus it's reasonable to invest yet another whole lot of bucks in more datacenters. it's vibe entrepreneuring.
PR much? (Score:2)
Claude rules (Score:3)
The Claude models are the best by far for coding assistance in my experience. Apparently a lot of other people think so too because Anthropic is getting swamped. They are having to ration out their compute resources and in some cases have raised their fees to 2-3 times more than the lesser competition charges. I'm finding that in order to keep costs down I'm having to use 2nd-tier models for simpler work and revert to Claude for the heavy lifting. A hassle.
Clearly the demand is there. At this point I expect Anthropic is revenue-limited by their infrastructure availability so it makes sense that they recruit the big players to help beef it up.
Re: (Score:2)
It appears that three AI companies are neck and neck: GitHub, Claude, and Cursor. https://www.cbinsights.com/res... [cbinsights.com]
My experience is a bit different from yours. I personally use GitHub Copilot, which lets you use models from all the major companies, including Claude. Whenever I've tried Claude models, I get good results, but the execution is *S*L*O*W*. Like, 3-5 times slower than, say, GPT-5.4. So I keep reverting back (for now) to GPT.
Re: (Score:2)
Claude can be slow I agree. But I submit that is because so many people are demanding to use it.
I work with other models too and get good results at times but if it is something particularly difficult the Anthropic models tend to do a better job for me.
$1000/mo (Score:2)
Why should anybody care if this drives electric bills up to $1000/mo for the typical household?
We have unlimited energy, no?
Dipshits aren't creating a global energy crisis right now.
The world economy isn't headed for a global depression.
Natgas should be burned for LLM hallucinations and cats driving motorcycles, not converted into fertilizer to stave off a massive African famine.
Western woke governments haven't spent the past fifty years blocking new energy generation at every opportunity.
Right?
Don't invit
Revenue Run Rate? (Score:2)
As in "Take the money and run"?