Anthropic Raises $30 Billion at $380 Billion Valuation, Eyes IPO This Year (reuters.com)
Anthropic has raised $30 billion in a Series G funding round that values the Claude maker at $380 billion as the company prepares for an initial public offering that could come as early as this year. Investors in the new round include Singapore sovereign fund GIC, Coatue, D.E. Shaw Ventures, ICONIQ, MGX, Sequoia Capital, Founders Fund, Greenoaks and Temasek. Anthropic raised its funding target by $10 billion during the process after the round was oversubscribed several times over.
The San Francisco-based company, founded in 2021 by former OpenAI researchers, now has a $14 billion revenue run rate, about 80% of which comes from enterprise customers. It claims more than 500 customers spending over $1 million a year on its workplace tools. The round includes a portion of the $15 billion commitment from Microsoft and Nvidia announced late last year.
And so it begins (Score:4, Insightful)
We're now in the race to IPO phase of this shitty dotcom era remake, which means we're also in the "but they didn't make it to IPO" phase of that. It's going to be a wild ride.
Re: And so it begins (Score:2)
Re: And so it begins (Score:4, Insightful)
It's something, and I hope it makes a difference, although I don't know if it will. NVIDIA is still sky high in my opinion and they're already public, as are some other "AI" companies like Meta and Alphabet. But given the sheer amount of spending by these companies in the market and how they're affecting the overall economy, it'd be better for all of us to have the insight into these companies' operations that's required of a public company, even if it ends up being a painful insight.
Re: And so it begins (Score:2)
While I hope we don't get a market-wide crash, making investors hold on to stock longer doesn't change the fundamentals of an industry.
Re: (Score:1, Troll)
I have to ask something, because it's relevant.
When you say it's a "shitty dotcom era remake", what level of informed statement is that? How recently have you used the frontier models from OpenAI or Anthropic? Are you genuinely informed here, or making an assessment off using Cursor for a couple hours 4 months ago? Because that's not even close to representative of where things are today.
Just this week and last, I've used the frontier models to fully audit an existing code base and make both architectural
Re: And so it begins (Score:2)
Re: And so it begins (Score:5, Interesting)
There's a billion different ways to do it and I don't think there's a right way exactly, but defining scope for the models helps a lot.
A lot of planning before any implementation, with precise criteria for how and where things get implemented, and each plan gets further broken down in that regard. Have a good architectural picture of how things are supposed to look. MVC isolation helps a lot. Know the data model and how your controller does its thing.
I can't emphasize enough how much planning seems to impact outcomes. You'll overlook things, but if I'm reasonably thorough it usually goes through without a hitch. Plan, ask for plan diagnosis, refinement... I spend 40-60 minutes planning for any substantial anything, and often have time between agent runs while I'm formulating the next plan.
Also a good primary prompt, and CLAUDE.md files with well defined project definitions, structure, and so on. Be explicit. Have the agent track its work and compare/follow up and review the plan for completion afterwards. Heck, you can even have it build/run/test (which I might do before getting lunch) and iterate. When you find it mess something up, update the project details to do/do not do. The one-shot depends on these things.
Your tooling matters too. opencode and cc seem to work best for me; I'd avoid tools like cursor at this point. memvid is a game changer: it learns your project's structure and hallucinates a lot less.
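For anyone who hasn't set one up, a minimal CLAUDE.md along the lines described above might look like this (the project name, paths, and rules are all hypothetical placeholders, not from any real project):

```markdown
# Project: acme-store (hypothetical example)

## Architecture
- Strict MVC: controllers in src/controllers/, models in src/models/, views in src/views/
- The data model is documented in docs/schema.md; read it before touching models

## Workflow rules
- Plan before implementing; write the plan to docs/plans/ and wait for review
- After each task: build, run the test suite, and compare the result against the plan
- Do NOT add new dependencies without asking
- Do NOT edit generated files under build/
```

The point is the same as in the comment: being explicit about structure and do/don't rules up front is what makes the one-shot possible.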
Re: And so it begins (Score:2)
Re: (Score:2)
Spend less time coding. Coding is for machines. You need to stop thinking like a coder and start thinking like a systems architect and performance engineer.
Re: (Score:2)
>> a good primary prompt, and CLAUDE.md files with well defined project definitions, structure, and so on.
It sure does help to have a good idea as to what it is that you want to do and to be able to articulate it. The LLMs will give a lot of suggestions if you ask, and many times they are things you haven't thought about. Best to spend the time to get a good implementation plan together with it before you let it do any work.
Re: (Score:1)
Anyone who thinks AI is a bubble is vastly underestimating how far-reaching this is. ChatGPT is just a toy compared to what is coming from AI.
"A personal note for non-tech friends and family on what AI is starting to change."
https://shumer.dev/something-b... [shumer.dev]
This is worth reading.
Re:And so it begins (Score:5, Informative)
A "personal" note from "I'm Matt Shumer — founder & CEO at OthersideAI (HyperWrite). I build and invest in AI products, ship fast, and share what I learn with a large audience."
No conflict of interest, none whatsoever. No reason to skew the truth.
Re:And so it begins (Score:5, Informative)
The author is a CEO of an AI company and an investor in others. I don't mean to poo-poo the actual legitimate advances being made in AI, but this isn't an unbiased piece. He's overstating the capabilities of LLMs in my opinion, but someone who has vested financial interests in the industry is of course going to say those things. The current AI industry is based primarily on the commercialization of LLMs.
I use the latest coding models in my own work and while they do some really awesome things, they are far from "its replacing software engineers". I find they struggle on large code bases and sometimes hallucinate. For me that's okay, because as a SWE I can recognize good code from bullshit and tweak its output but it gets frustrating when it wrongly answers a question about your code (how does X interact with Y, where is the code for that) and it puts you on a wild goose chase around your code base.
LLMs are hitting a brick wall in scaling, to the point where the very best models require an incredible amount of resources to run (= expensive). The resource consumption of LLMs is increasing faster than advances in compute power and memory capacity.
I'm with Yann LeCun on LLMs: https://www.youtube.com/watch?... [youtube.com]
Okay (Score:1)
I keep hearing this, and my experience (from within the last month) is that it's like talking to a dude who's been at the company for decades and knows a lot about everything, but also has a drug problem and doesn't confirm what he's saying before he blurts it out. I think a lot of people are using robust frameworks, but when you are working with brittle languages and specifications things don't go so well. For example, in C, integer overflow is undefined. You might say, so what, Intel and ARM chips have had sane beha
Re: (Score:2)
Are you serious? There've been all kinds of discoveries directly attributed to the use of AI:
New mathematical conjectures [wikipedia.org]
Protein folding [wired.com]
New math theorems [sky.com]
Cracked Erdős theory [ft.com]
New discoveries in fluid dynamics [businessinsider.com]
This is just scratching the surface. Researchers have been using similar, more primitive models for years already, and things are starting to get spicy.
Re:And so it begins (Score:5, Insightful)
Programmer friends at Google, Meta and Amazon have certainly convinced me that code is being assisted successfully by AI. However, the author's level of extrapolation to other fields and situations destroys any credibility he had.
For example, the author - Matt Shumer, who is an AI company founder, booster and frequent submitter to other AI-hype websites, but apparently is not legally trained - spends many paragraphs and anecdotes talking about how a partner at a law firm now has to use AI because he "knows what's at stake" and that AI can do legal work better than their associates.
Nope, the reason that partner is doing it is because he's scared of being left behind, which is the entire motive behind hype pieces like this. I'd wager that hypothetical partner is not the one who beats out all his colleagues and becomes "the most valuable person in the firm" but rather the one who gets sanctioned for submitting briefs with hallucinated cases (which is still happening in the wild regularly). As a lawyer, I can say that even current flagship AI models cannot solve the problem of lawyers' bar-required ethical duties, which require effectively re-doing the work the AI does so we can attest it is correct, taking more time than doing the work ourselves in the first place.
Shumer similarly gives an "oh god, it's getting so good so fast!" timeline that includes AI passing the bar. That 2023 story was debunked in 2024 and somehow this guy is unaware of that. Why in the world would someone so unable to identify reliable information be trusted on AI reliability?
There may be some functional AI work - like coding within specific environments and circumstances - but there is a huge AI bubble built on this silly "it will do everything better" hype.
Re: (Score:2)
Shumer similarly gives an "oh god, it's getting so good so fast!" timeline that includes AI passing the bar. That 2023 story was debunked in 2024 and somehow this guy is unaware of that. Why in the world would someone so unable to identify reliable information be trusted on AI reliability?
Why do you assume he's unaware and not ignoring it because it's better for his finances? I do applaud your character; it takes a thief to catch a thief, and you just clearly showed that you assume the best of people. Thank you for giving some hope in this doom-and-gloom era.
Re: (Score:2)
I've read it and he's at least over the target, if not on the mark.
Re: (Score:1)
Well good for you (Score:2, Redundant)
If it dropped your cloud spend by 90% you were doing something dumb in the first place, so I don't think you're a credible expert in whatever it is you claim to be doing.
Re: (Score:2)
You're not wrong: it was prototyping using prior LLMs that did things poorly, primarily with regard to database schema optimization and calls. Three months of work that would have taken years of solo development before AI. Implement, then optimize.
But I could've done it the old school way and toiled over it by hand. I'd still be back in December, thinking about how to implement the abstraction layers...
I would be extremely nervous about any AI company (Score:1)
That means until winners and losers get picked there are definitely going to be some
IPO already? (Score:4, Insightful)
IPO already? Sounds like they're running out of smart money and moving onto the dumb.
Re: (Score:2)
Unless they've managed to improve things internally much faster than it's perceived. They're moving extremely fast.
There's a lot of talk that the Opus 4.6 is really just Sonnet 5 (and behavioral characteristics lend no small amount of credibility to this claim) with much higher margins + intentional token slowdown, which could in theory mean they're now making money per token instead of losing it.
Re: IPO already? (Score:2)
That reply just sounds like made-up words.
"well snickers 3.5 may as well be payday 5 but snarf bots are making token money model inference costs FEED ME SEYMOUR!"
Re: (Score:2, Insightful)
No, they're expecting the free money to dry up, and the company's value to plummet. If they cash out just before that happens, they maximize their own bank accounts.
This is basically an admission that the bubble is about to burst.
Recent good competition to Claude for coding (Score:5, Informative)
Last week I praised Claude Code, especially the CLI, here on Slashdot. I still think it's currently the best, but it's also the most expensive. The competition is getting a lot better (and cheaper), though. I've been using the opencode CLI lately with several models: Kimi K2.5, Kimi K2.5 Thinking, OpenCode's Big Pickle, and Qwen3 Coder Next. Using them through OpenCode's Zen service and also OpenRouter (for Qwen3 Coder Next), they are all about half the cost of Claude's models, maybe less.
Of those I feel that Kimi K2.5 Thinking is probably the closest to Claude Opus 4.6. The rest are quite good at most things, and Qwen3 Coder Next is very fast. Qwen3 Coder Next also has the potential to run on "reasonable" local hardware.
Re: (Score:2)
I think in the next year we'll see some of the open source models catch up with and surpass the closed models in a real way. Even if they get to Opus 4.5/4.6 parity (I personally think 4.5 is markedly more predictable in its behavior and makes a better "daily driver" than 4.6), that's still more than good enough to daily drive for almost all tasks. Tools like opencode make agent-specific models not only tenable but practical, to boot.
This is still fairly early stage. You've got to run things at scale and se
Updated Schedule (Score:2)
June: IPOs
November: Aaaand it's gone.
Re: (Score:3)
Usually an IPO gives a cash-burning startup enough money to stay afloat for at least a year. Although, with the way OpenAI is burning through cash building ALL the data centers, they might go through it quicker than that.
So, that said... we should be expecting the next great tech stock implosion around mid-2027?
Re: (Score:2)
But I think around this November, when it becomes clear that we're on schedule for that, the end will come quickly.
Change the tax laws (Score:3)
Such a dumb number.
If they want it to really mean something, then a company should be taxed on its valuation.
That's probably the most terrifying thing any of these companies would ever want to hear. Looking at you Tesla.
Tax them on their valuations, and we'll see those numbers drop down to something much more reasonable.
And the biggest companies will be the ones who are actually doing shit.
This fiction only serves to enrich a very few people.