Anthropic Raises $30 Billion at $380 Billion Valuation, Eyes IPO This Year (reuters.com)

Anthropic has raised $30 billion in a Series G funding round that values the Claude maker at $380 billion as the company prepares for an initial public offering that could come as early as this year. Investors in the new round include Singapore sovereign fund GIC, Coatue, D.E. Shaw Ventures, ICONIQ, MGX, Sequoia Capital, Founders Fund, Greenoaks and Temasek. Anthropic raised its funding target by $10 billion during the process after the round was oversubscribed several times.

The San Francisco-based company, founded in 2021 by former OpenAI researchers, now has a $14 billion revenue run rate, about 80% of which comes from enterprise customers. It claims more than 500 customers spending over $1 million a year on its workplace tools. The round includes a portion of the $15 billion commitment from Microsoft and Nvidia announced late last year.
Comments:
  • And so it begins (Score:4, Insightful)

    by ebunga ( 95613 ) on Thursday February 12, 2026 @04:28PM (#65985540)

    We're now in the race to IPO phase of this shitty dotcom era remake, which means we're also in the "but they didn't make it to IPO" phase of that. It's going to be a wild ride.

    • A "whoever IPOs first causes the pop" sort of situation? I wonder how fast people will try to exit to get their money back.
    • Re: (Score:1, Troll)

      by CAIMLAS ( 41445 )

      I have to ask something, because it's relevant.

      When you say it's a "shitty dotcom era remake", what level of informed statement is that? How recently have you used the frontier models from OpenAI or Anthropic? Are you genuinely informed here, or making an assessment off using Cursor for a couple hours 4 months ago? Because that's not even close to representative of where things are today.

      Just this week and last, I've used the frontier models to fully audit an existing code base and make both architectural

      • Genuine question: how are you setting these near-one-shots up? Got a link to a suggested workflow? I struggle to get answers without it hallucinating namespaces on simple things like WebSockets in C#, which are already extremely well documented and something it should excel at.
        • Re: And so it begins (Score:5, Interesting)

          by CAIMLAS ( 41445 ) on Thursday February 12, 2026 @08:08PM (#65985878)

          There are a billion different ways to do it, and I don't think there's exactly one right way, but defining scope for the models helps a lot.

          A lot of planning before any implementation, with precise criteria for how and where things get implemented, and each plan gets further broken down in that regard. Have a good architectural picture of how things are supposed to look. MVC isolation helps a lot. Know the data model and how your controller does its thing.

          I can't emphasize enough how much planning seems to impact outcomes. You'll overlook things, but if I'm reasonably thorough it usually goes through without a hitch. Plan, ask for plan diagnosis, refinement... I spend 40-60 minutes planning for any substantial anything, and often have time between agent runs while I'm formulating the next plan.

          Also a good primary prompt, and CLAUDE.md files with well-defined project definitions, structure, and so on. Be explicit. Have the agent track its work, compare and follow up, and review the plan for completion afterwards. Heck, you can even have it build/run/test (which I might do before getting lunch) and iterate. When you find it messing something up, update the project details with do/do-not rules. The one-shot depends on these things.

          Your tooling matters too. opencode and cc seem to work best for me, I'd avoid tools like cursor at this point. memvid is a game changer, it will learn your project's structure and hallucinate a lot less.
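
          For what it's worth, here's a minimal sketch of the kind of CLAUDE.md I mean. The file name is real (Claude Code reads it as project memory), but the project and its rules below are made up purely for illustration:

```
# Project: inventory-api (hypothetical example project)

## Architecture
- MVC layout: models/ is the data layer, controllers/ handle requests,
  views/ serialize output.
- Business logic lives in services/, never in controllers.

## Conventions
- All database access goes through the repository classes under models/repo/.
- Run the test suite after every change and fix failures before reporting done.

## Do / do not
- Do: update this file whenever a new project rule comes up.
- Do not: add new dependencies without asking first.
```

          When the agent messes something up, this is the file you append the new do/do-not rule to, so the one-shot rate improves over time.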

          • Thank you. I really appreciate the reply. I will try and put it into practice :).
            • by CAIMLAS ( 41445 )

              Spend less time coding. Coding is for machines. You need to stop thinking like a coder and start thinking like a systems architect and performance engineer.

          • >> a good primary prompt, and CLAUDE.md files with well defined project definitions, structure, and so on.

            It sure does help to have a good idea of what it is you want to do and to be able to articulate it. The LLMs will give a lot of suggestions if you ask, and many times they are things you haven't thought about. Best to spend the time putting a good implementation plan together with it before you let it do any work.

      • Anyone who thinks AI is a bubble is vastly underestimating how far-reaching this is. ChatGPT is just a toy compared to what is coming from AI.

        "A personal note for non-tech friends and family on what AI is starting to change."

        https://shumer.dev/something-b... [shumer.dev]

        This is worth reading.

        • Re:And so it begins (Score:5, Informative)

          by fleeped ( 1945926 ) on Thursday February 12, 2026 @05:32PM (#65985656)

          A "personal" note from "I'm Matt Shumer — founder & CEO at OthersideAI (HyperWrite). I build and invest in AI products, ship fast, and share what I learn with a large audience."

          No conflict of interest, none whatsoever. No reason to skew the truth.

        • Re:And so it begins (Score:5, Informative)

          by DMDx86 ( 17373 ) on Thursday February 12, 2026 @05:35PM (#65985666) Journal

          The author is a CEO of an AI company and an investor in others. I don't mean to poo-poo the actual legitimate advances being made in AI, but this isn't an unbiased piece. He's overstating the capabilities of LLMs in my opinion, but someone who has vested financial interests in the industry is of course going to say those things. The current AI industry is based primarily on the commercialization of LLMs.

          I use the latest coding models in my own work, and while they do some really awesome things, they are far from "it's replacing software engineers". I find they struggle on large code bases and sometimes hallucinate. For me that's okay, because as a SWE I can recognize good code from bullshit and tweak its output, but it gets frustrating when it wrongly answers a question about your code (how does X interact with Y, where is the code for that) and puts you on a wild goose chase around your code base.

          LLMs are hitting a brick wall in scaling, to the point where the very best models require an incredible amount of resources to run (= expensive). The resource consumption of LLMs is increasing at a rate greater than the rate at which compute power and memory capacity are advancing.

          I'm with Yann LeCun on LLMs: https://www.youtube.com/watch?... [youtube.com]

        • by Anonymous Coward

          I keep hearing this, and my experience (from within the last month) is like talking to a dude who's been at the company for decades and knows a lot about everything, but also has a drug problem and doesn't confirm what he is saying before he blurts it out. I think a lot of people are using robust frameworks but when you are working with brittle languages and specifications things don't go so well. For example in C, integer overflow is undefined. You might say, so what, Intel and ARM chips have had sane beha

        • by MunchMunch ( 670504 ) on Thursday February 12, 2026 @05:43PM (#65985688) Homepage
          Sorry, the linked summary is really just the same hype cycle I've seen.

          Programmer friends at Google, Meta and Amazon have certainly convinced me that code is being assisted successfully by AI. However, the author's level of extrapolation to other fields and situations destroys any credibility he had.

          For example, the author - Matt Shumer, who is an AI company founder, booster and frequent submitter to other AI-hype websites, but apparently is not legally trained - spends many paragraphs and anecdotes talking about how a partner at a law firm now has to use AI because he "knows what's at stake" and that AI can do legal work better than their associates.

          Nope, the reason that partner is doing it is because he's scared of being left behind, which is the entire motive behind hype pieces like this. I'd wager that hypothetical partner is not the one who beats out all his colleagues and becomes "the most valuable person in the firm" but rather the one who gets sanctioned for submitting briefs with hallucinated cases (which is still happening in the wild regularly). As a lawyer, I can say even current flagship AI models cannot get around bar-required ethical duties: we effectively have to re-do the work the AI does so we can attest it is correct, which takes more time than doing the work ourselves in the first place.

          Shumer similarly gives an "oh god, it's getting so good so fast!" timeline that includes AI passing the bar. That 2023 story was debunked in 2024 and somehow this guy is unaware of that. Why in the world would someone so unable to identify reliable information be trusted on AI reliability?

          There may be some functional AI work - like coding within specific environments and circumstances - but there is a huge AI bubble built on this silly "it will do everything better" hype.

          • Shumer similarly gives an "oh god, it's getting so good so fast!" timeline that includes AI passing the bar. That 2023 story was debunked in 2024 and somehow this guy is unaware of that. Why in the world would someone so unable to identify reliable information be trusted on AI reliability?

            Why do you assume he's unaware and not ignoring it because it's better for his finances? I do applaud your character, it takes a thief to catch a thief and you just clearly displayed that you assume the better. Thank you for giving some hope in this doom and gloom era.

        • by CAIMLAS ( 41445 )

          I've read it and he's at least over the target, if not on the mark.

        • We'll have to see. I'm not sure, because in my field (strategic healthcare) we can see a use for narrow/deep AI, primarily in radiology, though it could absolutely have applications elsewhere. But in terms of broader policymaking, it's got little to meaningfully add: it knows the topics we are discussing, but it doesn't have any novel solutions for overcoming problems. It does a great job of regurgitating thinking we've already undertaken, but not a single time have any of the models come up with something where we
      • Well good for you (Score:2, Redundant)

        by ebunga ( 95613 )

        If it dropped your cloud spend by 90% you were doing something dumb in the first place, so I don't think you're a credible expert in whatever it is you claim to be doing.

        • by CAIMLAS ( 41445 )

          You're not wrong: the prototype was built with earlier LLMs that did things poorly, primarily with regard to database schema optimization and calls. Three months of work that would have taken years of solo development before AI. Implement, then optimize.

          But I could've done it the old school way and toiled over it by hand. I'd still be back in December, thinking about how to implement the abstraction layers...

  • I'm sure, as usual, the big investors will be protected and taken care of. But for everybody else: most of the technology is built on publicly available knowledge out of universities that literally anyone can pick up and run with, and these companies' success mostly comes down to whether they can get their data centers up and running and hire the employees who can implement the university papers describing the theory.

    That means until winners and losers get picked there are definitely going to be some
  • IPO already? (Score:4, Insightful)

    by reanjr ( 588767 ) on Thursday February 12, 2026 @04:44PM (#65985570) Homepage

    IPO already? Sounds like they're running out of smart money and moving onto the dumb.

    • by CAIMLAS ( 41445 )

      Unless they've managed to improve things internally much faster than is perceived externally. They're moving extremely fast.

      There's a lot of talk that Opus 4.6 is really just Sonnet 5 (and its behavioral characteristics lend no small amount of credibility to this claim) with much higher margins plus an intentional token slowdown, which could in theory mean they're now making money per token instead of losing it.

      • That reply just sounds like made-up words.

        "well snickers 3.5 may as well be payday 5 but snarf bots are making token money model inference costs FEED ME SEYMOUR!"

    • Re: (Score:2, Insightful)

      by taustin ( 171655 )

      No, they're expecting the free money to dry up, and the company's value to plummet. If they cash out just before that happens, they maximize their own bank accounts.

      This is basically an admission that the bubble is about to burst.

  • by caseih ( 160668 ) on Thursday February 12, 2026 @04:44PM (#65985576)

    Last week I praised Claude code, especially the cli here on slashdot. I still think it's currently the best but it's also the most expensive. But the competition is getting a lot better (and cheaper). I've been using the opencode cli lately with several models: Kimi K2.5, Kimi K2.5 Thinking, OpenCode's Big Pickle, and Qwen3 Coder Next. Using them through OpenCode's Zen service and also OpenRouter (for Qwen3 Coder Next), they are all about 1/2 the cost of Claude's models, maybe less.

    Of those I feel that Kimi K2.5 Thinking is probably the closest to Claude Opus 4.6. The rest are quite good at most things, and Qwen3 Coder Next is very fast. Qwen3 Coder Next also has the potential to run on "reasonable" local hardware.

    • by CAIMLAS ( 41445 )

      I think in the next year we'll see some of the open source models catch up with and surpass the closed models in a real way. Even if they get to Opus 4.5/4.6 parity (I personally think 4.5 is markedly more predictable in its behavior and makes a better "daily driver" than 4.6), that's still more than good enough to daily drive for almost all tasks. Tools like opencode make agent-specific models not only tenable but practical, to boot.

      This is still fairly early stage. You've got to run things at scale and se

  • January: Superbowl Ads

    June: IPOs

    November: Aaaand it's gone.
    • by leonbev ( 111395 )

      Usually an IPO gives a cash-burning startup enough money to stay afloat for at least a year. Although, with the way OpenAI is burning through cash building ALL the data centers, they might go through it quicker than that.

      So, that said... we should expect the next great tech stock implosion to happen around mid-2027?

      • Mid-2027 is a popular bet: https://www.nytimes.com/2026/01/13/opinion/openai-ai-bubble-financing.html

        But I think around this November, when it becomes clear that we're on schedule for that, the end will come quickly.
  • by Vegan Cyclist ( 1650427 ) on Thursday February 12, 2026 @08:21PM (#65985894) Homepage

    Such a dumb number.

    If they want it to really mean something, then a company should be taxed on its valuation.

    That's probably the most terrifying thing any of these companies would ever want to hear. Looking at you Tesla.

    Tax them on their valuations, and we'll see those numbers drop down to something much more reasonable.

    And the biggest companies will be the ones who are actually doing shit.

    This fiction only serves to enrich a very few people.
