Anthropic Launches Its Own $200 Monthly Plan (techcrunch.com) 37

Anthropic has unveiled a new premium tier for its AI chatbot Claude, targeting power users willing to pay up to $200 monthly for broader usage. The "Max" subscription comes in two variants: a $100/month tier with 5x higher rate limits than Claude Pro, and a $200/month option boasting 20x higher limits -- directly competing with OpenAI's ChatGPT Pro tier.

Unlike OpenAI, Anthropic still lacks an unlimited usage plan. Product lead Scott White didn't rule out even pricier subscriptions in the future, telling TechCrunch, "We'll always keep a number of exploratory options available to us." The launch coincides with growing demand for Anthropic's Claude 3.7 Sonnet, the company's first reasoning model, which employs additional computing power to handle complex queries more reliably.


Comments Filter:
  • by allo ( 1728082 ) on Wednesday April 09, 2025 @04:21PM (#65293443)

    Unlimited plans are bullshit for things that have unlimited costs for unlimited usage. In the best case they still rate limit; in the worst case they cancel your account over some "fair usage" definition that is never spelled out anywhere -- because if they told you the limits, it wouldn't be unlimited.

    • "Unlimited*"

      * is never unlimited, read the terms of service.

      You can't offer unlimited or you'll be raped by automated bots leveraging your cheap service.
    • Wait, you're saying I can't actually have an infinite number of breadsticks? Ridiculous.
    • by N1AK ( 864906 )
      Spoken like someone who gets caught up in theoretical semantic arguments instead of comprehending reality. The point of unlimited plans is that consumers like knowing how much something will cost, even if in theory the cost will likely be higher than PAYG and that companies really like subscription based income. For 99.9% of people unlimited services are effectively unlimited because they can use it as much as they want without issue; the fact that someone can't buy an unlimited gym membership and then move
      • by allo ( 1728082 )

        Just label it "20,000 queries per day" and it's honest. People who want "unlimited" to feel good can work out how fast they'd have to type, and people who want "unlimited" to save money -- who would otherwise pay for pay-as-you-go and would definitely issue more queries -- know what the plan actually means.
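The "honest label" framing above is easy to sanity-check: a hard daily cap works out to a query rate no human typist could sustain anyway. A quick back-of-the-envelope in Python (the 20,000/day figure is the parent's hypothetical, not any published Anthropic limit):

```python
# Hypothetical daily cap from the parent post -- not a real published limit.
DAILY_CAP = 20_000

per_hour = DAILY_CAP / 24           # ~833 queries/hour
per_minute = per_hour / 60          # ~13.9 queries/minute

# A human firing off one query every ~4.3 seconds, around the clock,
# would still stay under this "non-unlimited" cap.
seconds_per_query = 86_400 / DAILY_CAP
print(f"{per_minute:.1f} queries/min, one every {seconds_per_query:.1f}s")
```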

  • Hmmm. (Score:3, Funny)

    by jd ( 1658 ) <imipak@@@yahoo...com> on Wednesday April 09, 2025 @04:31PM (#65293461) Homepage Journal

    I had Claude and ChatGPT work with each other on a little engineering project, each finding the limitations in the design the other hadn't spotted.

    It was actually good fun. Cost me a bit to get all the technical info they both wanted, but I now have a design both insist is absolutely robust, absolutely perfect. But they also both tell me that it's too big to properly process.

    Yes, AI itself designed a project the very same AI cannot actually understand.

    I now have a very large file, that cost me a fair bit of money to produce, that I'm quite convinced is useless -- but no AI examining even part of it can find fault.

    Beyond the fun aspect of having AI defeat itself, the project illustrates three things:

    1. If AI can't handle a toy specification, it's never going to be able to handle any complex problem. This means that the "pro" editions are not all that "pro". The processing windows are clearly too small for real problems if my little effort is too big.

    2. Anything either AI got right is, by virtue of how I worked on the problem, something the other AI got wrong. Of course, AI doesn't "understand", it's only looking at word patterns, but it shows that the reasoning capacity simply isn't there, regardless of whether the knowledgebase is.

    3. I've now got quite a nice benchmark for AI systems. I can ignore any AI that can't cope. If it hasn't got the capacity to handle a trivial problem because the complexity is too high, then it won't manage any real problem either.

    • Time for your tie breaker.
      Gemini has a larger context window than Claude and ChatGPT- over a million tokens. You'll be able to squeeze your file in there ~10 times.
      • by gweihir ( 88907 )

        Not necessarily. Tokens needed can easily grow exponentially for complex things like specifications or code. The spec Gemini can process may well be just 5% larger than what the other two handle, possibly even less.

        You cannot go deeper by brute force in any meaningful way. The automated theorem proving community found that out about 40 years ago. You can go broader, but for checking a specification that is useless. For a general search for something simple, it can work.

        • There are a couple of things to unpack here.

          Context Window = input + output
          As long as output < 10(input), it's that simple.

          The other two cannot do anything with the current machine-architected specification, because it's too big to fit in their comparatively small context windows.
          Gemini has a large context window.

          Your statement on depth and brute force doesn't make any sense in the context of a transformer model. What are you trying to say?
          • Whoops. Slashdot swallowed my bracket.

            As long as output < 10(input), then the context window is sufficient.
            It's that simple.
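The rule of thumb above can be expressed as a trivial check. This is only a sketch under that commenter's assumption that output is capped at some multiple of input (the 10x factor is theirs, not a documented provider limit):

```python
def fits_context(input_tokens: int, output_tokens: int, window: int,
                 output_multiple: int = 10) -> bool:
    """Check the rule: input + output must fit the window, with output
    assumed capped at `output_multiple` x input (hypothetical factor)."""
    if output_tokens > output_multiple * input_tokens:
        return False  # violates the assumed output cap
    return input_tokens + output_tokens <= window

# Example: a 250k-token spec plus 500k tokens of output
print(fits_context(250_000, 500_000, 1_000_000))  # fits a 1M-token window
print(fits_context(250_000, 500_000, 200_000))    # overflows a 200k window
```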
          • by gweihir ( 88907 )

            Your statement on depth and brute force doesn't make any sense in the context of a transformer model. What are you trying to say?

            That you are trying to do something with a transformer model it cannot do. Elements of a specification are interconnected.

            • That's not problematic.
              Self-attention computes context across the entire context window for every layer, every token.

              What is it you think that a transformer model cannot do, precisely?
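For readers unfamiliar with the mechanism being argued about: in a transformer, each token's attention weights are a softmax over its dot products with every token in the window, so no pairing is structurally excluded. A minimal pure-Python sketch of scaled dot-product self-attention (toy 2-dimensional embeddings, illustrative only -- real models apply learned query/key/value projections first):

```python
import math

def self_attention(embeddings):
    """Scaled dot-product self-attention over a list of vectors.
    Toy version: queries, keys, and values are the raw embeddings."""
    d = len(embeddings[0])
    outputs = []
    for query in embeddings:
        # Dot the query against every key in the window -- no pair is excluded.
        scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
                  for key in embeddings]
        # Softmax: every token in the window gets a nonzero weight.
        peak = max(scores)
        exps = [math.exp(s - peak) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        # Output is the weight-blended sum of all value vectors.
        outputs.append([sum(w * val[i] for w, val in zip(weights, embeddings))
                        for i in range(d)])
    return outputs

# Three toy 2-d "token embeddings"; each output mixes all three inputs.
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
result = self_attention(tokens)
```

Since every output row is a convex combination of all the value vectors, interconnected elements of the input can in principle influence each other at every layer; whether the model uses that capacity well is a separate question.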
      • by jd ( 1658 )

        Gemini struggles. I've repeated the runs on Gemini and can now report that it says files were truncated two times out of three. After reading in the files, it struggles with prompts -- sometimes erroring, sometimes repeating a previous response, sometimes giving only part of an answer.

        When it gives some sort of answer, further probing shows it failed to consider most of the information available. In fairness, it says it is under development. Handling complex data sources clearly needs work.

        But precisely because that is it

        • Interested in your experiment, I did some playing with Gemini as well.
          There's something funny going on there.

          It doesn't work in the traditional way it'd be done with a local LLM. Gemini is trained to use tools (emit a token that is parsed by the renderer to do actions).
          When you include files, where a local LLM would simply dump the file into the context window (as long as it fit), Gemini puts it in a virtual workspace creatively called "Workspace", and then asks Workspace for information.

          Addi
    • by gweihir ( 88907 )

      That is a very interesting post. In particular, the limitation that these LLMs cannot analyze what they produced. Sounds a bit like the Sorcerer's Apprentice, and shows some rather harsh limits in these models.

      1. If AI can't handle a toy specification, it's never going to be able to handle any complex problem.

      Indeed. There is also not much room to scale things up. In fact, these models have been at the maximum complexity possible for a while now; later changes are cosmetic. Even the "reasoning" models do nothing else besides looping their input back, something you essentially could do manually before. One

      • by jd ( 1658 )

        Claude and ChatGPT manage to read the files, but the context window is too small to process them.

        Grok, DeepSeek, and amazingly even Gemini choke on the files. Gemini, multi-million-token window notwithstanding, reported that the specification files (of which I have 13, plus another 9 containing contextual information) are too big to process, even though they're only about 20k tokens each.

        • by jd ( 1658 )

          I'll be fair and say Gemini Advanced read more of the files in, but still choked on one file read.

      • by jd ( 1658 )

        So, to answer your question, none of the AIs I've tried can cope. There may be AIs I've not tried, ideas welcome, but the AIs just don't do what they claim in terms of data digesting, which leads me to conclude that the hidden overheads underpinning their methods are too large.

        • Context window is context window.

          If Gemini is refusing to ingest your files in spite of its claimed 1M+ context window, then it's not really putting your file into the context window. It's trying to do some kind of pre-crunching/summarization of it, and that's what's choking.

          This is strange behavior: file insertion into the context window usually only resorts to such tricks when the window is too small for the amount of data you're trying to put in it.

          By what method are you supplying the files?
          • by jd ( 1658 )

            I'm using the web interface, using the file attachment button then selecting documents. The prompt used explicitly instructs using the largest input window available and to report any file that failed to be handled correctly. It gives the OK for most of the files, then reports several got truncated.

            Yeah, if it was something simple like input exhaustion, they'd not be making the claim. That would be the first thing anyone tested. So I'm reasoning that the problem has to be deeper, that the overflow in Gemini's case is in the complexity and trying to handle the relationships.

            • I'm using the web interface, using the file attachment button then selecting documents. The prompt used explicitly instructs using the largest input window available and to report any file that failed to be handled correctly. It gives the OK for most of the files, then reports several got truncated.

              What would you estimate the total size of the files is, in words?

              Yeah, if it was something simple like input exhaustion, they'd not be making the claim. That would be the first thing anyone tested. So I'm reasoning that the problem has to be deeper, that the overflow in Gemini's case is in the complexity and trying to handle the relationships.

              For the LLM, no, that's not a thing.
              If there is some kind of pre-processing step they do, then that might be a thing.
              The transformers determine attention at each step during inference, and it's selected by each attention layer before each feed-forward layer- you can't "overflow it with ideas" or anything like that. If the neural network couldn't handle the amount of concepts contained within, it would simply focus on the ones it thought most

              • by jd ( 1658 )

                I would estimate 190,000 words in total, and reading the files one at a time and asking for a count of concepts, the total seems to be around 3,000.
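Those numbers are easy to turn into a token estimate. A common rough heuristic for English text is about 1.3 tokens per word; the exact ratio is tokenizer-dependent, so treat this as a sanity check rather than a measurement:

```python
WORDS = 190_000          # the commenter's estimate of total input size
TOKENS_PER_WORD = 1.3    # rough heuristic for English; varies by tokenizer

est_tokens = int(WORDS * TOKENS_PER_WORD)
print(f"~{est_tokens:,} tokens vs an advertised 1,000,000-token window")

# ~247k tokens is well under a 1M-token window, which is why a hard refusal
# looks more like a pre-processing (file handling) issue than context exhaustion.
```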

    • I've tried that too, comparing outputs of different models and finding useful differences. Much more modest complexity, though, and I also noted it was often forgetful of previous context. My takeaway was that the AIs could reduce a lot of syntax and formatting drudgery but not make good design decisions. So I work iteratively, then integrate into a larger workflow or architecture.
  • I know they say higher rate limits, but I can't help worrying that this will be like what Google did with Colab Pro, where you have to pay more to keep the capabilities you were already using while they nerf the tier you were on.

    • I don't know about that, but I can't see getting locked in to an AI model the way I am with email, for example. I was using Claude last week and I'm using Gemini 2.5 Pro this week, and the transition is smooth enough that I won't mind moving again in a month.
  • Even if they somehow got a million people to subscribe at this level, they'd still be looking at decades before profitability.

    • by gweihir ( 88907 )

      They will not get profitability, ever. But they are selling a vision, not an actual product, so the people involved may well get out rich.

    • I don't think you know what profitability means.

      Are you perhaps thinking that they have to pay their funding back to be profitable, or something?

      Anthropic pulls in $1.4B/yr right now. What you propose would add an additional $2.4B/yr, bringing them to $3.8B/yr.
      We can't know how profitable that is, because we don't know what their costs are. However, being they were just handed $3-something billion in Series E funding, someone thought their numbers looked good.

      And frankly, I doubt they're pricing thei
  • The Pro plan is only $18/month and you can do quite a bit of work with it. Someone who needs $200 worth of credits must be flogging the AI unmercifully.

    • Sure, and this invites you to invent ways to use it. How about my service calling your service -- like a back end to a web-based chatbot service? So there's an incentive to ramp up the traffic as the cost goes down.
      • I'm thinking it is more like "my team of developers eats through our AI assistance credits by lunch time so I want to buy bulk".

        • Here's the problem with that. We know devs are lazy, in the programming way. Shortcuts and efficiency are rewarded. So what I see with my limited time using LLMs for coding is, this process: first, you ask for the moon. Then the thing hallucinates what your vague request was and barfs up obvious garbage. Then you break the problem down yourself a bit and ask for a bit less. The model produces better results. Iterate. Ask for even less. Bingo: you get good output. The balance point between the moon and what
  • And do they pay residuals to the creators of the content they tossed into their model for each response that contains the works of others?

  • No thanks, I don't want Claude filtering my Web access and reporting me to the mothership for violations of woke scripture.
