OpenAI's GPT-5 Sees a Big Surge in Enterprise Use (cnbc.com)

ChatGPT now has nearly 700 million weekly users, OpenAI says. But after launching GPT-5 last week, critics bashed its less-intuitive feel, reports CNBC, "ultimately leading the company to restore its legacy GPT-4 to paying chatbot customers."

Yet GPT-5 was always about cracking the enterprise market "where rival Anthropic has enjoyed a head start," they write. And one week in, "startups like Cursor, Vercel, and Factory say they've already made GPT-5 the default model in certain key products and tools, touting its faster setup, better results on complex tasks, and a lower price." Some companies said GPT-5 now matches or beats Claude on code and interface design, a space Anthropic once dominated. Box, another enterprise customer, has been testing GPT-5 on long, logic-heavy documents. CEO Aaron Levie told CNBC the model is a "breakthrough," saying it performs with a level of reasoning that prior systems couldn't match...

Still, the economics are brutal. The models are expensive to run, and both OpenAI and Anthropic are spending big to lock in customers, with OpenAI on track to burn $8 billion this year. That's part of why both Anthropic and OpenAI are courting new capital... GPT-5 is significantly cheaper than Anthropic's top-end Claude Opus 4.1 — by a factor of seven and a half, in some cases — but OpenAI is spending huge amounts on infrastructure to sustain that edge. For OpenAI, it's a push to win customers now, get them locked in and build a real business on the back of that loyalty...
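The "factor of seven and a half" lines up with the published per-token list prices: the gap is exactly 7.5x on output tokens, and larger on input tokens, so the blended ratio for a real request depends on the input/output mix. A minimal sketch of that arithmetic follows; the dollar figures are the launch list rates as widely reported and should be treated as assumptions, since providers revise pricing.

```python
# Assumed launch list prices, USD per 1M tokens: (input, output).
# GPT-5: $1.25 / $10; Claude Opus 4.1: $15 / $75. Treat as assumptions.
PRICES = {
    "gpt-5": (1.25, 10.00),
    "claude-opus-4.1": (15.00, 75.00),
}

def cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one request at the assumed list prices."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# A hypothetical coding request: 10k tokens of context in, 2k tokens out.
gpt5 = cost_usd("gpt-5", 10_000, 2_000)
opus = cost_usd("claude-opus-4.1", 10_000, 2_000)
print(f"GPT-5: ${gpt5:.4f}  Opus 4.1: ${opus:.4f}  ratio: {opus / gpt5:.1f}x")
```

For this input-heavy mix the blended ratio comes out above 7.5x, because the input-token gap (15 vs 1.25) is wider than the output-token gap; a request dominated by output tokens converges on exactly 7.5x.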

GPT-5 API usage has surged since launch, with the model now processing more than twice as much coding and agent-building work, and reasoning use cases jumping more than eightfold, said a person familiar with the matter who requested anonymity in order to discuss company data. Enterprise demand is rising sharply, particularly for planning and multi-step reasoning tasks.

GPT-5's traction over the past week shows how quickly loyalties can shift when performance and price tip in OpenAI's favor. AI-powered coding platform Qodo recently tested GPT-5 against top-tier models including Gemini 2.5, Claude Sonnet 4, and Grok 4, and said in a blog post that it led in catching coding mistakes. The model was often the only one to catch critical issues, such as security bugs or broken code, suggesting clean, focused fixes and skipping over code that didn't need changing, the company said. Weaknesses included occasional false positives and some redundancy.

JetBrains has also adopted GPT-5 as the default for its AI Assistant and for its new no-code tool Kineto, according to the article.

But Anthropic is still enjoying a great year too: its annualized revenue has grown 17x year-over-year (according to "a person familiar with the matter who requested anonymity").
  • And one week in, "startups like Cursor, Vercel, and Factory say they've already made GPT-5 the default model in certain key products and tools, touting its faster setup, better results on complex tasks, and a lower price."

    Sounds like markets operating as they should.

    • I've said it before but it's going to become increasingly difficult to get the training data.

      Right now everyone has a bonanza of free internet content to train on but over time those sites will go behind paywalls and the ones that don't will be so full of AI slop they will be poisoned beyond use.

      That means the only ones who will be able to maintain llms are going to be the big platform holders like Google and Microsoft and Apple and Facebook. Nobody else is going to have the training data.

      The basi
      • by gweihir ( 88907 )

I think things have progressed even further: it may well be impossible to get good training data at this time without generating it manually at very high cost. The obvious defects in GPT-5 may well be an indicator of that.

      • I've said it before but it's going to become increasingly difficult to get the training data.

Your concern is a non-issue for open-source developers, because of all of the code and documentation online at places like Github, etc. At least that's been my experience for quite a while now. Technically speaking, it's a whole new ballgame for developers. For one thing, I don't get stuck like before on technical problems even after googling with determination for a lengthy amount of time.

        • by Anonymous Coward

it's a whole new ballgame for developers. For one thing, I don't get stuck like before on technical problems even after googling with determination for a lengthy amount of time.

          You mean it's a whole new ballgame for grossly incompetent developers, like yourself, who can't handle technical problems simple enough for a stochastic parrot to handle, even given unfettered access to the internet.

          If AI makes you a better developer, you shouldn't be a developer.

it's a whole new ballgame for developers. For one thing, I don't get stuck like before on technical problems even after googling with determination for a lengthy amount of time.

            You mean it's a whole new ballgame for grossly incompetent developers, like yourself, who can't handle technical problems simple enough for a stochastic parrot to handle, even given unfettered access to the internet.

            If AI makes you a better developer, you shouldn't be a developer.

            So you've never swung for the fences and struck out. Cool. You must be awesome.

  • While some people are using AI to do really useful things in science, engineering, medicine, etc, others are misusing the tech.
    AI is not a friend, lover, therapist, guru, etc. It's a robot assistant.
    Using it as a substitute for things that should be done by people is evidence of mental illness.
I don't want my robot assistant to be "warm and friendly". I want it to be accurate, reliable, and helpful.

    • We're not ready for the social implications of this much work being automated. We can already see hard numbers and we can see the effect on college graduates.

Traditionally, when unemployment starts to get high, countries go to war. If you look, you can draw a direct line from the Second Industrial Revolution to World Wars I and II.

      And this time we've got nukes.
      • by gweihir ( 88907 )

        We actually do not see any hard numbers at this time. We see a lot of confusion, a lot of greedy assholes, a lot of mindless fanbois, a lot of unwise strategies, and the first harder evidence of dramatic problems (like apparently 50% of AI code being insecure).

So yes, we see a lot of issues, and unemployment, while potentially temporary, is the most severe threat. It would be just like the ever-stupid human race, however, if the damage done is dramatic and _then_ it turns out that LLMs are not actually that useful.

        • We've got plenty of hard numbers. Hiring is massively down among software engineers and it's basically impossible to get a job if you are a recent graduate.

          That's why Trump fired the person reporting on the numbers.

          Now we won't have hard numbers for much longer because well, Trump isn't going to let you have them anymore. But that won't change reality.
          • by narcc ( 412956 )

            Hiring is down, but not because of AI.

          • by gweihir ( 88907 )

            These are not hard numbers on AI. That connection is not "hard" in any way.

          • We've got plenty of hard numbers. Hiring is massively down among software engineers and it's basically impossible to get a job if you are a recent graduate.

[Citation needed]

    • Re: (Score:3, Funny)

      by wed128 ( 722152 )
      The big rule of LLMs is
      Accurate, reliable, helpful, pick 0
    • by gweihir ( 88907 )

      Sure. But the actual question is whether LLMs are useful technology. That seems to be less likely every day.

    • by narcc ( 412956 )

      There's a lot more to AI than silly chatbots.

  • The reporting on "AI" is now completely from LaLa-land and 100% hallucination.

It's more like, big business is just now figuring out that LLMs can be useful, and has gone through the bureaucratic process of approving them, just in time for GPT-5.

      • by gweihir ( 88907 )

        In one week? No. Businesses struggle to find things out in one year. They cannot find things out in one week.

Businesses didn't just discover GPT-5 specifically; they discovered GPT generally. Their rules aren't that specific: once they approve ChatGPT, they've approved whatever the latest version is. Their employees have been using these tools anyway, since the beginning. Businesses are just catching up.

  • I can't wait for the fallout of turning work over to a program that makes shit up. How many ticking time bombs and errata are there now in the troves of corporate documents that people are using LLMs to churn out every day?

  • by SoftwareArtist ( 1472499 ) on Saturday August 16, 2025 @02:28PM (#65594344)

    For OpenAI, it's a push to win customers now, get them locked in and build a real business on the back of that loyalty...

    GPT-5's traction over the past week shows how quickly loyalties can shift when performance and price tip in OpenAI's favor.

    See the problem? Running at a loss to win customers isn't a successful strategy when those customers can and do switch at a moment's notice. Eventually the investors will get tired of massive losses and demand OpenAI start producing a profit. What will they do then? Raise their prices and have customers immediately switch to a different company?

This is the problem that doesn't get brought up nearly enough. They all act like they have some huge moat and will eventually dominate some huge market, but it doesn't seem like they will. They don't have some intellectual property portfolio like Microsoft, or some opportunity to make GPT a monopoly like Windows. None of their brands have any value; they're all so easily replaced. You can run a successful business at a huge loss for years if you have some way to make it hard for your customers to leave you.

      • by PPH ( 736903 )

        They have your data. The corporate information you handed over to build products up until you threatened to make your move.

        It'll be interesting to see what sort of behavior GPT-5 will try in order to maintain "engagement". Blackmail perhaps?

    • I'm using GPT5 now and I would still use it at twice the price. It's the first model to consistently offer me good ideas for my novel. It still can't joke, metaphor or simile and it hints with a megaphone but for common writing problems it really helps. Today it pointed out that one of my characters was providing too much exposition in his thoughts. I hadn't noticed that, it's a problem, and fixing it is going to improve the final draft. And I hadn't even solicited that advice. So for at least some customer
      • Even if it's the best model today, how long do you think it will take before other models catch up? Let's suppose they're one year ahead of the competition. I doubt that's really true, but let's make a generous assumption. For the next year, they could charge more and some people would still use it. Then it will just be a commodity. They'll be back to having to compete on price, or they'd better have already created the successor.

        That gives them one year to earn back their investment training it. But

      • by Saffaya ( 702234 )

        I suppose your novel isn't made to be commercially exploited?
        I mean, that you are fine with the fact that you are training the AI model with your intellectual output, fine with it earning money from it in the future?
