The Almighty Buck AI Businesses

OpenAI Says Its Business Will Burn $115 Billion Through 2029 (theinformation.com) 47

An anonymous reader shares a report: OpenAI recently had both good news and bad news for shareholders. Revenue from ChatGPT is growing even faster than the company projected half a year ago. The bad news? The computing costs to develop the artificial intelligence that powers the chatbot, and other data center-related expenses, will rise even faster.

As a result, OpenAI projected its cash burn this year through 2029 will rise even higher than previously thought, to a total of $115 billion. That's about $80 billion higher than the company previously expected. The unprecedented projected cash burn, which would add to the roughly $2 billion it burned in the past two years, helps explain why the company is raising more capital than any private company in history.


Comments Filter:
  • Take cover (Score:5, Interesting)

    by dskoll ( 99328 ) on Monday September 08, 2025 @10:44AM (#65646216) Homepage

    When the AI bubble bursts, it's going to be wild.

    • Re:Take cover (Score:4, Insightful)

      by TWX ( 665546 ) on Monday September 08, 2025 @11:00AM (#65646252)

      Mmmhmm.

      I get that technology has always been a disruptive influence on human effort: everything from the plow to electronic business records has reduced the labor-hours required to achieve a result, or has enabled results that couldn't be achieved without the technology. But it's been pretty universal that someone has been ultimately responsible for using the technology correctly, and that the only technologies that really survive are those that produce reliable, consistent, verifiable results.

      The latter is tied to the former. If the results are not reliable, consistent, and verifiable, then those who use the technology end up being the ones who get into trouble when they get garbage results. AI right now stands in stark, stark contrast: it's not reliable, it's not consistent, and it's not verifiable. The AI companies will undoubtedly fight tooth-and-nail to keep it from being subject to verification checks too, both because they don't want their particular 'special sauce' reverse-engineered by others, and because they don't want to reveal just how hot-garbage their underlying code is.

      I have no doubt that when applied to niche problems by professionals that already have deep understanding in their fields that it can be very useful. The recent protein folding project demonstrated that researchers and scientists were able to use a built-for-purpose AI as a tool to achieve results that their brute-force calculating methods were not providing quickly. But again, those were professionals in a field of study with a deep understanding of both the problem and the nature of solving it, with the ability to verify results through independent means.

      AI for the masses is going to lead to middle-managers with egg on their faces and businesses losing revenue when the magical solution whose promises sound like they come right out of zombo.com ultimately let them down, providing garbage results, corrupt datasets, and expensive rehiring processes to bring humans in to fix what AI broke. And it will be glorious to watch.

      • Re:Take cover (Score:5, Insightful)

        by dskoll ( 99328 ) on Monday September 08, 2025 @11:11AM (#65646280) Homepage

        Most of the really useful applications of AI such as protein-folding, speech-to-text, image recognition for quality control or to aid in medical diagnostics are not LLMs.

        It's LLMs specifically that are hot garbage, and also that are being heavily hyped by the AI industry.

        • by gweihir ( 88907 )

          It's LLMs specifically that are hot garbage, and also that are being heavily hyped by the AI industry.

          Indeed. Even LLMs have some uses, but these come nowhere even remotely close to justifying the money that gets burned on them.

        • Even LLMs are great for just the morass of shitty business data. We get PDF orders from a bunch of our customers; they don't order enough, or in a standard enough format, to automate it properly, but most of the time an LLM can digest that, match up the products, and produce a result at least as well as a human can. There's just not much opportunity for it to go wrong: the product list is constrained, and we also have it report out all the totals from the PDF, so if the dollar amount doesn't add up we won't accept it.
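The guardrail described above, re-totaling the LLM's extraction against the total printed on the PDF, can be sketched as follows. This is a hypothetical sketch; the function name, product SKUs, and tolerance are all invented for illustration.

```python
# Sketch of the totals check: an LLM extracts line items from a PDF order,
# and we only accept the result if the re-computed sum matches the dollar
# total read off the document itself. Otherwise a human reviews the order.

def validate_extraction(line_items, stated_total, tolerance=0.01):
    """line_items: list of (sku, qty, unit_price) tuples the LLM produced.
    stated_total: the dollar total printed on the PDF."""
    computed = sum(qty * unit_price for _sku, qty, unit_price in line_items)
    # Reject the extraction if the totals disagree beyond a rounding tolerance.
    return abs(computed - stated_total) <= tolerance

items = [("WIDGET-1", 3, 9.99), ("BOLT-M6", 100, 0.12)]
assert validate_extraction(items, 41.97)      # 29.97 + 12.00 matches
assert not validate_extraction(items, 45.00)  # mismatch -> flag for a human
```

The constrained product list does the rest of the work: the model can only match against known SKUs, so the failure modes are narrow and cheap to catch.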
        • Most of the really useful applications of AI such as protein-folding, speech-to-text, image recognition for quality control or to aid in medical diagnostics are not LLMs.

          It's LLMs specifically that are hot garbage, and also that are being heavily hyped by the AI industry.

          I agree and here's why...

          The term used to be "expert system". You'd write an expert system that could process hydraulic pressure analysis to run a factory. Or an expert system that could model forest fire spread to tell you where to send tanker airplanes to thwart it. The software in your car that watches sensors to model its surroundings and warn you about collisions... an expert system. Text-to-speech? Speech-to-text? Expert systems. All designed to do a specific task using data relevant to that task.

          • An expert system [wikipedia.org] is something different. It's a form of AI where the logic is explicitly coded, and is meant to reproduce the logic that a human uses. To create one, you begin by interviewing a human expert, ask them to describe their process for thinking through a problem, and then try to reproduce their process in code.

            Most modern AI is based on machine learning, which is a very different approach. No one codes in what the logic should be. You create a generic model that's flexible enough to allow almost any logic. Then you give it a huge library of training data, for example inputs and what the correct output should be for each one, and use an optimizer to adjust the model until it matches the training data.

            • An expert system [wikipedia.org] is something different. It's a form of AI where the logic is explicitly coded, and is meant to reproduce the logic that a human uses. To create one, you begin by interviewing a human expert, ask them to describe their process for thinking through a problem, and then try to reproduce their process in code.

              Most modern AI is based on machine learning, which is a very different approach. No one codes in what the logic should be. You create a generic model that's flexible enough to allow almost any logic. Then you give it a huge library of training data, for example inputs and what the correct output should be for each one, and use an optimizer to adjust the model until it matches the training data.

              Expert systems were very popular in the 1980s. They're not used as much today. Machine learning has replaced them in most applications.

              Really the distinction you're making is between special purpose and general purpose AI. A simple model that does one thing well is often more useful than a complicated one that does lots of things badly. But companies like OpenAI are obsessed with the goal of "artificial general intelligence", trying to create one huge model that can do anything a human can do.
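The contrast can be sketched in a toy example (an entirely hypothetical inspection problem with invented numbers): the expert system encodes the human inspector's rule directly in code, while the machine-learning version never sees the rule and instead searches for parameters that fit labeled examples.

```python
# Expert system: the logic is written down explicitly, straight from
# interviewing a human inspector about when a part passes.
def expert_system_pass(temp_c):
    if temp_c < 20:
        return False   # "too cold, the seal won't cure" - the expert's rule
    if temp_c > 80:
        return False   # "too hot, the part warps"
    return True

# Machine learning: nobody writes the rule. A generic model (here just a
# pair of thresholds) is fitted to labeled examples by searching for the
# parameters that best match the training data.
training = [(10, False), (25, True), (50, True), (75, True), (90, False)]

def fit_thresholds(data):
    best, best_err = (0, 100), len(data) + 1
    for lo in range(0, 101, 5):          # brute-force "optimizer" over
        for hi in range(lo, 101, 5):     # the model's two parameters
            err = sum((lo <= t <= hi) != label for t, label in data)
            if err < best_err:
                best, best_err = (lo, hi), err
    return best

lo, hi = fit_thresholds(training)
learned_pass = lambda t: lo <= t <= hi   # behaves like the expert's rule
```

A real model has millions of parameters and a gradient-based optimizer instead of this grid search, but the division of labor is the same: rules coded by hand versus rules recovered from data.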

              Thank you for that clarification. My first exposure to the term was back in '92, in the form of a novel by Marvin Minsky (and Harry Harrison). I haven't closely followed the terminology since and wasn't aware of the structural differences. Now I am.

      • by Anonymous Coward

        There is no "code" per se in LLM AI. The data (models) are the only thing that matters. In that way they are the same as search engine businesses.

        As far as the bubble goes, there are two likely outcomes. Most likely, all that money will be lost, like in the dotcom bust. Less likely, but possible given the significant amount of money involved, is that AGI emerges and kills us all.

      • by jonadab ( 583620 )
        > and because they don't want to reveal just how
        > hot-garbage their underlying code is

        Code quality isn't the issue. I have no idea what their code quality is, it might be fine, it might be terrible, but the reason I can't tell is because that has nothing at all to do with the problem with their results.

        The fundamental problem is that they've been actively trying to convince a lot of people, up to and including their shareholders, that the product is a fundamentally different thing than what it actually is.
    • by gweihir ( 88907 )

      Yep. And nobody, except some suppliers and some individuals, will have made a profit. It's like crapto all over again: tons of value just destroyed and nothing to show for it. Or, come to think of it, like all those insane gold rushes of the past. The idiots never learn.

    • Watch the energy markets

  • by PhrostyMcByte ( 589271 ) <phrosty@gmail.com> on Monday September 08, 2025 @10:59AM (#65646250) Homepage

    Just waiting for $200/mo price tag to drop once VC money runs out.

    • Re: (Score:3, Insightful)

      by wed128 ( 722152 )
      I suspect it'll eventually be per-prompt or per-token billing, because for heavy users even $200 a month would be a loss.
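Back-of-the-envelope, the flat-fee problem looks like this. All numbers below are invented for illustration, not OpenAI's actual costs or prices.

```python
# Why a flat monthly fee loses money on heavy users, and why per-token
# billing fixes that. Hypothetical numbers throughout.

COST_PER_1M_TOKENS = 15.00   # assumed provider cost, dollars
FLAT_MONTHLY_FEE = 200.00

def provider_margin(tokens_used):
    """Provider's profit on one flat-fee subscriber for the month."""
    cost = tokens_used / 1_000_000 * COST_PER_1M_TOKENS
    return FLAT_MONTHLY_FEE - cost

# A light user is profitable; a heavy user is served at a loss.
assert provider_margin(2_000_000) > 0    # 200 - 30  = +170
assert provider_margin(20_000_000) < 0   # 200 - 300 = -100
```

Per-token billing makes every user's margin non-negative by construction, which is why metered pricing tends to win once subsidies end.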
    • by gweihir ( 88907 )

      It will probably be a bit higher as soon as they actually have to break even.

  • by PPH ( 736903 ) on Monday September 08, 2025 @11:17AM (#65646290)

    ... how to turn a profit.

    Then stand back when the data center power consumption spikes and it catches fire.

    • by ledow ( 319597 )

      You joke, but if the AI was actually AI... you would be able to do that.

      The AI should be the revenue generator, directly. You shouldn't need to snakeoil it into human hands as something to boost their existing revenue if you pay for it.

      If the AI was even vaguely intelligent, it could be let loose on the Internet and generate income directly - legitimately or not! - for its creators. Instead you have to entice users to come to you to use it.

      We know AI isn't intelligent because if it were... companies would just be running AI quietly and the money would be rolling in.

      • by PPH ( 736903 )

        companies would just be running AI quietly and the money would be rolling in

        So much this.

        If I had some key technology or brilliant employees that enabled me to repeatedly kick my competitors' asses, why would I tell them?

  • Who is going to receive this money? Seriously: who is looking to making obscene profit?
    • Re:$115,000,000,000 (Score:4, Informative)

      by burni2 ( 1643061 ) on Monday September 08, 2025 @12:05PM (#65646422)

      Many very highly paid "architects" and "developers"; in the second line, many cloud companies and their employees; the only company that provides the hardware, NVIDIA; and so on ... so it's basically a charity, with a party.

      And, well, it's the party for the last day on Earth ... as we know it.

    • by allo ( 1728082 )

      A few days ago there was news that Meta offered an AI engineer $100 million for the first year. The field is very lucrative right now.

    • Mostly NVIDIA and other AI hardware and data center companies; some for running the models, some to acquire small future contenders; some will go to the programmers, but not "billions", more like one "b" for all of them combined; a lot for C-suites; and a lot to keep the price artificially low enough to keep everyone else out.
  • by fropenn ( 1116699 ) on Monday September 08, 2025 @12:19PM (#65646458)
    ...if it can con investors into giving them this much money.

    FTFY
  • This says that their revenue growth is going up... but locked behind the paywall, what *is* their revenue? Anthropic, in the lawsuit, says that their revenue is only $5B.

  • Yeah, you're going to burn through the money really quickly.

  • by wheelbarrow ( 811145 ) on Monday September 08, 2025 @04:24PM (#65647094)
    This projection argues that the price to play in the AI innovation game is at least $100 billion. This means the war will be between a handful of mega-corps rather than innovative startups. Has any other disruptive tech cycle in history been so exclusively top-heavy? This is why I am convinced someone, a startup, will burst the bubble from the low end.
  • by Growlley ( 6732614 ) on Tuesday September 09, 2025 @03:19AM (#65647902)
    like every one else?
  • by ledow ( 319597 )

    I keep saying it:

    When AI investors want to see their return, and those companies are forced to charge even cost-price for their services... and your ChatGPT subscription goes up by an order of magnitude... people will start to realise that it's really not worth several Netflix subscriptions every month for something to Google the answer you're after and then present it in the form of a limerick.

    We're in the sunk-costs, loss-leader phase right now... and OpenAI don't even have a single profitable tier of the
