
OpenAI Partners Amass $100 Billion Debt Pile To Fund Its Ambitions (ft.com) 77

OpenAI's data centre partners are on course to amass almost $100 billion in borrowing tied to the lossmaking start-up, as the ChatGPT maker benefits from a debt-fuelled spending spree without taking on financial risks itself. Financial Times: SoftBank, Oracle and CoreWeave have borrowed at least $30 billion to invest in the start-up or help build its data centres, according to FT analysis. Investment group Blue Owl Capital and computing infrastructure companies such as Crusoe also rely on deals with OpenAI to service about $28 billion in loans.

A group of banks is in talks to lend another $38 billion for Oracle and data centre builder Vantage to fund further sites for OpenAI, according to people familiar with the matter. The deal is expected to be finalised in the coming weeks. OpenAI executives have said they plan to raise substantial debt to help pay for these contracts, but so far the financial burden has fallen to its counterparties and their lenders. "That's been kind of the strategy," said a senior OpenAI executive. "How does [OpenAI] leverage other people's balance sheets?"

  • Long con (Score:5, Insightful)

    by fluffernutter ( 1411889 ) on Saturday November 29, 2025 @09:36AM (#65824475)
    How do I get a job tricking wealthy people into giving me billions of dollars for a technology with no real end game?
    • Re: Long con (Score:5, Insightful)

      by reanjr ( 588767 ) on Saturday November 29, 2025 @10:05AM (#65824501) Homepage

      Step 1: join Y Combinator.

    • Tell them that in 2 years you'll reach AGI. And 2 years later, in 2024, tell them again.
    • Tell them, "I got an invention that'll eliminate half of all the jobs in the world, then my investors get to keep all the money we used to pay them!"

    • by gweihir ( 88907 )

      Indeed. For some reason the idea of slaves (whether humans or machines) also sells really well to some people of low/negative worth as persons, some with tons of money.

    • Used to be that Ponzi schemes had to at least pretend to show some semblance of a profit to keep the suckers paying in. Today an entertaining story is enough.

      • There are a lot more people in the world today than 40 years ago. You don't need to fool nearly as many as before.

  • It could be worth it (Score:5, Interesting)

    by phantomfive ( 622387 ) on Saturday November 29, 2025 @10:06AM (#65824503) Journal
    If they end up somehow building strong AI, then the investment will pay off in huge multiples and will absolutely be worth it.

    If they don't manage to create strong AI, but manage to create a better search engine that somehow replaces Google, then it will be worth the investment (for comparison, Google profit is on the order of $100 billion per year).

    There are a lot of other potential products that could bring in heavy revenue, even without strong AI. Airbnb has $2 billion a year in net profit, which isn't great, but it's conceivable that even with the current crappy AI product, OpenAI could make a reasonable amount of revenue. With billions of potential customers, they don't need to make a lot of money off each person.
    • by coopertempleclause ( 7262286 ) on Saturday November 29, 2025 @10:25AM (#65824517)
      The problem is the amount of enshittification that ChatGPT will have to undergo to make any serious amount of money.

      A Google search is going to cost fractions of a cent, whereas some estimates put a ChatGPT query as costing around 36 cents.

      Assuming that they maintain their free tier, that's a hell of a lot of expenses to offset before you even start to dent other expenses and start to approach profit.

      So once the "growth phase" is done, I would expect ChatGPT to be quickly stuffed full of every type of advertising you can imagine.
      • by Ed_1024 ( 744566 ) on Saturday November 29, 2025 @01:06PM (#65824803)

        That is just one of the major underlying problems IMO. If ChatGPT et al. were charged out at a modest profit, who would pay for these services that most people using LLMs non-industrially get for free at the moment? Watch an hour of ads to get one animated gif, or the answer/hallucination to some trivia question? $300pm Basic Tier with reduced ads, $1000pm no ads?

        It smacks of the old: we lose on every sale but we are going to make it up in volume! It is not like software where the development costs are amortised across future sales and hosting is cheap as an ongoing cost - if queries average 36c that equates to ~2kWh or >7MJ of energy every time someone hits the button. 18 queries and that is a gallon of petrol...
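        That arithmetic can be sanity-checked in a few lines (the electricity price below is an assumed figure, not from the article; the 36c per-query cost is the estimate quoted upthread):

```python
# Back-of-envelope check of the energy claim above.
# Assumed numbers: electricity at $0.18/kWh (hypothetical rate, pick your own);
# a US gallon of petrol holds roughly 130 MJ.
COST_PER_QUERY_USD = 0.36   # per-query cost estimate cited upthread
PRICE_PER_KWH_USD = 0.18    # assumed electricity price
MJ_PER_KWH = 3.6            # exact conversion
PETROL_GALLON_MJ = 130.0    # approximate energy content of a gallon of petrol

kwh_per_query = COST_PER_QUERY_USD / PRICE_PER_KWH_USD   # ~2 kWh
mj_per_query = kwh_per_query * MJ_PER_KWH                # ~7.2 MJ
queries_per_gallon = PETROL_GALLON_MJ / mj_per_query     # ~18 queries

print(kwh_per_query, mj_per_query, round(queries_per_gallon))
```

        Under those assumptions a query really does land around 2 kWh (>7 MJ), and about 18 of them match a gallon of petrol; a cheaper electricity price would shrink the kWh figure proportionally.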

    • The way it's going, pretty soon any company that wants to survive will have to use AI. Whether it will be from OpenAI is another story.
    • by allo ( 1728082 )

      Is there even a good business model for superintelligence?

      I think they will get to the "Google" model. Since it has become clear that AI chat and search engines are converging, it is pretty clear what the main business model will be. Chat (and its voice-assistant version) will become the primary way to get information, both knowledge and problem solving, with the service providing a combination of search and AI inference. I'd say Perplexity is undervalued, but they might be one of the pioneers that will be fo

      • Is there even a good business model for superintelligence?

        A pesticide for any planets crawling with pesky lifeforms you want to get rid of?

      • Is there even a good business model for superintelligence?

        It does all the work.

    • by gweihir ( 88907 )

      If they end up somehow building strong AI ...

      We can stop right there, because that is not happening anytime soon and LLMs will NOT be the way there if it ever happens.

      • Why should we trust your unsupported hallucinations over the consensus? Mood affiliation?

        • by gweihir ( 88907 )

          Funny, how you use cheap manipulation techniques in your answer. Makes me think you have absolutely nothing except a big ego.

          • Are you so unaware that you can't see I'm using tit-for-tat?

            • by gweihir ( 88907 )

              You really think that? Wow, talk about reeeeeally clueless.

              • Recently I've seen that Blue Trane copy-pastes some of his posts from LLMs.

                The only thing more contemptible and perverse than a bot programmed to pretend to be human, is a human voluntarily diminishing himself to be merely another repeater node for a bot which is trying to pretend to be him, in some kind of inverted Inception which obviates Selfhood.

                It's like two people playing some turn-based game over a network where both of them are using the same cheat software to attempt to win against the other. In th

      • LLMs will NOT be the way there if it ever happens.

        Yeah that's true, I agree.

        because that is not happening anytime soon

        How can you be so certain? A strong AI algorithm could be invented or discovered at any time.

    • by evanh ( 627108 )

      AGI isn't anywhere in sight, and never was, let alone anything better. There is no sign of anything intelligent.

      OpenAI ain't going to supplant established players. Google has already got ahead on their end, operating a more efficient search solution. Anthropic has already captured programming, so OpenAI has its work cut out to compete there too.

      They have to find a new niche but most uses of LLMs don't require large centralised data centres at all. Transcribing and form-filling for closed datasets can

  • by Sethra ( 55187 ) on Saturday November 29, 2025 @10:39AM (#65824539)

    GNCA covered this in detail. There are some insane financial reach-arounds going on, with everyone promising billions to everyone else in order to pump up stock prices:

    Worth the watch:
    https://youtu.be/h3JfOxx6Hh4 [youtu.be]

  • by rsilvergun ( 571051 ) on Saturday November 29, 2025 @11:03AM (#65824583)
    What was AI going to solve that was worth a trillion dollars?

    Wages. The problem AI is designed to solve is paying wages.
    • by gweihir ( 88907 ) on Saturday November 29, 2025 @11:24AM (#65824639)

      No. The "problem" that LLMs are pushed as "being able to solve" is wages.

      First, wages are not actually a problem. If you want a market, people need to be able to buy products. Second, LLMs have so far proven incapable and incompetent in anything that would have been a real killer. They will continue to do so.

      Hence we will get (if they manage to bring down the cost massively, which is a big if) somewhat better search that occasionally returns total crap, cheap generation of low-quality graphics and music and, maybe, some specialist LLMs that solve some actual problems, but nothing earth-shattering.

      • Want a market? You need to read up on the history of antitrust law, or literally pay attention to anything that's going on in the economy right now. Capitalists, especially the billionaire ones, do not want a market; they want absolute control and power.

        Billionaires are in the process of dismantling capitalism and replacing it with a feudal system, with themselves as the lords and the machines as the peasantry. They will have a handful of scribes in the form of engineers keeping the machines running and a han
        • by gweihir ( 88907 )

          That will not work.

          • If you think it won't happen, you need to come up with a better answer than "that won't work".

            Explain to yourself why it won't work. If you can't do that then you are just coping with the coming crisis.
            • by gweihir ( 88907 )

              Have a look at history. I recommend the French revolution, in particular. Point is, you can keep people in poverty, but putting them there rarely works on a mass scale, if it has ever worked at all.

              • Are you saying native Americans are too small to count?

                • by gweihir ( 88907 )

                  That is a special case and it required massive bloodshed, a relatively small group having it done to them and was basically about freedom, not direct possession. Still an atrocity, but not one relevant for the current discussion.

              • Counterpoints: The Great Depression, the impoverishment of the Luddites, the impoverishment of Gen. Y/Z/Alpha. The French revolution was an outlier, and the French aristocracy didn't even have the benefit of a massive heavily automated surveillance apparatus, much less the ability to even dream of armed killbots.

                • by gweihir ( 88907 )

                  There are factors on both sides that are hard to evaluate. The Great Depression, for example, was not intentionally engineered. (I am sure there are conspiracy theories that say otherwise, but really, the complexity of doing so is beyond the human race still today.) The Luddites were a comparatively small group. Gen. Y/Z/Alpha is not targeted action, but more a long boom cycle coming to its end and too many greedy assholes refusing to adjust and making it worse. A surveillance apparatus is only useful when

    • That might be the pitch, but it's far from clear that the cost of AI will be less than...paying wages. AI products are NOT CHEAP, despite all the "free" loss leader AI chatbots like ChatGPT. For business use, each purpose-specific AI requires steep subscription fees, often paid per token.

      • It's not about cost, it's about dependency. As it stands, if you're a billionaire you are completely dependent on employees and consumers for your wealth and prestige and power.

        They don't like that. They don't like that at all.

        So they are more than happy to spend more resources especially since they have unlimited resources because we let them have unlimited resources.

        When I say that they are dismantling capitalism this is what I mean. It means that profit and loss are no longer the driving motivat
        • "we let them have unlimited resources."

          Why shouldn't we let ourselves have unlimited monetary resources, too?

        • They don't like that. They don't like that at all.
          So they are more than happy to spend more resources especially since they have unlimited resources because we let them have unlimited resources.

          You sound like you've been watching too many YouTube influencers.

          What exactly don't "they" like about deriving power and prestige from employees and consumers? That would seem to be exactly what they DO crave!
          What unlimited resources are you talking about? There is no such thing as unlimited.

      • by allo ( 1728082 )

        Did you ever look into what you can get as local models? Usually the latest open-weight models are at most 6 months behind the current state-of-the-art models. Soon they will converge toward some "good enough" state, just like some people buy expensive Adobe products but most can do their day job using open-source alternatives. And their boss might pay for the Adobe subscription as a bonus, or because he believes it still gives them a slight edge over the competition, but most normal users won

        • First, the specialty AI products, like the ones that look for cancer in X-rays, for example, do not provide local models. These specialty products aren't just a function of the model, the model is just one piece of the larger product.

          Second, there will always be a need for models that take into account current events, such as the results of an election that just happened.

          • by allo ( 1728082 )

            Models are not made for current events. Even the big companies deploy models with a knowledge cutoff of 6 months or more. You add data about recent events by letting the model access databases, web search, etc., or by proactively adding it to the request.

            In the backend this could read like this:

            System: You are Joshua, a super intelligent AI answering stupid user questions.

            You have the following tools:
            - web search(search term) -> 10 results of related webpages
            - fetch(URL) -> content of a webpage
            - chess(curr
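            A toy version of that backend loop might look like this (all names and the message format are made up for illustration, not any real API; the tools are stubs):

```python
# Minimal sketch of a tool-calling backend: the model names a tool,
# the backend executes it and returns the result as a new "tool" message
# that gets appended to the conversation for the next model request.
def web_search(term):
    # Stub standing in for a real search backend.
    return [f"result {i} for {term!r}" for i in range(1, 4)]

def fetch(url):
    # Stub standing in for a real HTTP fetch.
    return f"<html>content of {url}</html>"

TOOLS = {"web_search": web_search, "fetch": fetch}

def handle_tool_call(name, arg):
    """Run the tool the model asked for and package the result as a message."""
    if name not in TOOLS:
        return {"role": "tool", "name": name, "content": "error: unknown tool"}
    return {"role": "tool", "name": name, "content": TOOLS[name](arg)}

msg = handle_tool_call("web_search", "latest election results")
```

            The point being that freshness lives in the tools and the retrieved data, not in the model weights, which is why a 6-month knowledge cutoff is tolerable.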

            • "Current" can easily be read to be six months old. The point is not the precise definition of "current" but a recognition that models do need to be updated as time goes on. A model from a decade ago is probably not so good today.

              • by allo ( 1728082 )

                The point is that we will hopefully get to a state where new models are released when they are more clever, not just as a knowledge update. LLM knowledge is unreliable and inefficient (you'll find many gaps even in topics that seem overfitted), while retrieval is cheap and has measurable guarantees. Your current Wikipedia dump is as correct as current Wikipedia, and you can check articles you suspect to be wrong. LLMs retrieving content from them will also have more reproducible results across different see

  • **A Market Theory of Money (Oxford, 1989)** - longer exposition of the same idea

    * In this later book Hicks develops the "credit -> money" story in more detail: how a creditor's acceptance of a **bank's promise to pay** effectively turns that promise into "quasi-money" and, as acceptance widens, into money proper. A representative sentence from the book (OCR/snippet) reads: *"...he accepts the bank's promise to pay as

    • Can you reformat that point in plain ASCII?

      here is just Point 2, stripped down to plain ASCII, ready for Slashdot posting:
      From John Hicks, A Market Theory of Money (1989):
      "He accepts the bank's promise to pay as being as good as money. At first there may be confidence only within a narrow circle. But as time goes on, the circle widens. The bank notes become a quasi-money, and eventually come into general use."
      Meaning in one sentence:
      Credit becomes money when enough people trust and accept the promise as pay

    • by gweihir ( 88907 )

      This is actually pretty old. There are no working safeguards in place. Maybe when the current AI hype crashes half of the world economy when it ends, we may get those safeguards. Or not.

  • I wonder who will get all these data centers on the cheap.
    • It's not if, but when.

      Unfortunately, all those data center machines will be obsolete by then, and probably just head for the landfill.

      • I'm thinking sometime between right now and late 2026. It may be starting to pop already.

        The guts of the data centers will mostly head for the landfill (or at most get a short stint in cryptocurrency mining or HPC operations); the buildings may get repurposed as conventional data centers or possibly warehouses or factories.

    • Depending on when it pops many of the datacenters may not have been built. The ones that have been built might not have power, and (in the USA) by the time all those big Westinghouse reactors and small SMRs have been built the datacenters might be obsolete and not worth updating without big AI customers. So there's a possibility that they'll just end up abandoned building complexes in the desert.

      • I'm really hoping it pops before the Pitt Race track gets bulldozed.

        Story for those not in the know, what's heavily rumored and circumstantially almost certain to be an AI datacenter operation is in the process of buying out Pitt Race at the height of its success from the already generationally wealthy family that owns it for what's rumored to be a 9-digit sum. The race track happens to be next to some major electrical infrastructure. Equipment from the track has already been auctioned off.

        I was also kind o

  • Will investors get a return on their investment? I have my doubts, with the flaws and limitations of AI and ChatGPT being what they are. It is not completely worthless: I have been using it for search queries, and it provided some really good, well-written answers. The best I can predict AI doing is combining with robotics for many menial repetitive tasks like warehouse labor, and those robotic dogs seem cool too, and I am sure the military-industrial complex is drooling over it too, but since Sam Altman doesn't
  • Not For Private Gain [notforprivategain.org]

    “Nearly a year after its original proposal, OpenAI has completed its corporate restructuring. Before agreeing not to object to the restructuring, the Attorneys General of California and Delaware extracted 20 concessions from OpenAI, enumerated in memoranda of understanding.”

    “In light of those concessions and OpenAI’s corporate filings, we find that the outcome — while still inherently contrary to OpenAI’s charitable mission — is a substant
