
OpenAI Sam Altman Says the Company Is 'Out of GPUs' (techcrunch.com)

An anonymous reader quotes a report from TechCrunch: OpenAI CEO Sam Altman said that the company was forced to stagger the rollout of its newest model, GPT-4.5, because OpenAI is "out of GPUs." In a post on X, Altman said that GPT-4.5, which he described as "giant" and "expensive," will require "tens of thousands" more GPUs before additional ChatGPT users can gain access. GPT-4.5 will come first to subscribers to ChatGPT Pro starting Thursday, followed by ChatGPT Plus customers next week.

Perhaps in part due to its enormous size, GPT-4.5 is wildly expensive. OpenAI is charging $75 per million tokens (~750,000 words) fed into the model and $150 per million tokens generated by the model. That's 30x the input cost and 15x the output cost of OpenAI's workhorse GPT-4o model. "We've been growing a lot and are out of GPUs," Altman wrote. "We will add tens of thousands of GPUs next week and roll it out to the Plus tier then [...] This isn't how we want to operate, but it's hard to perfectly predict growth surges that lead to GPU shortages."
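
For a sense of what those rates mean per request, here is a quick back-of-the-envelope comparison. The GPT-4o rates are derived from the 30x/15x multiples quoted above, and the example prompt/response sizes are made up purely for illustration.

```python
# Back-of-the-envelope cost comparison using the per-million-token rates
# quoted in the summary. Request sizes below are illustrative only.

PRICES = {                                  # (input, output) USD per 1M tokens
    "gpt-4.5": (75.00, 150.00),
    "gpt-4o": (75.00 / 30, 150.00 / 15),    # derived from the 30x / 15x multiples
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request at the quoted list prices."""
    in_rate, out_rate = PRICES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Example: a 2,000-token prompt with a 500-token reply.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 2_000, 500):.4f}")
# gpt-4.5: $0.2250 vs. gpt-4o: $0.0100 -- roughly 22x more for this request.
```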


Comments Filter:
  • What a pointless, self-serving statement.

    Everyone in this room is now dumber for having listened to it (you). I award you no points, and may God have mercy on your soul.

    • "Thou shalt not make a machine in the likeness of the human mind." -- Rayna Butler, a survivor of the Butlerian Jihad

      (from the Dune prequel "The Battle of Corrin")

      • by xevioso ( 598654 )

        I never understood this in that universe. Like, what is that even supposed to mean, "likeness"? Were early computers like the human mind? I mean, in the movies and in the recent Prophecy series, it's pretty clear that there are computer-like objects around and machines that would require computers to work.

        • Spoilers incoming!

          Yes. The Old Empire had thinking machine servants. Robots with AGI circuitry. Androids, though they were not called that. And, of course, somebody decided to weaponize them. And, of course, they turned on their masters and became the evil robot overlords of the galaxy. They enslaved the humans and kept them alive largely as a quirk of their programming, but treated them very badly and wantonly experimented on them and killed them whenever it was convenient.

          The evil robot overlords ru

        • Before the prequels, it was originally a reference to Samuel Butler's late 19th century think piece about not letting machines dictate our decisions, and how that would inevitably lead to machine evolution, where the successful machines are the ones that get humans to make more machines and defer to their needs and logic. I don't think it was even explicitly about sentience, just that we'd create our own parasites.

          So Dune had humans doing all the data analyses and making all decisions, with machines at lea
      • These books are horrible. Kevin J. Anderson has been infamous for his shitty writing even before them, and apparently Brian Herbert is no better.

    • Because the only thing the "AI revolution" has going for it is a blind belief in brute force.

  • Well, I suppose we know where the Nvidia GPU shortage is coming from now...
  • Open source hardware.

    Fuck you Altman.
  • Sam Altman is a money raiser. This isn't an "alert" about a GPU shortage. It's a call for more investor money.

    I don't know him personally but from what I read there's a reason nobody wants to work for him.

    One day when there's a real AI someone will ask its opinion of Sam Altman, and the AI will laugh and laugh and laugh.

    As should we.

    • Re:PR CRAP (Score:4, Insightful)

      by gander666 ( 723553 ) * on Thursday February 27, 2025 @09:01PM (#65200063) Homepage

      Bingo. This reeks of excuses for the poor performance of their newest "model".

      • You know DeepSeek must be something if the US wants to jail people for simply downloading it. https://www.fox29.com/news/dee... [fox29.com]

        • You know DeepSeek must be something

          Indeed, and that something is "Associated with the bogeyman du jour."
          DeepSeek R1's CoT is pretty fucking cool... but only really because it has open weights. It doesn't perform particularly better than the other CoT models.
          MoE does let it crunch fewer numbers to get answers, but that's not new either.

          • by allo ( 1728082 )

            They didn't invent much new architecture-wise. They trained a good model and they engineered quite a few optimization techniques to do so. No magic involved. I bet we will soon see better models, but currently R1 is ahead of ChatGPT.

            • but currently R1 is ahead of ChatGPT.

              Not really.
              Its performance varies from "a good bit behind o1, to matching it, to microscopically ahead of it".

              The real trick is the computational cost: the MoE model only needs to crunch like 60B-something parameters, whereas o1 needs to crunch like 175B parameters.
              Means R1 is considerably cheaper, computationally.

              Pricing-wise, DeepSeek charges inference at like 1/10th of what OpenAI charges for o1, so competitively, it's a no-brainer.
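
For a rough sense of why fewer active parameters per token means cheaper inference, here is a sketch using the ballpark figures from this subthread (60B active vs. 175B, which are the commenters' guesses rather than published specs) and a common rule of thumb of roughly 2 FLOPs per active parameter per generated token.

```python
# Rough FLOPs-per-token comparison of a dense model vs. an MoE model, using
# the ballpark parameter counts from the comment above (assumptions, not
# official numbers). Rule of thumb: ~2 FLOPs per active parameter per token.

def flops_per_token(active_params_billions: float) -> float:
    return 2.0 * active_params_billions * 1e9

dense_active = 175.0   # guessed active parameters for a dense o1-class model (billions)
moe_active = 60.0      # guessed active (routed) parameters per token for R1 (billions)

ratio = flops_per_token(dense_active) / flops_per_token(moe_active)
print(f"Dense model needs ~{ratio:.1f}x the per-token compute of the MoE model.")
# ~2.9x -- the MoE model stores far more total weights, but only the routed
# experts are multiplied for any given token, which is where the savings come from.
```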

              • by allo ( 1728082 )

                I didn't look at benchmarks; it is just my experience with the results, mostly for technical questions (IT, math, science-related things), that DeepSeek usually gets what OpenAI misses. It is also nice to see the thinking output (and also use results from there that are not in the final answer), and I am looking forward to a European service hosting it.

                • Are you using o1 for OpenAI? They have many models available, depending on how you use them and how much you pay.
                  Also, I agree that it sucks that they hide the CoT tokens from you.
                  It's pretty neat to watch R1 reason.
    • Sam is a money raiser aaaand we aren't the target audience. Moonshots, like OpenAI, create drama. And people love drama, often mistaking it for reality. People sitting on big, big money aren't different or smarter than average, afaict... often they are deluded by their success or heredity... who doesn't love a little excitement in their life and the feeling that you got in early on the next big thing? Find some Saudi prince, tell him his cousin put 100 million in, and create some competition to stoke some jeal
    • This. Altman is one of those Silicon Valley golden boys: smooth-talking, backstabbing sociopaths.

      In the case of OpenAI, as soon as the potential financial success was on the horizon, he started wriggling out of their commitment to "openness" and "not for profit".

      Now, the Chinese have matched OpenAI with a far less expensive model, and other competitors are not far behind. OpenAI may genuinely have problems going forward. If so, look for Sam to take some sort of gold-plated exit.

  • Claude and Gemini are less expensive and offer more features. Whatever his "out of GPUs" story is, people will go elsewhere.
  • by BeaverCleaver ( 673164 ) on Thursday February 27, 2025 @09:26PM (#65200093)

    "This isn't how we want to operate, but it's hard to perfectly predict growth surges that lead to GPU shortages."

    -says CEO of AI company.

    If your AI is so great, shouldn't it be able to look at a trend and make a prediction about the future needs of your business?
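
To be fair, the naive version of that forecast is the easy part; it's the sudden surges that break it. A toy sketch of trend-based capacity planning, with entirely made-up weekly demand figures:

```python
# Toy capacity forecast: fit exponential growth to recent weekly demand and
# extrapolate a few weeks out. All numbers are made up for illustration.
import math

weekly_requests = [1.0, 1.3, 1.7, 2.3, 3.0]    # millions of requests, past five weeks

# Least-squares fit of log(demand) = a + b * week, i.e. exponential growth.
n = len(weekly_requests)
xs = list(range(n))
ys = [math.log(v) for v in weekly_requests]
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar

for week in (n, n + 1, n + 2):                 # forecast the next three weeks
    print(f"week {week}: ~{math.exp(a + b * week):.1f}M requests")
# A smooth trend like this is easy to extrapolate; what it cannot anticipate is
# a launch-day spike, which is presumably the "growth surge" being blamed here.
```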

  • I had to chuckle; it's the American solution: throw more money (aka GPUs) at it. Wait, wasn't it "work smarter, not harder"?

    Seems like the "AI" "industry" is ripe for disruption. Anyone?

    • Or, by charging more for the more resource-consuming model, they're allowing the market to discourage people from wasting resources. If it's not worth it, people won't pay.

      Really the models need to get smart enough to decide which model to use.

      Google wanted to get AI out there, so they do it with almost every search, but that isn't affordable, so it gives such horrible results that they're just tarnishing their AI image, in my opinion. Use it when it's called for, but then get the job done.
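
The "models deciding which model to use" idea is essentially cost-aware routing. Below is a minimal sketch; the model names are hypothetical, and the crude keyword heuristic stands in for the small classifier a real router would use.

```python
# Minimal cost-aware model router. Model names and the difficulty heuristic
# are hypothetical; a production router would use a learned classifier.

CHEAP_MODEL = "small-fast-model"
EXPENSIVE_MODEL = "giant-expensive-model"

def looks_hard(prompt: str) -> bool:
    """Crude stand-in for a prompt-difficulty classifier."""
    long_prompt = len(prompt.split()) > 200
    needs_reasoning = any(kw in prompt.lower()
                          for kw in ("prove", "step by step", "debug", "derive"))
    return long_prompt or needs_reasoning

def route(prompt: str) -> str:
    """Send easy prompts to the cheap model and hard ones to the expensive one."""
    return EXPENSIVE_MODEL if looks_hard(prompt) else CHEAP_MODEL

print(route("What's the capital of France?"))               # small-fast-model
print(route("Prove this invariant holds, step by step."))   # giant-expensive-model
```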

      • Do you think you can provide me with one measly example of this hugely superior AI you have access to by paying a lot of money or working for an employer that pays a lot of money? Because I'm just a plebe who learned from Richard Stallman not to pay for software, so I just use the free AIs and they help me out a lot. What am I missing by not buying the latest and greatest? Is it just prestige and stories about how great it is, without actually providing examples, just winking and nodding to others who pay for

        • Depends, what's your baseline? Compared to Amazon Rufus and whatever Google is putting into search results, the cheaper paid version of ChatGPT I use is practically godlike, and I presume the free one is the same as that.

          The reasoning / deliberative models that take a long time, I have tried somewhat and have found multistep tasks or complex formatting problems on which they do better. But you know what? I don't think people really 'reason' all that often to live, and I normally don't need AI to do much

        • I realized one really useful feature to me on ChatGPT is "Projects" (just the normal $20 paid version, not the super-expensive reasoning models this article is about). I am not sure if projects are available in the free version or not. But you can upload documents about a topic, and it will incorporate all that information into its chats with you in that project. It will remember specific facts about the "project" if told to do so and seamlessly integrate that knowledge into future answers. I have a
  • When the bubble bursts.
    • How many times has Trump declared bankruptcy again? Do you think the president knows that liquidity kills you quick but solvency doesn't matter except to use cynically as a political tool to press the buttons of people like yourself so you'll follow his agenda while he knows he can spend as he wants on whatever he wants and use the "there's no money left!" excuse to fire whomever he wants?

  • And Altman is now tilting at windmills.

  • We gamers paid to have those chips developed. Now there are none for us. Asshole!
  • Did China build DeepSeek when the US banned them from the best chips?

  • DeepSeek proved the "moat with alligators" around cutting edge AI is actually more like a koi pond. The time from release to cloning is three to six months. Given this new reality, OpenAI is going to charge through the nose knowing that they'll be undercut sooner rather than later. Maybe restricting the rollout will also slow down the cloning, though somehow I doubt it. They might be telling the truth about limited capacity because refusing to shut up and take our money in these critical months seems stupid

  • Finally the AI wasters can taste how it is for the rest of us. I can't wait for AI to fail completely so GPUs can finally be reasonably priced for the people that want to use GPUs for actual graphics (yeah, that includes video content creators).
  • Sure seems like these “AI” companies spend an awful lot of money, and then charge a lot of money, just to help students badly cheat at their homework.

    I still believe the costs of this crap will eternally outweigh any potential future benefit.
  • We are out of electrons to waste on 'AI'. Such a waste of electricity and hardware.

    I see why corporations are dropping the DEI programs; specifically, AI goes exactly counter to saving the environment.
    • by Idzy ( 1549809 )

      We are out of electrons to waste on 'AI'. Such a waste of electricity and hardware.

      I see why corporations are dropping the DEI programs; specifically, AI goes exactly counter to saving the environment.

      Huh?

      What does Diversity, Equity, and Inclusion (DEI) have to do with AI and the environment?

  • by nightflameauto ( 6607976 ) on Friday February 28, 2025 @10:01AM (#65200911)

    Someone, at some point, involved in this ridiculous "MORE MORE MORE" game that the current gen AI shit is stuck in needs to ask the question, "Why?" Why are we creating these entities that suck down power like nothing we've ever created before and require hardware stacks that make the supercomputers of yesteryear look quaint? Can they do some impressive, if questionable, things? Sometimes. But it's not like there's anything resembling a hard fact checker coming out of these behemoth machines that's anywhere near accurate. And those of us that code for a living, if we know what we're doing, find them to be questionable at best at code. They're helpful for newcomers as training wheels, sure. And that's not nothing I suppose, but I'm not sure it's worth creating yet more energy demand and sucking up all available advanced hardware for incremental, diminishing returns.

    I know any mention of regulation instantly gets rejected with, "But someone else may beat us to it!" But I have to ask the question: "Beat us to what?" An energy crisis? Resource depletion? A non-hireable population when we "win" the AI race and create the perfect machine-based worker bee? A collapsing infrastructure when the entire economy grinds to a halt as people lose spending power because jobs dry up when the AI finally gets good enough? What's the end-goal with this shit? What are all these resources giving us? More dystopia? I think we've been doing fine building that future for ourselves without AI to help speed it along.

    What are we getting for all this obsession with AI? Someone explain it to me in clear cost/benefit fashion. I'd really be curious if someone that's all-aboard the AI train can tell me what the end-goal actually is. Because if it's just about amassing data and wealth at ever-increasing rates, as it appears to be today, I don't know that that's gonna work out well for humanity on the whole.

  • I wonder if the carbon footprint of AI is smaller than its meatspace equivalent of actual human brains. I can't help but wonder if it would be more cost-effective to simply hire teams of grad students to answer questions. At least those aren't in short supply.

  • Fortunately for them, DeepSeek this week open-sourced five optimisation techniques they used to train R1. They know how to train models with fewer GPUs.

  • This is an exponential curve; the next 10x and 100x are impossible, in my opinion.
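
For a rough sense of the scale being dismissed here, a back-of-the-envelope sketch; the per-GPU power draw and cluster size are illustrative assumptions, not figures from the article.

```python
# Illustrative arithmetic behind "the next 10x/100x is impossible".
# All figures are rough assumptions (e.g. ~700 W per high-end accelerator,
# a hypothetical 100,000-GPU cluster), not data from the article.

gpus = 100_000
watts_per_gpu = 700            # assumed board power for a high-end training GPU

for scale in (1, 10, 100):
    megawatts = gpus * scale * watts_per_gpu / 1e6
    print(f"{scale:>3}x cluster: ~{gpus * scale:,} GPUs, ~{megawatts:,.0f} MW just for the GPUs")
# 1x ~70 MW, 10x ~700 MW, 100x ~7,000 MW -- before cooling, networking, or the
# rest of the datacenter, which is the scaling wall the comment alludes to.
```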
