
OpenAI Seeks New Valuation of Up To $90 Billion in Sale of Existing Shares (wsj.com)

OpenAI is talking to investors about a possible share sale that would value the artificial-intelligence startup behind ChatGPT at between $80 billion and $90 billion, almost triple its level earlier this year, WSJ reported Tuesday, citing people familiar with the discussions. From the report: The startup, which is 49% owned by Microsoft, has told investors that it expects to reach $1 billion in revenue this year and generate many billions more in 2024. OpenAI generates revenue mainly by charging individuals for access to a powerful version of ChatGPT and licensing the large language models behind that AI bot to businesses.

  • The valuation has apparently tripled since the last funding in April, but that was the period of maximum hype for OpenAI. While the future still looks good for OpenAI overall, hasn't almost all OpenAI news since then been negative? Reduced user base, reduced accuracy, more competition, etc. I can certainly see why OpenAI would be worth far more today than it was in mid-2022, but being valued higher than April 2023 seems odd.

    • by HBI ( 10338492 )

      They're trying to get funded by the stupid money now. They know that they'll never hit that peak again.

      • they might someday produce something truly useful and get smart investor money.

Right now it produces buggy code that doesn't handle edge cases, hallucinates "facts", writes mediocre, predictable story plots, and turns out abysmal math and physics solutions. But that could change.

      • About that valuation...

        Ten days ago, Harvard Business School published a paper titled "Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality".

        The AI peddlers cite the paper to say that overall ChatGPT (4) increases productivity, but if you zoom in, it shows ChatGPT increased "productivity" on tasks like "pen an inspirational memo" but DECREASED productivity on tasks like "analyze an existing business case based on spreadsheet data".

        • by HBI ( 10338492 )

          So, basically, a paper encapsulating the general sentiment around LLMs by people who actually get stuff done. I'll give it a read.

  • Remember back when Musk and others donated $1B to start OpenAI as a non-profit?

    https://news.slashdot.org/stor... [slashdot.org]

  • Holy shit, what are the chances that just as the non-profit, registered-501(c)(3) charity OpenAI seeks a $90B valuation, Sam Altman announces that they created AGI internally but are too scared to show anyone, so you definitely can't see it?

  • I don't see it (Score:4, Interesting)

    by LostMyBeaver ( 1226054 ) on Tuesday September 26, 2023 @10:50PM (#63880063)
    I just sat in a sales room in China discussing options for buying a massive-scale NPU supercomputer. The company selling it told me they use it to produce 8 industry-specific pre-trained models.

    I asked why they're wasting money on that. They were shocked.

    I explained it's a huge investment for a short-term gain.

    1) Even though they produce the hardware, and even design and fabricate the chips themselves, the machine already needs an upgrade before they even turn it on. I don't see any possible way in today's market to get a positive ROI on training models themselves; it's a loss-leader. The models cost at least 10x more to produce than you can sell them for, and by the time you make a sale, the buyer will want a new version.

    2) Sooner or later, we will have fed 100% of all digitized human knowledge into a single model. LLM training will become progressive rather than the one-off, highly iterative task it is today. At the current rate of progress, that should happen in 5 years. Google is best positioned to do it because they digitized more books than anyone else, and no one else can access that data in a training-friendly format.

    3) It won't be a big player who wins. Somewhere right now, maybe in a basement in Ukraine or a shack in Tanzania, there is a group of brilliant kids hacking away in assembly or VHDL, designing an inference engine. One of the kids is at a whiteboard figuring out how to mathematically reduce the computational complexity by a fractional exponent. They'll count clock cycles, ditch PyTorch, and release it on the world. Overnight, we'll go from gigawatts of power fed to inference engines to megawatts.
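    For what it's worth, the "fractional exponent" arithmetic checks out as a back-of-envelope sketch. The exponents and problem size below are hypothetical (nobody has demonstrated this for real inference engines), but shaving an O(n^2) algorithm down to O(n^1.5) at a problem size of a million is exactly the thousand-fold cut that would take gigawatts to megawatts:

    ```python
    # Back-of-envelope: how a fractional-exponent reduction in
    # algorithmic complexity changes relative compute cost.
    # The exponents and problem size are hypothetical illustrations.

    def cost(n: int, exponent: float) -> float:
        """Relative compute cost for a problem of size n at a given complexity exponent."""
        return n ** exponent

    n = 1_000_000  # hypothetical problem size (e.g. tokens processed)
    speedup = cost(n, 2.0) / cost(n, 1.5)  # O(n^2) -> O(n^1.5) is a factor of n^0.5
    print(f"{speedup:.0f}x")  # sqrt(1e6) = 1000x, i.e. gigawatts -> megawatts
    ```

    The half-exponent alone does the work: the ratio is n^0.5, so the bigger the workload, the more a "merely fractional" improvement pays off.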

    This is a terrible time to be in AI.

    The genie is out of the lamp and we're years, not decades from running out of wishes.
