

Pre-Product AI 'Company' Now Valued at $30 Billion
Financial Times: Venture capitalists have always been happy to back pre-profit companies. Back in the halcyon ZIRP era, they became happy to finance pre-revenue companies. But at least even Juicero, Wag and the Fyre Festival had an actual product. From Bloomberg over the weekend: "OpenAI co-founder Ilya Sutskever is raising more than $1 billion for his start-up at a valuation of over $30 billion, according to a person familiar with the matter -- vaulting the nascent venture into the ranks of the world's most valuable private technology companies.
Greenoaks Capital Partners, a San Francisco-based venture capital firm, is leading the deal for the start-up, Safe Superintelligence, and plans to invest $500 million, said the person, who asked not to be identified discussing private information. Greenoaks is also an investor in AI companies Scale AI and Databricks.
The round marks a significant valuation jump from the $5 billion that Sutskever's company was worth before, according to Reuters, which earlier reported some details of the new funding. The financing talks are ongoing and the details could still change."
OK, so a jump from a $5bn valuation less than half a year ago to $30bn must mean that Safe Superintelligence has an absolutely killer product, right? SSI focuses on developing safe AI systems. It isn't generating revenue yet and doesn't intend to sell AI products in the near future. "This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then," Sutskever told Bloomberg in June. "It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race."
" An undertaking of great advantage" (Score:3)
"but nobody to know what it is"
So how much value for a pre-company company? (Score:2)
Re: (Score:2, Insightful)
Re: (Score:2, Funny)
AI has no worth
Truly, you are a stupid person.
Re: So how much value for a pre-company company? (Score:2)
It is not generating revenue yet and doesn't intend to sell AI products in the near future.
You mean like the people who invested lots of cash into a company that cannot tell others what they actually do? Sounds like unelected leon would fire them.
Re: (Score:2)
Re: (Score:2)
I don't need to quote you because you made a statement that was pointless and 6 words long. Therefore, I skipped wasting my time on your incredible nonsense. You would understand that if you could read the first 5 fucking words I wrote.
You mean like the people
referring to your stupid person statement. Go fuck yourself!
Re: (Score:2)
You are, indeed, stupid people.
Re: (Score:2)
Chatbots, the thing you and other laypersons think of as 'AI', have no real worth.
Truly, you are a stupid person.
"Large Language Model (LLM) Market Revenue to Soar to USD 82.1 Bn by 2033 | Led by North America's 32.7% Share. By 2033, the Large Language Model market is forecasted to reach USD 82.1B, up from USD 6.0B in 2024, reflecting a 33.7% growth rate."
Their stochastic nature precludes them from ever becoming reliable.
This is an idiotic take.
They're universal function approximators. Their stochastic nature prevents literally nothing.
If you knew anything about them at all, you would know this.
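For what it's worth, the "stochastic" part is a decoding choice rather than a property of the network itself. A minimal sketch, assuming a PyTorch-style model that maps a token tensor to per-position logits (the names here are illustrative, not any particular library's API):

import torch

def greedy_decode(model, tokens, n_new):
    # Temperature-0 / greedy decoding: always take the argmax token.
    # Repeated runs on the same input give identical output, so the
    # randomness people associate with LLMs comes from optional sampling,
    # not from the function the network computes.
    for _ in range(n_new):
        logits = model(tokens)[:, -1, :]               # next-token logits
        next_tok = logits.argmax(dim=-1, keepdim=True)
        tokens = torch.cat([tokens, next_tok], dim=-1)
    return tokens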
I know quite a bit at this juncture (I first began experimenting with developing my own transformers and training my ow
Re: (Score:1)
Re: (Score:2)
Could be worse. (Score:1)
Musk is targeting a $75B valuation for x.AI. Now, x.AI has "a product" (but with tiny revenue relative to costs) and a couple billion dollars in datacentres, but most of the value is purely speculative. And if you're going to speculate, if the choice is Sutskever vs. Musk, I'm putting money on Sutskever every time. Musk has ridiculous ideas about how AI development works (and is very pro-censorship [bsky.app], though that's a side point), whereas Sutskever is arguably one of the most knowledgeable and credentia
Re: (Score:2)
the only thing he ever seems to think about is scale.
Which is frankly weird.
From a mathematical perspective, the only thing holding transformers back is the insufficiency of how they're trained.
Throwing more tokens at them will make them smarter, but it won't fix the problems.
Fortunately, there are really fucking smart people trying to figure out smarter ways to evolve these nets, and even improving transformers in general, though frankly, from an architectural perspective, the transformer is universal and complete; you're only working on performance
Re: (Score:3)
I mean, it does kinda, but very inefficiently. LLMs tend to learn the "easy" solutions early and these easy solutions tend to dominate the weights, but harder solutions that help generalize problems better take exponentially more training time (grokking) for the models to learn. So while more compute is "a" solution, it's neither a "good" solution nor a "practical" solution.
An example might be: " 'The schoolbus passed the racecar because it was going so fast.' - which was going fast, the schoolbus or the
Re: (Score:2)
I mean, it does kinda, but very inefficiently. LLMs tend to learn the "easy" solutions early and these easy solutions tend to dominate the weights, but harder solutions that help generalize problems better take exponentially more training time (grokking) for the models to learn. So while more compute is "a" solution, it's neither a "good" solution nor a "practical" solution.
They need better training.
We currently train them using very-close-to-brute-force methods.
Backprop is more efficient than flat brute force, but it's still terribly inefficient and prone to mistraining.
An example might be: " 'The schoolbus passed the racecar because it was going so fast.' - which was going fast, the schoolbus or the racecar?". The "easy solution" is "schoolbuses are slow, racecars are fast, so it's the racecar". The harder solution is the multiple steps of parsing the word order of the sentence to understand that the sentence is actually counterintuitive. Models will tend to learn the first solution really early, but take a LOT longer to learn the latter solution. One of the key areas of focus is on accelerating the process of learning "hard" solutions, with things like multi-token prediction (sketched after this comment), intermediate-token prediction, non-token-based models, lognormal weights, and so forth.
The length of time to learn is a function of the training methodology, is what I'm saying.
We can't really generalize on tendencies to learn a thing, except in the context of a particular training methodology.
With LLMs, the balance between natural and human selection is still too close to natural. It's too mu
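To make one of those techniques concrete: below is a rough sketch of multi-token prediction, where a shared trunk feeds several output heads and head i is trained to predict the token i+1 steps ahead. This is a simplified illustration of the general idea, not any specific published architecture; all names are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTokenHeads(nn.Module):
    # k linear heads over a shared hidden state; head i predicts token t+i+1.
    def __init__(self, d_model, vocab_size, k=4):
        super().__init__()
        self.heads = nn.ModuleList(nn.Linear(d_model, vocab_size) for _ in range(k))

    def forward(self, hidden):                 # hidden: (batch, seq, d_model)
        return [head(hidden) for head in self.heads]

def multi_token_loss(logits_per_head, tokens):
    # Sum cross-entropy across heads; each head's targets are shifted further,
    # which pushes the model toward representations that look ahead rather than
    # settling for the "easy" one-step association.
    loss = 0.0
    for i, logits in enumerate(logits_per_head):
        shift = i + 1
        pred = logits[:, :-shift, :]           # positions that have a valid target
        tgt = tokens[:, shift:]
        loss = loss + F.cross_entropy(pred.reshape(-1, pred.size(-1)),
                                      tgt.reshape(-1))
    return loss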
Re: (Score:1)
Your deep ignorance is showing ... again.
They need better training.
We currently train them using very-close-to-brute-force methods.
Backprop is more efficient than flat brute force, but it's still terribly inefficient and prone to mistraining.
"Flat brute force" methods for training a neural network? Do you think we try every possible combination of weights until we find something suitable? Do you have any idea how absurd that would be? Also, backprop is nothing at all like 'brute force'. Yes, it has its share of problems (none of which you mention) but inefficiency isn't one of them. If you think you can do better, I look forward to your paper.
With LLMs, the balance between natural and human selection is still too close to natural. It's too much try, check fitness, roll back. [...] The fitness function can't really detect such a thing evolving
Um... No one actually trains a NN with an evolutionary a
Re: (Score:2)
Your deep ignorance is showing ... again.
Dude, shut up. You have demonstrated you have so little fucking idea what you're talking about that you're a meme at this point.
I'd be happy to start listing your idiotic takes if you've forgotten them.
"Flat brute force" methods for training a neural network?
Yes. Random change, run fitness function, roll back or make another random change.
Do you think we try every possible combination of weights until we find something suitable?
There is no "we" here, dipshit.
Tell me again how NNs are "combinational logic, which is a type of automata".
Tell me how feed-forward networks are "very limited in what they can do".
Tell me again how "we don't use quantized m
Re: (Score:2)
Also- fuck- I forgot this great one. Tell me again about how there is no state retained between tokens. (tell me again you have no idea what a transformer is)
It Might Work... (Score:4, Interesting)
The standard model is to start with a (minimal) product and improve its "market fit" by acquiring customers and iterating. There's a very good reason why that's the standard model: history has shown that if you just take a ton of cash and cloister yourself to build the "perfect product", at the end you're likely to have a "perfect product" that no one actually wants to buy.
That being said ... venture funding is already one step up from gambling, and even with the standard model something like one in ten startups fail. Also, building an AI company does require substantial up-front investment.
So yeah, this is probably a terrible idea, but like I said, so are most venture investments. It probably has as much chance of success as any other startup, with a much bigger payoff if it does succeed.
Re: (Score:3)
That being said ... venture funding is already one step up from gambling, and even with the standard model something like one in ten startups fail. Also, building an AI company does require substantial up-front investment.
I'm pretty sure that it's one in ten startups succeed and not "fail". Back in the 2000s I changed jobs to join a startup that did succeed and got bought out by a Fortune 500 company and it was a pretty interesting experience to be part of a successful startup, even if joining late in the "startup" period. I can't believe most startups make it. After we got bought out, all our top management left by choice. The acquiring company was willing to give them jobs, but they chose to move on. Shortly af
Price != Value (Score:2)
Re: Price != Value (Score:2)
I worked as a tech sector analyst at a small securities firm. In this scenario, we would say that you are investing in the management team.
Re: (Score:2)
In economic theory, the "value" of a thing is the price at which a buyer and seller agree. By that definition, the buyers of the stock do indeed value the company at the stated amount, because that is what they are willing to pay for the shares.
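For scale, the back-of-envelope arithmetic on the figures in the summary (assuming the round is priced at a ~$30B post-money valuation; that detail isn't confirmed in TFS):

raise_amount = 1e9      # "more than $1 billion"
post_money   = 30e9     # "valuation of over $30 billion"
prior_value  = 5e9      # the earlier $5B valuation per Reuters
print(f"implied stake sold: {raise_amount / post_money:.1%}")   # ~3.3%
print(f"valuation multiple: {post_money / prior_value:.0f}x")   # 6x in under a year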
Short (Score:2)
Cannot short private equity.
That is all you need to know.
This article is shit (Score:2)
* destroy humanity
* is not controlled by oligarchs
* does not lead to ultra-late-capitalist dystopia
And all you can do is complain it has no current profits. Almost makes me think y'all deserve a dystopian AI outcome.
Re: (Score:3)
Almost makes me think y'all deserve a dystopian AI outcome.
If that isn't abundantly clear to you, sir, madame, or other- I'm afraid you haven't been paying attention :)
SSI, a division of Lumon Industries (Score:2)
Re: SSI, a division of Lumon Industries (Score:2)
Re: (Score:1)
I definitely miss Strategic Simulations, Inc.
Maybe good, maybe not (Score:2)
"This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then," Sutskever told Bloomberg in June. "It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race."
That reminds me of a crazily well-funded skunkworks - minus the corporation-that-actually-makes-and-sells-stuff which would normally host / fund such a process. From that standpoint it might be good. Also, not having to respond to competitors' products or race to get a jump on the market may mean more thorough, buttoned-down, maintainable code.
OTOH, the traditional skunkworks has a flying-under-the-radar feel that lends some urgency to the work. The kind of funding described in TFS might be the recipe for a bloate
Bubbling bubble is bubbly (Score:3)
And here we go with another tech bubble. Overvaluation is going to be a problem if this is any indication.
The good news for all the investors, IMHO, is that the bubble will not burst for a while. It might even take an advanced AI to figure out that AI is not worth this much, because people tend to do this dumb stuff all the time.
But if you have a position in AI, learn to read tea leaves, because collapse is coming and you will want to pull out before the earth-shattering "POP!" I dunno, maybe 5 years from now?
Literally every player in the AI market is over their skis on this, I think.
Re: (Score:2)
It's going to happen a lot sooner. We're currently in the "a little bit at first" phase and are rapidly approaching the "then all at once" phase. Doesn't help that there's some uncertainty in the markets due to specific external factors right now.
Re: (Score:2)
Re: (Score:2)
And here we go with another tech bubble. Overvaluation is going to be a problem if this is any indication.
The good news for all the investors, IMHO, is that the bubble will not burst for a while. It might even take an advanced AI to figure out that AI is not worth this much, because people tend to do this dumb stuff all the time.
But if you have a position in AI, learn to read tea leaves, because collapse is coming and you will want to pull out before the earth-shattering "POP!" I dunno, maybe 5 years from now?
Literally every player in the AI market is over their skis on this, I think.
The pop might happen sooner. Anyhow, anyone have any pets.com stock? At this point, call it PetsAI dot com, and you can offload it for a billion or so.
Doesn't seem nonsensical (Score:2)
* I'm not sure that position makes any sense as AI becomes cheaper, but this seems to be the theory driving valuations of dozens of major tech companies over the last year or so.
Re: (Score:2)
I would argue that "Safe AI" and "Superintelligence" are both novel and untested ideas. There is no precedent for "safe" AI, and there isn't even an actual definition of "super" intelligence.
Re: (Score:2)
"Super intelligence" is reasonably easy to define, even if we don't know what it might look like.
It's "Safe AI" that's the tricky one.
"Super" intelligence? (Score:2)
What in the world is that?
Whatever the marketers want it to mean, I suppose.
Re: (Score:2)
I'm sure someone has said this before, (Score:2)
And the AI bubble Grows (Score:2)
Gonna burst some day, my friends. Perhaps this best evah company can buy up some of the AI center leases Microsoft just dropped out of?