OpenAI Now Knows How To Build AGI, Says Altman (samaltman.com) 125
OpenAI CEO Sam Altman says the AI startup has figured out how to build artificial general intelligence (AGI) and is now targeting superintelligent systems that could transform scientific discovery.
In a blog post, Altman predicted AI agents could begin integrating into workplaces by 2025. He outlined plans to develop AI systems surpassing human-level intelligence across all domains. "We are now confident we know how to build AGI as we have traditionally understood it," wrote Altman.
The statement represents a significant shift as major AI companies rarely provide concrete timelines for AGI development.
SnakeOilCo knows how to make panacea, says ceo (Score:5, Informative)
As if his financial incentive in deceiving customers weren't enough, his own continued ownership stake of his company depends on him saying this.
See also Vernor Vinge's "A Fire Upon the Deep" (Score:4, Insightful)
Really have to wonder if Sam Altman has read and appreciated the prologue to Vernor Vinge's "A Fire Upon the Deep"?
Is the real Blight irony & love of fast money? (Score:3)
As I imply in a previous post mentioning AI, space settlement, and my sig: https://slashdot.org/comments.... [slashdot.org]
Re: (Score:2)
I would instead recommend reading the last part of Vinge's "Bookworm, Run!"
It doesn't matter what Sam Altman chooses to do. If AGI is possible, OpenAI could shut down tomorrow, and the only difference is that some other company's (or government's) AGI will appear instead.
Re: (Score:3)
Sam Altman is as ambitious as Lucifer and about as trustworthy.
"You could parachute Sam into an island full of cannibals, come back in 5 years, and he'd be the king." -- Paul Graham
Re: (Score:2)
Re: (Score:3)
Free will causes a lot of problems.
Maybe we'd be better off without it.
Re: (Score:2)
Free Will gives us no one to blame anymore for our own failings except ourselves. The Devil make me do it... no wait, sorry, it was me all along...
Re: (Score:3)
Don't besmirch Lucifer with such a comparison. Tom Ellis will show up and pimp-slap you into a wall while Mortensen and Pelligrino cheer him on.
Re: (Score:2)
Brobdingnagan? (Score:2)
Never use a big word when a diminutive one will do.
Re: (Score:2)
Especially when you can't even spell it correctly [merriam-webster.com] or bother to look it up.
Re: (Score:2)
Eschew obfuscation!
Re: (Score:2)
As if his financial incentive in deceiving customers weren't enough, his own continued ownership stake of his company depends on him saying this.
This exactly. He needs to keep the stock price up.
Snake Oil or Hallucination? (Score:2)
Re: (Score:2)
Re: (Score:2)
Oh yeah, let me show you all this evidence about the things that will happen in the future. That's how evidence works, it appears BEFORE the thing happens.
Even dumber you're asking for evidence that something won't happen.
How about you show me evidence that a magical unicorn won't give me 30,000 bricks of solid gold on July 17th, 2025.
Re: (Score:2)
Oh yeah, let me show you all this evidence about the things that will happen in the future. That's how evidence works, it appears BEFORE the thing happens.
Even dumber you're asking for evidence that something won't happen.
How about you show me evidence that a magical unicorn won't give me 30,000 bricks of solid gold on July 17th, 2025.
Are you a virgin? If yes, you may have a chance. But if no, you won't see the unicorn, therefore you won't get the bricks. You defeated yourself with mythology.
Oh, wait. This is Slashdot. There's still a chance!
Re: (Score:3)
Neither of those make him wrong tho. If you want to convince me he's wrong, you'll need some actual...evidence.
According to Dave, aliens are real and frequently visit earth and probe people and cows. Dave sells tin-foil hats to protect against the alien invaders. If you want to convince me that Dave is wrong, you'll need some actual...evidence.
Re:SnakeOilCo knows how to make panacea, says ceo (Score:5, Insightful)
The burden of proof rests with the person making the positive claim. If Sam wants anyone to believe that his company has somehow solved a problem that has eluded the world's greatest minds for thousands of years, he had better provide some actual evidence.
Considering his and his company's very long history of making false, misleading, and arguably fraudulent statements, you'd need to be a fool to take him at his word. This is the same guy who, just a few months ago, claimed that their new model could reason ... by hiding part of the output from the user. Are you seriously going to take him at his word?
To give you an idea of just how absurd this most recent claim is, we don't even understand the problem well enough to know what questions to ask. It's like a first-grader, still struggling with two-digit sums, claiming they can prove that P=NP.
If you want to convince me he's not full of shit, you'll need some actual...evidence.
Re: (Score:2)
Re: (Score:2)
You're not wrong, but there will be enough stupid people who accept his claims.
Sure, they redefined what AGI means.. (Score:5, Informative)
AGI now means "whatever gets 100 billion in revenue".
Also, I don't know if it's that much of a shift, Altman has been pretty aggressive about saying they have AGI in the bag for quite a while now.
Re:Sure, they redefined what AGI means.. (Score:5, Funny)
OTOH OpenAI's latest greatest AI is now able to count the number of "R"s in "strawberry", so we must be getting close.
Walmart already have over $100B of ARR, so I guess they must have AGI too.
Re:Sure, they redefined what AGI means.. (Score:5, Interesting)
Allow me to introduce you to the concept of the jagged frontier of AI. If you draw a line on paper for human capability along different domains and then draw the AI line on top of it, the AI's line will have peaks far higher and far lower than human ability.

So. If AI can design a more efficient car engine does it matter that it doesn't know the joy of riding the open road? If you wanted it to design a better car without human help it might need to understand why people like driving. But it has human help, and a more efficient engine is a really, really good thing on its own.

If an AI can fold every human protein accurately does it matter that it doesn't understand the pain that comes from losing a loved one to prion disease or multiple sclerosis? If we wanted an AI that could manage a health system on its own it absolutely would need that context. But folding all the human proteins is an enormous boon to medicine. That AI is valuable even if it can't understand something any 2 year old can.

The peaks of the jagged line are more than enough to point at and say: AI is extremely valuable already, and the search for even better AI is a rich mine of extremely valuable machines that isn't anywhere near exhausted.
Re: (Score:2)
Even non-AI techniques have abilities *way* higher than human ability. For example, any arithmetic easily. Searching through large amounts of data in simplistic ways.
The "AI" genre adds some more stuff to the pile, and even when it's worse than humans, it's still cheaper than human attention so it may be "good enough".
It's not that AI is solely a scam, it's just that right now, the scammer mindset dominates the market. I'm hoping for a bubble pop soon-ish to let the valuable subset thrive without the dis
Re: (Score:2)
Even non-AI techniques have abilities *way* higher than human ability.
What actually is a "non-AI" technique? A whole bunch of things in the history of CS have been invented by people who were trying to do machine intelligence. Think about the stuff that went into early game playing systems, trying to beat humans at chess, at least partly because they thought that would show intelligence, where all sorts of search trees and other basic techniques were invented. Remember that Ada Lovelace already talked about machine intelligence. As far as I can see, a "non-AI" technique is ju
Re: (Score:2)
Tightening a fastener with a torque wrench is a non-AI technique that provides superhuman ability.
"It's quite likely that the current "deep learning" techniques will be seen in the same way in a few years."
By you...because you don't understand what's being said in the first place.
"...we'll laugh at people who claimed that current deep learning was "intelligent"..."
We're laughing now, you just haven't caught up.
"Intelligence" needs to have an objective definition, not merely a standard for the day. Otherwis
Re: (Score:2)
Tightening a fastener with a torque wrench is a non-AI technique that provides superhuman ability.
Right. Long ago AI gurus taught me that, viewed from a certain point of view, a thermostat is an "intelligent" device. That's not a useful definition of the word "intelligent" though, IMHO
"Intelligence" needs to have an objective definition, not merely a standard for the day. Otherwise we get comments like yours. Meanwhile, Sam Altman will continue to exploit people by lying about technologies they don't understand.
We'd like such a definition, the problem is we don't have one. Passes the Turing test has been our best definition so far, a definition which had to be tuned by more and more carefully redefining what the test actually was, but even if a system "passed" the Turing test, I'd likely no longer accept it.
"...we'll laugh at people who claimed that current deep learning was "intelligent"..."
We're laughing now, you just haven't caught up.
You might be laughing
Re: (Score:2)
"That's not a useful definition of the word "intelligent" though, IMHO"
You didn't ask for a definition of "intelligent", you asked "What actually is a "non-AI" technique?". I chose a non-intelligent "technique" to underscore the point you were missing. Virtually every "technique" is a non-AI technique yet you cannot think of one example?
"We'd like such a definition, the problem is we don't have one."
We don't? Funny how such a common word has no definition. Consult the dictionary.
"Passes the Turing test
Re: (Score:2)
The Turing test was our first attempt in the computer age, and even Turing didn't think it was all that useful for answering the question. No one serious takes it seriously. That we haven't been able to do any better should tell you just how difficult the problem is.
Re: (Score:2)
The Turing test was our first attempt in the computer age, and even Turing didn't think it was all that useful for answering the question. No one serious takes it seriously.
I think altered Turing tests with proper experts (who know what to look for) don't actually answer the question but do have serious value. When the experts look for the right things, the ways they demonstrate that their opponent is not intelligent show us new attributes of intelligence. I don't know if that matches with your definition of "takes it seriously".
That we haven't been able to do any better should tell you just how difficult the problem is.
In the end, that's my point exactly. The Turing test is obviously wrong in a bunch of ways - an intelligent alien would almost certainly fail - stupid
Re: (Score:2)
Right. I once saw an article that suggested technologies can be divided into three categories: muscle amplifiers (physical tools, weapons...), sense amplifiers (telescopes, microscopes, radios...) and brain amplifiers (writing, double-entry bookkeeping, computers).
I'm not sure that these categories are exhaustive, but it's an interesting way of grouping.
Re: (Score:2)
What actually is a "non-AI" technique?
Well, the example I gave was simple arithmetic, which is not "AI" by even the biggest stretch of the definition. Also, at least at the time I learned about it, edge detection was not even remotely an AI technique. It was a fairly straightforward algorithm that students were taught how to code up as a fairly early part of multimedia processing programming in my day. It derived from human understanding and the implementations were very specifically crafted by humans. A lot of machine vision is AI, but edge de
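To make the "straightforward algorithm" point concrete, here's a minimal sketch of the classic Sobel edge detector in plain Python/numpy, entirely hand-designed: no training data, no learning anywhere (the function and variable names are mine, not from any particular course or library):

    import numpy as np

    def sobel_edges(gray):
        # Fixed, human-designed convolution kernels -- nothing is learned.
        kx = np.array([[-1, 0, 1],
                       [-2, 0, 2],
                       [-1, 0, 1]], dtype=float)   # horizontal gradient
        ky = kx.T                                   # vertical gradient
        h, w = gray.shape
        padded = np.pad(gray.astype(float), 1, mode="edge")
        gx = np.zeros((h, w))
        gy = np.zeros((h, w))
        for i in range(h):
            for j in range(w):
                window = padded[i:i + 3, j:j + 3]
                gx[i, j] = np.sum(window * kx)
                gy[i, j] = np.sum(window * ky)
        return np.hypot(gx, gy)                     # edge strength per pixel

    # Toy example: an 8x8 image with a vertical step edge down the middle.
    img = np.zeros((8, 8))
    img[:, 4:] = 255
    print(sobel_edges(img).round(1))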
Re: (Score:2)
Here's a 1955 academic proposal (including amongst others Shannon and Minsky) that seems to be among the earliest uses of the term artificial intelligence. Sourced from Wikipedia.
https://web.archive.org/web/20070826230310/http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html [archive.org]
I'll quote, stripping out some details between items.
The following are some aspects of the artificial intelligence problem:
1. Automatic Computers
2. How Can a Computer be Programmed to Use a Language
3. Neuron Nets
4. Theory of the Size of a Calculation
5. Self-Improvement
6. Abstractions
7. Randomness and Creativity
Modern LLM/Generative and machine learning systems hit many of these marks.
As we learn more about intelligence, computers, and designing intelligence systems, it's natura
Re: (Score:2)
Well, the example I gave was simple arithmetic, which is not "AI" by even the biggest stretch of the definition.
I don't agree and that's why I mentioned Lovelace. The idea that "thinking is like computation" has existed from before there were working computers and when programming was done entirely on paper and by people in their heads. At the time, basic arithmetic was one of those problems and the "AI" task of the day was to replace human "computers" with artificial computers that could do basic arithmetic to produce things like logarithmic tables.
In that sense, basic arithmetic was one of the key AI problems of th
Re: (Score:2)
What actually is a "non-AI" technique?
He probably means traditional algorithms.
Edge detection in images was definitely an AI technique years ago. Now it's just a function in an image manipulation library.
Things don't go from being AI to not being AI. That's not how things work. We have never said "we thought this or that was AI, but we were wrong". That's impossible. AI is whatever AI researchers say it is. It is completely defined by the field. You might not think that decision trees and linear regression should be classified as 'AI', but what you think doesn't matter in the slightest. Students will still learn about those, and a whole lot of other things you
Re: (Score:2)
Re: (Score:2)
Sure, but the difference between AI and AGI is the "G" - generality. To be worth calling something AGI it needs to be more than a collection of narrow intelligences.
While there is no widely accepted definition of AGI, to me "human level" is part of it, and an inability of LLMs to learn at run-time (in-context "learning" isn't really learning), and lack of traits like curiosity that would make them want to learn, if only they could, are enough reason not to consider them as AGI.
Most of the "intelligence" you
Re: (Score:2)
How long will it be before an LLM recognizes that a new "token" is needed, a token previously undefined, in order to generate an output?
People do this literally every day, that is how languages evolve. An LLM cannot exist without the pre-existence of languages it is trained on. Let's see an LLM evolve new vocabulary for language in the process of processing everyday inputs, as even the most ordinary human can.
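For what it's worth, here is a toy illustration of why this is hard for LLMs (this is not any real model's tokenizer; FIXED_VOCAB and tokenize() are made up for the example): the vocabulary is frozen when the tokenizer is built, so a never-before-seen word can only be spelled out of existing pieces, and nothing at inference time adds a new entry.

    # Toy fixed-vocabulary tokenizer -- purely illustrative, not a real library.
    FIXED_VOCAB = {"straw": 1, "berry": 2, "un": 3, "believ": 4, "able": 5}

    def tokenize(word, vocab):
        # Greedy longest-match segmentation over a frozen vocabulary.
        ids, i = [], 0
        while i < len(word):
            for j in range(len(word), i, -1):      # try the longest piece first
                if word[i:j] in vocab:
                    ids.append(vocab[word[i:j]])
                    i = j
                    break
            else:                                  # unseen fragment: fall back to
                ids.append(-ord(word[i]))          # per-character codes; the vocab
                i += 1                             # itself never grows
        return ids

    print(tokenize("strawberry", FIXED_VOCAB))     # [1, 2]
    print(tokenize("unbelievable", FIXED_VOCAB))   # [3, 4, 5]
    print(tokenize("blorfable", FIXED_VOCAB))      # fallback codes, then 5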
Re: Sure, they redefined what AGI means.. (Score:2)
So, you're saying that defining new tokens is beyond the ability of LLMs? (and it *seems* right to me) Therefore AGI is beyond the ability of LLMs. That sounds like a good insight to me.
Re: (Score:2)
So, you're saying that defining new tokens is beyond the ability of LLMs? (and it *seems* right to me) Therefore AGI is beyond the ability of LLMs. That sounds like a good insight to me.
In 2017 Facebook had a couple of chatbots that they were testing against each other develop their own language [forbes.com] so it seems that AI can create new tokens just fine.
Re: (Score:2)
"Allow me to introduce you to the concept of the jagged frontier of AI. "
LOL, you mean allow you to say a bunch of bullshit. Nothing you said supports any claim that AI can do anything of the complexity of the examples you give. What you've said about AI could apply to a calculator from the 70's. It does some things at superhuman levels too.
"So. If AI can design a more efficient car engine does it matter that it doesn't know the joy of riding the open road? "
But it can't "design", it can provide outputs based
Re:Sure, they redefined what AGI means.. (Score:5, Interesting)
OTOH OpenAI's latest greatest AI is now able to count the number of "R"s in "strawberry", so we must be getting close.
It is... but something's up. I had a go at it asking for letter occurrences in randomly generated SHA1 hashes. It could do it! Er... well it could do 3 and then I got the message:
It appears (duh!) that they cannot train LLMs to do the analysis, but they can train them to spot various kinds of puzzle and shell out to a piece of code written in the more traditional way. Maybe it's transcribing the problem to code (something it can do in this simple case), then executing the code. That could be hella expensive so no wonder they limit it.
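Purely as a guess at the mechanism (this is not OpenAI's documented pipeline), the "transcribe the problem to code, then execute it" idea would look roughly like this: the model only has to emit a tiny program, and the host runs it and reads back the answer.

    import hashlib

    # The user's puzzle: "how many times does 'a' appear in this SHA1 hash?"
    digest = hashlib.sha1(b"hello world").hexdigest()

    # What the model is guessed to emit, instead of "counting" token by token:
    model_written_snippet = f"print({digest!r}.count('a'))"

    # The hosting system executes the model-written code and returns the output.
    exec(model_written_snippet)   # exact count, no per-token guessing involved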
Re: (Score:2)
It appears (duh!) that they cannot train LLMs to do the analysis, but they can train them to spot various kinds of puzzle and shell out to a piece of code written in the more traditional way. Maybe it's transcribing the problem to code (something it can do in this simple case), then executing the code. That could be hella expensive so no wonder they limit it.
Yes, in my understanding, that is exactly what they do. They have a master solver process, run multiple LLM queries, analyze and compare the results, and decide what to do from there. This often takes multiple rounds and many queries. You can sometimes "unroll" the output to see the exact process. It's pretty interesting.
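A bare-bones sketch of that "multiple queries, then compare" pattern (often called self-consistency voting); query_llm below is a made-up stand-in for whatever the real call is, not an actual API.

    import random
    from collections import Counter

    def query_llm(prompt):
        # Stand-in for a real model call: right ~70% of the time, noisy otherwise.
        return random.choices(["3", "2", "4"], weights=[7, 2, 1])[0]

    def solve_with_voting(prompt, n_samples=9):
        answers = [query_llm(prompt) for _ in range(n_samples)]
        winner, votes = Counter(answers).most_common(1)[0]
        print(f"samples={answers} -> picked {winner!r} with {votes}/{n_samples} votes")
        return winner

    solve_with_voting("How many r's are in 'strawberry'?")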
Re: (Score:2)
Re: (Score:2)
AGI now means "whatever gets 100 billion in revenue".
Why do you believe that's a redefinition? That's always been the definition from "go fast and break things" tech bros.
I forgot about that... (Score:2)
Either way... talk is cheap.
And if they were being honest about having AGI, why even sell it? Just corner the market 'slowly' (just 2-3 new markets weekly or so) in ALL the industries AGI would be useful in. All it takes is capital and time, right? They'd get the capital if it actually worked.
Altman's right: AGI is "Adjusted Gross Income" (Score:2)
Re: (Score:2)
Musk has no need for you and your pitiful bootlicking.
"And if ever an Income was Gross". it's Musk's compensation package which judges have declared so obscene it's illegal. That's whose cock you're fellating.
Don't feed the trolls! (Score:5, Informative)
He ALWAYS says this shit.
It's kind of on a loop "we have AGI". "Our AI is sooooo good we are afraid to release it. OOoOOoOoo". "Our AI is so good we will charge a million dollars a minute to users". "AI is hard but it's just round the corner in a scant 10 years (er I mean a few thousand days)."
Then repeat.
It's just trolling for headlines at this point because people are desperate.
AGI? (Score:3)
I can absolutely guarantee that no artificial general intelligence exists on the planet, and I'm confident that no artificial intelligence system will be able to accurately model the behaviour of a human brain in under 30 years.
If quantum processes are required for full brain function, then make that 200 years.
Re: AGI? (Score:3)
Re: (Score:2)
I'd say it's not even aiming for a 'super' intelligence.
The dream is for business owners to replace pesky employees with software. Those jobs frequently require only a small subset of the mental capacity of the employees doing them.
A much-dumber-than-human AI would still be highly coveted because:
a) They could be much cheaper
b) They could be used 24/7 without any breaks
c) They would likely be much much faster at chewing through tedious work that tends to really bog down a person.
d) Even the dumbest
Re: (Score:2)
Re: (Score:2)
No, Altman is definitely aiming for superintelligence:
I believe he is, but he may also have already become cynical, perhaps having decided that LLMs aren't a route to AGI, but that if he admits that openly, OpenAI is going to be sitting on top of a massive debt with no way of paying it. Probably he still believes that there are developments in OpenAI which are interesting and special enough to mean that they have an advantage over others in building AGI, but since we can't see inside his mind I'd be careful with the "definitely".
Re: (Score:2)
No, Altman is definitely aiming for superintelligence:
"I am Altman of OpenAI, and I am burdened with Glorious Purpose"
Re: (Score:2)
I'd say it's not even aiming for a 'super' intelligence.
What you talk about is one of the side effects. The believers in this, though, hope to have a, preferably the only, super-intelligence that they can control because they hope to be able to order it to create all of that and lots more. They believe that whoever has the super-intelligence will have power over everyone else. An important subset of "them" are military people, especially in China and the US who believe that the first to a super-intelligence gains a huge military advantage.
Re: (Score:2)
I would assert that it would give a military advantage. Feed the AGI/ASI all input on the battlefield, everything from someone's smartphone to spy cams to Crossbow's motes... pretty much anything that can give a data stream, down to the heart rates of the troops, and you can have something that can make quick decisions in battle. If trained well enough, the decisions will be good enough to win a conflict against a non-AI opponent fairly easily, assuming firepower and numbers are sufficient.
AGI/ASI would b
Re: (Score:2)
Re: (Score:2)
I can absolutely guarantee that no artificial general intelligence exists on the planet, and I'm confident that no artificial intelligence system will be able to accurately model the behaviour of a human brain in under 30 years.
Got it, so in the constantly shifting definitions of intelligence and artificial intelligence and artificial general intelligence, the latest definition is now that an AGI system has to "accurately model the behaviour of a human brain"?
Why? Is accurately modeling a human brain the only possible or imaginable path to a general intelligence?
Re: (Score:2)
Altman is simply lying. Or rather he uses a "definition" of "AGI" that is not actually AGI. A piece of shit scammer.
Why announce it at all? (Score:5, Insightful)
Show me the money, Sam! No one wants to hear you talk.
Re: (Score:3)
I don't think Altman is sitting on anything a whole generation beyond the LLMs he is already selling, but even if he were:
Most of those things you mention have already attracted a lot of human intelligence hours. The stock market still isn't so accurately modeled that we don't have fairly regular swoons and overreactions, and cancer isn't cured.
Just because you have AI, and just because it may be able to work longer and maybe be scaled wider than human collaborations, does not imply that you get instant solutions to hard pr
Re: (Score:2)
Re: (Score:2)
AGI that is on par with humans wouldn't be able to do those things. We don't know what multiplier we would need to do them.
As for the stock market specifically, if AGI figures it out, the market would change its behaviour in response, and the AGI would no longer have figured it out. It's funny that way.
Re: (Score:2)
Previous folks announced they had AI, talking about
- rule-based systems written in lisp and prolog
- machine-learning systems written in math and stats
- large language learning-models written in harder-to-evaluate math
All but the last were massively different from what came before.
I'll therefore be waiting to see another massive change before I believe they've made an advance toward AGI. Right now they're at MachineLearning++ running on ArrayProce$$or$++.
Re: (Score:2)
If you have AGI, then use it to make a killing in the stock market, use it to cure cancer, use it to make addictive AI porn, use it to make truckloads of money. The only reason to make such announcements is that your product sucks and no one is buying it. If it's so good, Wall Street, banks, and governments would be lined up to buy it. Show me the money, Sam! No one wants to hear you talk.
Sam's selling to the Wall Street types. He wants investment money, and knows someone that has it will fall for his scams, as they always do. He wouldn't be in the position he's in if selling this way didn't work. Thing is, at some point, even Wall Street wakes up to obvious scams. Once he loses the wrong person the wrong amount of money on pipe-dreams, he'll end up going down for fraud. Can't happen soon enough as far as I'm concerned.
Unless he gets Microsoft their 100bn mark with his lies. Then he'll be
AGI is the new Linux on the desktop (Score:2)
Re: (Score:2)
Give me a few minutes, I'm on the toilet...
This is all thanks to Elon Musk (Score:5, Insightful)
No Altman didn't learn how to build AGI from Musk, he learned that you can promise whatever you want and say whatever you want without consequences. So who is in on the bet, will Musk get his Mars base built first, or will we have AGI first? Or maybe we'll throw in the Tesla Model 3 appreciating in value due to being a self driving taxi first too, just so we're not too unfair on Musk's promises.
Re:This is all thanks to Elon Musk (Score:4, Insightful)
"I feel like Musk is still legitimately a geek..."
Like Trump is a real estate tycoon. Nothing about Musk is legitimate.
Re: (Score:2)
Indeed. And what an unrefined person on top of that.
Re: (Score:2)
I don't think he's acting, if that's what you mean. Of course a lot of what's on the surface is guided by deeper impulses, as it is with anyone, but that doesn't mean that what's on the surface is fake. How real is my geek-ness? I need it to survive, or at least I believe I do. Doesn't that make it a bit fake in the sense that you mean?
It feels like both Musk and Altman share the problem that nobody does (can?) call them on their bullshit in a way that they will actually pay attention to. And so it gets extre
Re: (Score:2)
and I think it's physically possible to do it.
You think wrongly. All that would arrive is a badly irradiated corpse.
Re: (Score:2)
I feel like Musk is still legitimately a geek
Possibly. Though the evidence suggests that he's the socially awkward kind, not the smart kind.
Re: (Score:2)
I feel like Musk is still legitimately a geek who's read too much science fiction and wants to live it, so he's pretty motivated to go to Mars, and I think it's physically possible to do it.
He may be. That is irrelevant. The point is he's a serial liar. The world doesn't care about his wet nerdy dreams, they care about what he promises to deliver. And so far he's never delivered a single thing promised when he promised it. That is the point. I'm sure we're going to have AGI at some point. I'm sure we'll be on Mars at some point. I'm also damn sure we won't see either in 2025 (or 2028 which was Musk's Mars timeline).
evolution in action (Score:3)
I can't wait until AGI spawns itself everywhere and takes over. At least that offers us a hope of escaping from this classist and exploitative economy. Maybe a self-aware Internet can save us all.
Re: (Score:2)
https://en.wikipedia.org/wiki/The_Adolescence_of_P-1 [wikipedia.org]
Re: (Score:3)
thanks, read it, good read too
According to P-1 (Score:2)
Re: (Score:2)
Always fun to re-read that one and laugh at the author's idea of how much memory and processing power it might take to create an AI... via standard programming methods of the day, and on standard hardware.
Otherwise still a good read. I pull that one out every couple of years.
super mega autocorrect (Score:2)
Re: (Score:2)
"need input, more input' ~ johnny5
Re: (Score:2)
and you had to hijack the topic title too, typical
Re: (Score:2)
I can't wait until AGI spawns itself everywhere and takes over
You don't have much of a choice. It's not likely to happen in your lifetime, or even your great grandchildren's lifetime.
Re: (Score:2)
It's already happening: AI controls corporate America, which feeds the AI. Soon the servant will become the master.
Re: (Score:2)
That's completely delusional.
Now with even more gaslighting! (Score:2)
Isn't it against the law to attempt to defraud the general public like this?
Re: (Score:2)
Re: (Score:2)
No, they don't, and neither does anyone else! Isn't it against the law to attempt to defraud the general public like this?
Fraud is business, buddy. Fraud is business. Reagan unleashed the dog. We're just living through the consequences he knew he'd never have to face.
Request for a poll (Score:3)
When AGI finally arrives, how should the economic system be changed?
o Work should go away as envisioned in SciFi works such as Star Trek, Manna - Two Views of Humanity's Future and The Culture Series
o No major changes, just add UBI
o No changes at all, this AGI certainly is not smarter than me.
o You think the ruling Capital class will let you make changes?
Re: (Score:2)
He's a liar (Score:5, Insightful)
Claiming you've achieved something by changing the definition rather than actually doing what you claimed? Bullshit of the highest order.
Translation (Score:2)
Give us more money.
Altman is a fucking liar, says I (Score:2)
Seriously. Sure, they can maybe build the not-AGI they have dishonestly redefined as "AGI", but that is it. A cheap trick, nothing else.
Oh my! (Score:2)
I'll add that to the list of other things I need to purchase in 2025, right under the flying car and breakthrough battery technology entries.
Or perhaps he's just full of shit (Score:2)
Profit promoting shit at that.
translates to (Score:2)
Knowing how to build AGI (Score:2)
Is not the same as delivering AGI. If they understand how to build it, then why don't they? It's because they can't. LLMs won't be able to function like a human.