

What is AGI? Nobody Agrees, And It's Tearing Microsoft and OpenAI Apart. (arstechnica.com)
Microsoft and OpenAI are locked in acrimonious negotiations partly because they cannot agree on what artificial general intelligence means, despite having written the term into a contract worth over $13 billion, according to The Wall Street Journal.
One definition reportedly agreed upon by the companies sets the AGI threshold at when AI generates $100 billion in profits. Under their partnership agreement, OpenAI can limit Microsoft's access to future technology once it achieves AGI. OpenAI executives believe they are close to declaring AGI, while Microsoft CEO Satya Nadella called using AGI as a self-proclaimed milestone "nonsensical benchmark hacking" on the Dwarkesh Patel podcast in February.
Dramatic Headline? (Score:5, Insightful)
First of all, it is not tearing either company apart.
Second, Microsoft is looking for a business. Use. Case. Asking the hard questions and concluding that OpenAI is full of crap for most stuff.
OpenAI, OTOH, desperately needs to get under a corporate umbrella.
Microsoft will dictate the terms, OpenAI needs to save face, and MS knows the longer these "negotiations" go on, the cheaper OpenAI will be. The emperor never had any clothes, and MS got a peep show to see the truth.
Re: (Score:2)
Let's hope Microsoft isn't buying OpenAI. OpenAI is dangerous at its size as a new big tech company, but Microsoft owning it would be way worse.
Re: (Score:2)
Microsoft buying OpenAI only gets it a slightly larger customer base. They already have a similar level of technology, their own training corpus (Bing!), and so on.
OpenAI does not dictate AGI (Score:3)
I bet OpenAI is realizing they've hit some bump in achieving actual AGI.
If they don't reach it, does Microsoft essentially come away with a perpetual license for all OpenAI stuff? That doesn't seem fair, but maybe it's binding?
The definition of AGI aside, seems like an interesting court case.
WTF does AGI have to do with generating profits? (Score:2)
-Something that can learn and build compressed models and reason in arbitrary new domains.
-Something that can use analogy / isomorphism to extend knowledge gained in one domain or situation or task to another.
-Something that can build over time, both specific episodic memories and generalized models including situation models, and including mathematics, in multiple domains, and can build associative memory which includes information both on specific domains and thei
Re: (Score:2)
Not so easy. There's a saying, "if you're so smart, why aren't you rich?".
It's quite reasonable to ask AI-touting blowhards to put their money where their mouth is. In this case, if their "AGI" can't make as little as $100B in one year, then their other claims about being superior to humans in science and technology are clearly suspect too. You might say it's hard, but we already have one example of a human making $100B in a year, and he's definitely no genius.
Re: Easy peasy (Score:2)
Re: (Score:2)
Re: OpenAI does not dictate AGI (Score:2)
Re: (Score:3)
AGI is not jargon. It's to distinguish an important milestone from the original term, "AI," that has become a marketing gimmick as opposed to the technical term.
It's kind of like how European scholars have to distinguish liberalism from classical liberalism because US marketers muddied the original term.
None of this is AI, and LLM natural language prompting is a loser of an application.
There are lots of better uses for large scale pattern recognition, and once we get over how human a LLM actually sounds as a
Re: (Score:2, Troll)
AGI is not jargon.
What? Yes, of course it is.
It's to distinguish an important milestone from the original term, "AI," that has become a marketing gimmick as opposed to the technical term.
Yes, it's a technical term used for technical reasons, which is why it's jargon.
Re: (Score:2, Troll)
AGI is most certainly jargon. AI first began development in the 1950s and many goals of AI have been achieved. John McCarthy (of the Stanford AI Lab, and one of the field's founders) used to joke that once something was achieved by AI, it wasn't considered AI anymore.
Beating a chess master was once thought unthinkable but it happened almost 30 years ago. Similarly, many key AI focus areas, such as natural language processing and machine translation have made enormous strides just in the last two decades. Of
Re: (Score:2)
Beating a chess master was once thought unthinkable but it happened almost 30 years ago.
That's because we used to think that playing chess well required a sentient human-level intelligence. It turns out that no, you just need a good algorithm and a boatload of data. It's not that the goal posts for AI have moved, it's that our understanding of the problem domain has changed. Chess just isn't a good litmus test for general intelligence.
I don't think we really have a good scientific understanding of what constitutes "general intelligence", but we know it when we see it, and I think most peopl
Re: (Score:2)
Plenty of people do really stupi
Re: (Score:2)
John McCarthy (of the Stanford AI Lab, and one of the field's founders) used to joke that once something was achieved by AI, it wasn't considered AI anymore.
That's more than a joke. It's an actual valid result and is a key part of the science of AI. We don't know what intelligence is, but we do know a bunch of features of intelligent systems like people, cats, crows and octopuses so we create something new and we see if we can replicate those intelligent systems.
So far, each different system we created was not intelligent in fundamental explainable ways. Since it wasn't intelligent, we know that whatever technique we just tried is not, in itself, a recipe for i
It's ... (Score:3)
Re: (Score:2)
Yeah *that's* an AI product worth paying money for!
The Quiet Part Loud (Score:2)
One definition reportedly agreed upon by the companies sets the AGI threshold at when AI generates $100 billion in profits.
Wow, this is a little too on the nose, isn't it?
Re: (Score:2)
Re: (Score:2)
Yes. I cannot see how making money from something corresponds to its quality.
In a society that has decided the only purpose of human existence is the acquisition of profit, I can see why some would want to define all things in terms of profit. I don't think it's a correct view of things, but it certainly seems to be the one our society values.
Re: (Score:2)
If they ever do achieve AGI, which I very much doubt they will, making money off it will constitute slavery.
Re: (Score:2)
If they ever do achieve AGI, which I very much doubt they will, making money off it will constitute slavery.
I doubt very much that the owners will care, so long as the public doesn't get outraged over it.
Re: (Score:2)
It corresponds to our national religion, prosperity theology.
Re: (Score:2)
You're confusing motivation with intelligence. They are separate things.
OTOH, the definition given is silly.
An AGI would be something that could learn anything. Such a thing is probably impossible. Certainly people don't meet that measure.
Re: The Quiet Part Loud (Score:2)
Re: (Score:2)
Yeah no kidding! It's hilariously dark
Whose accountants? (Score:2)
Re: (Score:2)
Who Is This Guy? (Score:1)
I'd never heard of Dwarkesh Patel before seeing the Satya Nadella interview. But, he's done podcasts with some big names. Who is he and what grants him access to these people?
Re: (Score:2)
Who is he and what grants him access to these people?
Only one thing matters, active subs, granting access to eyeballs (and earballs, I guess.)
Income as IQ? (Score:2, Insightful)
Deciding that "general intelligence" can be determined by how many billions in profits something makes is why some idiots think Elon Musk is smart.
Vagueness is a feature (Score:2)
20 years ago AI was defined as ... (Score:1)
... whatever is 10 years in the future, i.e. something that will never be achieved because the goalposts are always moving.
Why do we care? (Score:2)
Since there's no generally accepted definition or concept for AGI, why do we care? Different companies, research groups, etc. are working in different AI fields and on different use cases, and very few of them work on AGI, whatever that might mean.
Perhaps the one practical definition of AGI is the concept associated with AI that is intended as clickbait.
It's fraud. (Score:2)
Don't get me wrong, what exists now can already be used as an unreliable tool, but calling it Intelligence is like claiming Frozen Dairy Dessert is Ice Cream.
Re: (Score:2)
"Vegan Ice Cream" is a pet peeve of mine because Cream is not Vegan.
No, it's not. (Score:2)
What is tearing Microsoft and OpenAI apart is that they lack a formal definition of AGI in their contract. The result is that OpenAI can declare anything to be AGI and give Microsoft the boot at a moment's notice. The source of conflict is purely contractual, not ideological.
AGI definition (Score:4, Funny)
Re: (Score:2)
AND at that point it needs to pay taxes too.
M$ v. OpenAI: Custody of the Ghost in the Machine (Score:2)
Jesus fucking christ. “AGI” used to mean generalization without retraining. Now it means $100 billion in revenue. That shift alone should terrify you more than any sci-fi doomsday scenario. OpenAI and Microsoft are in a knife fight over a clause that says, once AGI is achieved, OpenAI can withhold tech from Microsoft. Sounds fair—except no one agrees on what AGI is. So they pinned it to profit. That’s right: AGI is now defined not by cognition, or consciousness, or autonomy
Re: (Score:2)
It's about money (Score:2)
Microsoft would be more believable if they knew (Score:2)
$100 billion isn't much anymore (Score:2)
AGI is the mcguffin (Score:2)
Defining Human Ignorance. (Score:2)
One definition reportedly agreed upon by the companies sets the AGI threshold at when AI generates $100 billion in profits.
Great. So when Al-Sex-A the Amazing Analbot hits a billion in sales all spanks in part to the AI-enhanced chat-sex-bot that helped promote the marketing, humanity will magically be gifted with the almighty AGI based on this promotional definition.
Leave it to the race wholly infected with the Disease of Greed to reduce a crowning achievement in technology down to a fucking number in the bottom right hand corner of some fucking spreadsheet locked in the bottom drawer of a file cabinet in the basement with a
Turing Test? (Score:2)
Lock a number of your "AI" agents and actual people inside impenetrable boxes. Then, make them answer questions directed at them from the audience. Real-life, unusual, incomplete, quirky questions from across all walks of life. If after enough iterations you can't tell the difference between real people and AI, you've achieved AGI.
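The pass criterion in this proposal boils down to a statistical test: across many blind trials, the audience's hit rate at telling AI from human should be no better than chance. A toy sketch of that criterion (the trial count, margin, and simulated judge are all illustrative assumptions, not a real evaluation protocol):

```python
import random

def indistinguishable(judge_correct, trials, chance=0.5, margin=0.05):
    """Crude pass criterion: the judges' hit rate over many blind
    trials stays within `margin` of pure chance (`chance`)."""
    return abs(judge_correct / trials - chance) <= margin

# Simulate 1000 blind trials where the judges can only guess at
# random, i.e. the AI is perfectly indistinguishable from the humans.
random.seed(42)
trials = 1000
correct = sum(random.random() < 0.5 for _ in range(trials))
print(indistinguishable(correct, trials))

# A judge who is right 70% of the time clearly can tell them apart.
print(indistinguishable(700, trials))  # False
```

In practice you would use a proper binomial significance test rather than a fixed margin, but the idea is the same: "can't tell the difference" is a claim about hit rates, not a single impression.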
Re: (Score:2)
Re: Turing Test? (Score:2)
That's why I said "a number". A diversified set.
We'd also give them logical tests, not knowledge tests.
Knowledge can often pass as intelligence. It's not. Intelligence is about independently solving new, unknown problems.
Re: Turing Test? (Score:2)
Capable of answering "Don't Know" (Score:2)
Re: (Score:2)
Re: (Score:2)
It's a statistical engine, and it hasn't been trained that "Don't Know" is the correct answer when its statistics fail it (that would require training it to answer "Don't Know" more than anything else, for every possible question it has no data for, which is why it hasn't happened).
Instead it hallucinates, based on some spurious thing being a tiny fraction of a percent "more likely" by some statistical correlation.
These things are just statistical boxes, it has no way to do anything else. And it's inab
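The abstention idea above can be sketched as a simple confidence threshold on a model's output distribution. This is a toy illustration of the concept (the logits, labels, and threshold are made up, and no production LLM decides this way):

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of raw scores.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def answer_or_abstain(logits, labels, threshold=0.6):
    """Return the most likely label, or "don't know" when the
    top probability falls below the confidence threshold."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=lambda i: probs[i])
    if probs[best] < threshold:
        return "don't know"
    return labels[best]

# Confident case: one answer dominates the distribution.
print(answer_or_abstain([4.0, 0.5, 0.2], ["Paris", "Lyon", "Nice"]))  # Paris
# Uncertain case: probability mass is spread out, so abstain.
print(answer_or_abstain([1.0, 0.9, 0.8], ["Paris", "Lyon", "Nice"]))  # don't know
```

The hard part, as the comment notes, is that nothing in standard next-token training teaches the model *when* its own statistics have failed it, so a fixed threshold like this is at best a crude proxy.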
Re: (Score:1)
Mm, rarely, but I have gotten "I don't know"-type answers from LLMs.
AGI (Score:2)
We can't agree on what it is but we should all be able to agree on one thing:
We don't have it.