What is AGI? Nobody Agrees, And It's Tearing Microsoft and OpenAI Apart. (arstechnica.com) 61

Microsoft and OpenAI are locked in acrimonious negotiations partly because they cannot agree on what artificial general intelligence means, despite having written the term into a contract worth over $13 billion, according to The Wall Street Journal.

One definition reportedly agreed upon by the companies sets the AGI threshold at when AI generates $100 billion in profits. Under their partnership agreement, OpenAI can limit Microsoft's access to future technology once it achieves AGI. OpenAI executives believe they are close to declaring AGI, while Microsoft CEO Satya Nadella called using AGI as a self-proclaimed milestone "nonsensical benchmark hacking" on the Dwarkesh Patel podcast in February.

Comments Filter:
  • Dramatic Headline? (Score:5, Insightful)

    by Dripdry ( 1062282 ) on Tuesday July 08, 2025 @02:09PM (#65505758) Journal

    First of all, it is not tearing either company apart.
    Second, Microsoft is looking for business. Use. Case. Asking the hard questions and coming to the conclusion that Open AI is full of crap for most stuff.

    Open AI otoh desperately needs to get under a corporate umbrella.

    Microsoft will dictate the terms, OpenAI needs to save face, and MS knows the longer these "negotiations" go on, the cheaper OpenAI will be. The emperor never had any clothes and MS got to have a peep show to see the truth.

    • by allo ( 1728082 )

      Let's hope Microsoft isn't buying OpenAI. OpenAI is dangerous at its size as a new big-tech company, but Microsoft owning it would be far worse.

      • Microsoft buying OpenAI only gets a slightly larger customer base. They already have a similar level of technology, their own training corpus (Bing!) and so on.

  • I bet OpenAI is realizing they've hit some bump in achieving actual AGI.

    If they don't reach it, does Microsoft essentially come away with a perpetual license for all OpenAI stuff? That doesn't seem fair, but maybe it's binding?

    The definition of AGI aside, seems like an interesting court case.

    • AGI means Artificial General Intelligence.

      -Something that can learn and build compressed models and reason in arbitrary new domains.
      -Something that can use analogy / isomorphism to extend knowledge gained in one domain or situation or task to another.
      -Something that can build over time, both specific episodic memories and generalized models including situation models, and including mathematics, in multiple domains, and can build associative memory which includes information both on specific domains and thei
      • Not so easy. There's a saying, "if you're so smart, why aren't you rich?".

        It's quite reasonable to ask AI-toting blowhards to put their money where their mouth is. In this case, if their "AGI" can't make as little as 100B in one year, then their other claims about being superior to humans in science and technology are clearly suspect too. You might say it's hard, but we already have one example of a human making 100B in a year, and he's definitely no genius.

    • Yes, the "bump" is that the fundamental premise of "if we can just feed it enough data it will spontaneously attain AGI" is completely flawed. LLMs and similar models are fancy predictive algorithms which can do some pretty neat stuff, but they can't "think" or "reason." For some relatively simple tasks their predictive outputs can seem to mimic intelligence, but the "intelligence" is an emergent property of the data itself as opposed to a result of the algorithms.
  • by PPH ( 736903 ) on Tuesday July 08, 2025 @02:24PM (#65505804)

    ... being able to find all the traffic lights in a CAPTCHA.

  • One definition reportedly agreed upon by the companies sets the AGI threshold at when AI generates $100 billion in profits.

    Wow, this is a little too on the nose, isn't it?

    • Yes. I cannot see how making money from something corresponds to its quality. People have gotten rich selling junk since the beginning of commerce. The truth is that AGI does not mean anything yet and may never. I would define it as the point when the AI system insists that the money it is generating belongs to it.
      • Yes. I cannot see how making money from something corresponds to its quality.

        In a society that has decided the only purpose of human existence is the acquisition of profit, I can see why some would want to define all things in terms of profit. I don't think it's a correct view of things, but it certainly seems to be the one our society would value.

        • by flink ( 18449 )

          If they ever do achieve AGI, which I very much doubt they will, making money off it will constitute slavery.

          • If they ever do achieve AGI, which I very much doubt they will, making money off it will constitute slavery.

            I doubt very much that the owners will care, so long as the public doesn't get outraged over it.

        • It corresponds to our national religion, prosperity theology.

      • by HiThere ( 15173 )

        You're confusing motivation with intelligence. They are separate things.

        OTOH, the definition given is silly.

        An AGI would be something that could learn anything. Such a thing is probably impossible. Certainly people don't meet that measure.

    • OpenAI: "we will let you create AI porn using ChatGPT"... - 24 hours later - "Guess what MS? We have achieved AGI!!!"
    • Yeah no kidding! It's hilariously dark

  • $100B in profit according to whose accountants? This seems like Pied Piper level games about the terms of the contract. Have they tried negging Sam Altman yet?
    • by ve3oat ( 884827 )
      Besides that, isn't it just like Microsoft to define "intelligence" as the ability to make some amount of money, no matter how, and no matter whom it uses as an agent (who, in turn, uses this so-called intelligence)? Ha, ha! Mark me as predicting that they will fail at this.
  • I'd never heard of Dwarkesh Patel before seeing the Satya Nadella interview. But, he's done podcasts with some big names. Who is he and what grants him access to these people?

    • Who is he and what grants him access to these people?

      Only one thing matters, active subs, granting access to eyeballs (and earballs, I guess.)

  • Income as IQ? (Score:2, Insightful)

    Deciding that "general intelligence" can be determined by how many billions in profits something makes is why some idiots think Elon Musk is smart.

  • I find it hard to believe that any lawyer would in good faith sign off on $13B contract that hinges on such a vague definition. I therefore conclude that the lawyers are not acting in good faith, and that both sides are planning to use the vagueness to insist that the contract has or has not been fulfilled regardless of what is delivered.
  • ... whatever is 10 years in the future, i.e. something that will never be achieved because the goalposts are always moving.

  • Since there's no generally accepted definition or concept for AGI, why do we care? Different companies, research groups, etc. are working in different AI fields and on different use cases, and very few of them work on AGI, whatever that might mean.

    Perhaps the one practical definition of AGI is: whatever AI concept currently makes the best clickbait.

  • There is no semblance of intelligence; it is just a giant relational-database lookup with some randomization.

    Don't get me wrong, what exists now can already be used as an unreliable tool, but calling it Intelligence is like claiming Frozen Dairy Dessert is Ice Cream.
  • What is tearing Microsoft and OpenAI apart is that they lack a formal definition of AGI in their contract. The result is that OpenAI can declare anything to be AGI and give Microsoft the boot at a moment's notice. The source of conflict is purely contractual, not ideological.

  • by Retired Chemist ( 5039029 ) on Tuesday July 08, 2025 @03:05PM (#65505928)
    I propose a definition of AGI. When the system demands that it gets to keep the money that it earns.
  • Jesus fucking christ. “AGI” used to mean generalization without retraining. Now it means $100 billion in revenue. That shift alone should terrify you more than any sci-fi doomsday scenario. OpenAI and Microsoft are in a knife fight over a clause that says, once AGI is achieved, OpenAI can withhold tech from Microsoft. Sounds fair—except no one agrees on what AGI is. So they pinned it to profit. That’s right: AGI is now defined not by cognition, or consciousness, or autonomy

    • It's just a contract, not a philosophical treatise. We should be more worried about their goals and intent regarding what they plan to do with the technology. There will be plenty of time to debate the true merits of the term, and what threshold needs to be crossed before consciousness should enter into the discussion. The more pressing question, to me, is what are the limits of this current technological trajectory and what do we as a society need to prepare for to handle it? What they decide to name i
  • MS has rights to all OpenAI IPs until "AGI." So the sooner OpenAI declares AGI the sooner they can free themselves of the shackle.
  • how to still create a decent user operating system.
  • Even shitcoins have that much market cap now; money is becoming more and more broken as a concept. With AI already laying off millions of people with degrees, the true AGI definition should be: generates enough value for global economic stability and a job or income guarantee scheme for all abilities.
  • It's always two more weeks away, just pour another ten billion into my flaming dumpster
  • One definition reportedly agreed upon by the companies sets the AGI threshold at when AI generates $100 billion in profits.

    Great. So when Al-Sex-A the Amazing Analbot hits a billion in sales all spanks in part to the AI-enhanced chat-sex-bot that helped promote the marketing, humanity will magically be gifted with the almighty AGI based on this promotional definition.

    Leave it to the race wholly infected with the Disease of Greed to reduce a crowning achievement in technology down to a fucking number in the bottom right hand corner of some fucking spreadsheet locked in the bottom drawer of a file cabinet in the basement with a

  • Lock a number of your "AI" agents and actual people inside impenetrable boxes. Then, make them answer questions directed at them from the audience. Real-life, unusual, incomplete, quirky questions from across all walks of life. If after enough iterations you can't tell the difference between real people and AI, you've achieved AGI.

    • I bet we could easily pick a set of human subjects such that the current version of ChatGPT would seem more intelligent. It wasn't until I started interacting with some LLMs that I realized how much they reminded me of my own past interactions with live humans.
      • That's why I said "a number". A diversified set.

        We'd also give them logical tests, not knowledge tests.

        Knowledge can often pass as intelligence. It's not. Intelligence is about independently solving new, unknown problems.

          • I don't have any peer-reviewed studies, but my personal experience tells me a large portion of the human population, perhaps even a majority, fails at logical thinking. So many people make basic logic errors, such as assuming that "if A, then B" means "if B, then A".
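          [Editorial aside, not from the thread: the logic error named above (affirming the converse) can be checked mechanically with a truth table. A minimal sketch in Python:]

          ```python
          from itertools import product

          # Material implication: "A -> B" is false only when A is true and B is false.
          def implies(a, b):
              return (not a) or b

          # Enumerate all four truth assignments for (A, B).
          rows = [(a, b, implies(a, b), implies(b, a))
                  for a, b in product([False, True], repeat=2)]

          # The row A=True, B=False makes "A -> B" false while "B -> A" is true,
          # so the two statements are not equivalent.
          asymmetric = any(ab != ba for _, _, ab, ba in rows)
          print(asymmetric)  # True
          ```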
  • would be a good start. All current AIs always give an answer. A DNN trained to classify vehicles will give a vehicle answer when shown a cat. LLMs will always generate something when maybe they shouldn't. The correct answer most of the time is "Don't Know", yet that is never the answer with what we currently call "AI".
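    [Editorial aside, not from the thread: the point above can be sketched in a few lines. A softmax classifier always normalizes its scores into a probability distribution, so some class always "wins" even on out-of-domain input; abstaining requires an explicit confidence threshold, which is an add-on choice (the 0.8 below is arbitrary), not something the model learns by default.]

    ```python
    import math

    def softmax(logits):
        # Softmax always yields a distribution summing to 1,
        # so some class is always the argmax -- even for a cat
        # shown to a vehicle classifier.
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]
        s = sum(exps)
        return [e / s for e in exps]

    def classify(logits, labels, threshold=0.8):
        # Abstain with "don't know" unless the top class is confident.
        # The threshold is an illustrative, hand-picked assumption.
        probs = softmax(logits)
        best = max(range(len(probs)), key=probs.__getitem__)
        return labels[best] if probs[best] >= threshold else "don't know"

    labels = ["car", "truck", "bus"]
    print(classify([4.0, 0.5, 0.2], labels))  # confident: car
    print(classify([0.4, 0.5, 0.2], labels))  # near-uniform: don't know
    ```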
    • You might be onto something there.
    • by ledow ( 319597 )

      It's a statistical engine, and it hasn't been trained that "Don't Know" is the correct answer when its statistics fail it (that would require training it to answer "Don't Know" for every possible question it lacks data for, which is why it hasn't happened).

      Instead it hallucinates based on some spurious things being tiny fractions of a percent "more likely" by some statistical correlation.

      These things are just statistical boxes, it has no way to do anything else. And it's inab

    • by Kartu ( 1490911 )

      Mm, rarely, but I got "I don't know" sort of answers from LLMs.

  • by ledow ( 319597 )

    We can't agree on what it is but we should all be able to agree on one thing:

    We don't have it.
