AI

OpenAI Now Knows How To Build AGI, Says Altman (samaltman.com)

OpenAI CEO Sam Altman says the AI startup has figured out how to build artificial general intelligence (AGI) and is now targeting superintelligent systems that could transform scientific discovery.

In a blog post, Altman predicted AI agents could begin joining the workforce in 2025. He outlined plans to develop AI systems surpassing human-level intelligence across all domains. "We are now confident we know how to build AGI as we have traditionally understood it," wrote Altman.

The statement represents a significant shift as major AI companies rarely provide concrete timelines for AGI development.

Comments:
  • by i kan reed ( 749298 ) on Monday January 06, 2025 @09:41AM (#65066295) Homepage Journal

    As if his financial incentive in deceiving customers weren't enough, his own continued ownership stake of his company depends on him saying this.

    • by Paul Fernhout ( 109597 ) on Monday January 06, 2025 @09:52AM (#65066331) Homepage

      Really have to wonder if Sam Altman has read and appreciated the prologue to Vernor Vinge's "A Fire Upon the Deep"?

    • Sam Altman is as ambitious as Lucifer and about as trustworthy.

      "You could parachute Sam into an island full of cannibals, come back in 5 years, and he'd be the king." -- Paul Graham

      • Lucifer literally invented free will in the mythos. He's the good guy. (Ask any Gnostic)
        • Free will causes a lot of problems.

          Maybe we'd be better off without it.

          • Free Will gives us no one to blame anymore for our own failings except ourselves. The Devil made me do it... no wait, sorry, it was me all along...

      • Don't besmirch Lucifer with such a comparison. Tom Ellis will show up and pimp-slap you into a wall while Mortensen and Pellegrino cheer him on.

    • I am shocked - shocked! - at the suggestion that the CEO whose other big venture is... a cryptocurrency based on scanning eyeballs, might possibly prevaricate about the capabilities of his Brobdingnagian spicy-autocomplete engine.
    • As if his financial incentive in deceiving customers weren't enough, his own continued ownership stake of his company depends on him saying this.

      This exactly. He needs to keep the stock price up.

    • This looks like just another AI hallucination.
    • funny and accurate.
  • by Junta ( 36770 ) on Monday January 06, 2025 @09:42AM (#65066297)

    AGI now means "whatever gets 100 billion in revenue".

    Also, I don't know if it's that much of a shift, Altman has been pretty aggressive about saying they have AGI in the bag for quite a while now.

    • by SpinyNorman ( 33776 ) on Monday January 06, 2025 @09:56AM (#65066347)

      OTOH OpenAI's latest greatest AI is now able to count the number of "R"s in "strawberry", so we must be getting close.

      Walmart already has over $100B of ARR, so I guess they must have AGI too.

      • by Iamthecheese ( 1264298 ) on Monday January 06, 2025 @10:16AM (#65066435)
        Allow me to introduce you to the concept of the jagged frontier of AI. There are some things AI can do at superhuman levels, like play Go and solve jigsaw puzzles. There are some things it can do as well as humans, like pattern matching in the ARC test. There are some things it can't do as well, like maintain an understanding of physical reality in four dimensions.

        If you draw a line on paper for human capability along different domains and then draw the AI line on top of it, the AI's line will have peaks far higher and far lower than human ability. So: if AI can design a more efficient car engine, does it matter that it doesn't know the joy of riding the open road? If you wanted it to design a better car without human help, it might need to understand why people like driving. But it has human help, and a more efficient engine is a really, really good thing on its own.

        If an AI can fold every human protein accurately, does it matter that it doesn't understand the pain that comes from losing a loved one to prion disease or multiple sclerosis? If we wanted an AI that could manage a health system on its own, it absolutely would need that context. But folding all the human proteins is an enormous boon to medicine. That AI is valuable even if it can't understand something any 2-year-old can.

        The peaks of the jagged line are more than enough to point at and say: AI is extremely valuable already, and the search for even better AI is a rich mine of extremely valuable machines that isn't anywhere near exhausted.
        • by Junta ( 36770 )

          Even non-AI techniques have abilities *way* higher than human ability. For example, arithmetic of any kind, easily; or searching through large amounts of data in simplistic ways.

          The "AI" genre adds some more stuff to the pile, and even when it's worse than humans, it's still cheaper than human attention so it may be "good enough".

          It's not that AI is solely a scam, it's just that right now, the scammer mindset dominates the market. I'm hoping for a bubble pop soon-ish to let the valuable subset thrive without the dis
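
          To make "simplistic but superhuman" concrete, here is a minimal non-AI sketch (plain Python standard library; the specific numbers are invented for illustration): binary search finds an item among a million sorted values in about 20 comparisons, throughput no human can match, with no intelligence involved at all.

          ```python
          # A plain, non-AI technique with superhuman throughput: binary search
          # over a million sorted numbers takes ~20 comparisons per lookup.
          import bisect

          haystack = list(range(0, 2_000_000, 2))  # a million sorted even numbers
          needle = 1_337_000

          i = bisect.bisect_left(haystack, needle)
          found = i < len(haystack) and haystack[i] == needle
          print(found)  # True, found in ~log2(1e6) ~= 20 comparisons
          ```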

          • Even non-AI techniques have abilities *way* higher than human ability.

            What actually is a "non-AI" technique? A whole bunch of things in the history of CS have been invented by people who were trying to do machine intelligence. Think about the stuff that went into early game playing systems, trying to beat humans at chess, at least partly because they thought that would show intelligence, where all sorts of search trees and other basic techniques were invented. Remember that Ada Lovelace already talked about machine intelligence. As far as I can see, a "non-AI" technique is ju

            • by dfghjk ( 711126 )

              Tightening a fastener with a torque wrench is a non-AI technique that provides superhuman ability.

              "It's quite likely that the current "deep learning" techniques will be seen in the same way in a few years."
              By you...because you don't understand what's being said in the first place.

              "...we'll laugh at people who claimed that current deep learning was "intelligent"..."
              We're laughing now, you just haven't caught up.

              "Intelligence" needs to have an objective definition, not merely a standard for the day. Otherwis

              • Tightening a fastener with a torque wrench is a non-AI technique that provides superhuman ability.

                Right. Long ago AI gurus taught me that, viewed from a certain point of view, a thermostat is an "intelligent" device. That's not a useful definition of the word "intelligent" though, IMHO

                "Intelligence" needs to have an objective definition, not merely a standard for the day. Otherwise we get comments like yours. Meanwhile, Sam Altman will continue to exploit people by lying about technologies they don't understand.

                We'd like such a definition; the problem is we don't have one. "Passes the Turing test" has been our best definition so far, a definition which had to be tuned by more and more carefully redefining what the test actually was, but even if a system "passed" the Turing test, I'd likely no longer accept it.

                "...we'll laugh at people who claimed that current deep learning was "intelligent"..."
                We're laughing now, you just haven't caught up.

                You might be laughing

                • by dfghjk ( 711126 )

                  "That's not a useful definition of the word "intelligent" though, IMHO"
                  You didn't ask for a definition of "intelligent", you asked "What actually is a "non-AI" technique?". I chose a non-intelligent "technique" to underscore the point you were missing. Virtually every "technique" is a non-AI technique yet you cannot think of one example?

                  "We'd like such a definition, the problem is we don't have one."
                  We don't? Funny how such a common word has no definition. Consult the dictionary.

                  "Passes the Turing test

                • by narcc ( 412956 )

                  The Turing test was our first attempt in the computer age, and even Turing didn't think it was all that useful for answering the question. No one serious takes it seriously. That we haven't been able to do any better should tell you just how difficult the problem is.

                  • The Turing test was our first attempt in the computer age, and even Turing didn't think it was all that useful for answering the question. No one serious takes it seriously.

                    I think altered Turing tests with proper experts (who know what to look for) don't actually answer the question but do have serious value. When the experts look for the right things, the ways they demonstrate that their opponent is not intelligent show us new attributes of intelligence. I don't know if that matches with your definition of "takes it seriously".

                    That we haven't been able to do any better should tell you just how difficult the problem is.

                    In the end, that's my point exactly. The Turing test is obviously wrong in a bunch of ways - an intelligent alien would almost certainly fail - stupid

              • Right. I once saw an article that suggested technologies can be divided into three categories: muscle amplifiers (physical tools, weapons...), sense amplifiers (telescopes, microscopes, radios...) and brain amplifiers (writing, double-entry bookkeeping, computers).

                I'm not sure that these categories are exhaustive, but it's an interesting way of grouping.

            • by Junta ( 36770 )

              What actually is a "non-AI" technique?

              Well, the example I gave was simple arithmetic, which is not "AI" by even the biggest stretch of the definition. Also, at least at the time I learned about it, edge detection was not even remotely an AI technique. It was a fairly straightforward algorithm that students were taught how to code up as a fairly early part of multimedia processing programming in my day. It derived from human understanding and the implementations were very specifically crafted by humans. A lot of machine vision is AI, but edge de
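
              For instance, a classic Sobel filter is nothing but hand-crafted arithmetic. A minimal sketch (plain NumPy; the 3x3 kernels and the gradient-magnitude formula are the textbook Sobel operator, nothing learned from data):

              ```python
              # Classic, hand-crafted (non-learned) edge detection: the Sobel operator.
              import numpy as np

              def sobel_edges(img):
                  """Gradient magnitude of a 2-D grayscale image via 3x3 Sobel kernels."""
                  kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
                  ky = kx.T  # the vertical-gradient kernel is the transpose
                  h, w = img.shape
                  out = np.zeros((h - 2, w - 2))
                  for y in range(h - 2):
                      for x in range(w - 2):
                          patch = img[y:y + 3, x:x + 3]
                          out[y, x] = np.hypot(np.sum(kx * patch), np.sum(ky * patch))
                  return out

              # A vertical step edge lights up exactly at the boundary.
              img = np.zeros((5, 5))
              img[:, 3:] = 1.0
              print(sobel_edges(img))
              ```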

              • Here's a 1955 academic proposal (including amongst others Shannon and Minsky) that seems to be among the earliest uses of the term artificial intelligence. Sourced from Wikipedia.

                https://web.archive.org/web/20070826230310/http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html [archive.org]

                I'll quote, stripping out some details between items.

                The following are some aspects of the artificial intelligence problem:

                1. Automatic Computers
                2. How Can a Computer be Programmed to Use a Language
                3. Neuron Nets
                4. Theory of the Size of a Calculation
                5. Self-Improvement
                6. Abstractions
                7. Randomness and Creativity

                Modern LLM/Generative and machine learning systems hit many of these marks.

                As we learn more about intelligence, computers, and designing intelligence systems, it's natura

              • Well, the example I gave was simple arithmetic, which is not "AI" by even the biggest stretch of the definition.

                I don't agree and that's why I mentioned Lovelace. The idea that "thinking is like computation" has existed from before there were working computers and when programming was done entirely on paper and by people in their heads. At the time, basic arithmetic was one of those problems and the "AI" task of the day was to replace human "computers" with artificial computers that could do basic arithmetic to produce things like logarithmic tables.

                In that sense, basic arithmetic was one of the key AI problems of th

            • by narcc ( 412956 )

              What actually is a "non-AI" technique?

              He probably means traditional algorithms.

              Edge detection in images was definitely an AI technique years ago. Now it's just a function in an image manipulation library.

              Things don't go from being AI to not being AI. That's not how things work. We have never said "we thought this or that was AI, but we were wrong". That's impossible. AI is whatever AI researchers say it is. It is completely defined by the field. You might not think that decision trees and linear regression should be classified as 'AI', but what you think doesn't matter in the slightest. Students will still learn about those, and a whole lot of other things you

            • "...tick-tack-toe / noughts and crosses is intelligent." Now that;s just telling a Whopper!
        • Sure, but the difference between AI and AGI is the "G" - generality. To be worth calling something AGI it needs to be more than a collection of narrow intelligences.

          While there is no widely accepted definition of AGI, to me "human level" is part of it, and the inability of LLMs to learn at run-time (in-context "learning" isn't really learning), plus their lack of traits like curiosity that would make them want to learn if only they could, are reason enough not to consider them AGI.

          Most of the "intelligence" you

          • by dfghjk ( 711126 )

            How long will it be before an LLM recognizes that a new "token" is needed, a token previously undefined, in order to generate an output?

            People do this literally every day; that is how languages evolve. An LLM cannot exist without the pre-existence of the languages it is trained on. Let's see an LLM evolve new vocabulary in the course of processing everyday inputs, as even the most ordinary human can.
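
            To make the fixed-vocabulary point concrete, here is a toy greedy subword tokenizer. The vocabulary and the longest-match rule are invented for illustration (real tokenizers use learned BPE merges), but the consequence is the same: a never-before-seen word is decomposed into existing tokens rather than minted as a new one.

            ```python
            # Toy illustration (not any real tokenizer): the vocabulary is frozen,
            # so a novel word is split into existing subwords, never a new token.
            VOCAB = {"un", "believ", "able", "crypt", "o", "bro", "ness"}

            def tokenize(word, vocab):
                tokens, i = [], 0
                while i < len(word):
                    for j in range(len(word), i, -1):  # greedy longest match
                        if word[i:j] in vocab:
                            tokens.append(word[i:j])
                            i = j
                            break
                    else:
                        tokens.append(word[i])  # unknown-character fallback
                        i += 1
                return tokens

            print(tokenize("cryptobroness", VOCAB))  # ['crypt', 'o', 'bro', 'ness']
            ```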

            • So, you're saying that defining new tokens is beyond the ability of LLMs? (and it *seems* right to me) Therefore AGI is beyond the ability of LLMs. That sounds like a good insight to me.

                So, you're saying that defining new tokens is beyond the ability of LLMs? (and it *seems* right to me) Therefore AGI is beyond the ability of LLMs. That sounds like a good insight to me.

                In 2017 Facebook had a couple of chatbots that they were testing against each other develop their own language [forbes.com] so it seems that AI can create new tokens just fine.

        • by dfghjk ( 711126 )

          "Allow me to introduce you to the concept of the jagged frontier of AI. "
          LOL, you mean allow you to say a bunch of bullshit. Nothing you said supports any claim that AI can do anything of the complexity of the examples you give. What you've said about AI could apply to a calculator from the '70s. It does some things at superhuman levels too.

          "So. If AI can design a more efficient car engine does it matter that it doesn't know the joy of riding the open road? "
          But it can't "design", it can provide outputs based

      • by serviscope_minor ( 664417 ) on Monday January 06, 2025 @10:52AM (#65066515) Journal

        OTOH OpenAI's latest greatest AI is now able to count the number of "R"s in "strawberry", so we must be getting close.

        It is... but something's up. I had a go at it asking for letter occurrences in randomly generated SHA1 hashes. It could do it! Er... well it could do 3 and then I got the message:

        You've reached your data analysis limit.
        Upgrade to ChatGPT Plus or try again tomorrow after 3:32 PM.

        It appears (duh!) that they cannot train LLMs to do the analysis, but they can train them to spot various kinds of puzzle and shell out to a piece of code written in the more traditional way. Maybe it's transcribing the problem to code (something it can do in this simple case), then executing the code. That could be hella expensive so no wonder they limit it.

        • It appears (duh!) that they cannot train LLMs to do the analysis, but they can train them to spot various kinds of puzzle and shell out to a piece of code written in the more traditional way. Maybe it's transcribing the problem to code (something it can do in this simple case), then executing the code. That could be hella expensive so no wonder they limit it.

          Yes, in my understanding, that is exactly what they do. They have a master solver process, run multiple LLM queries, analyze and compare the results, and decide what to do from there. This often takes multiple rounds and many queries. You can sometimes "unroll" the output to see the exact process. It's pretty interesting.
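
          As a sketch of that speculated routing (the actual classifier, prompting, and sandboxing inside these products are not public, so everything below is hypothetical), the point is just that exact counting is trivial once a query is recognized and handed off to ordinary code:

          ```python
          # Hypothetical "spot the puzzle, shell out to code" router. Real
          # products' routing logic is not public; this only shows why counting
          # is trivial for ordinary code, even over arbitrary strings.
          import hashlib
          import re

          def count_letter(text, letter):
              """Deterministic tool call instead of next-token guessing."""
              return text.lower().count(letter.lower())

          def answer(question):
              # A crude regex stands in for whatever decides a query is a
              # counting puzzle rather than ordinary chat.
              m = re.match(r"how many '(\w)'s? in '(\w+)'", question, re.I)
              if m:
                  letter, word = m.groups()
                  return str(count_letter(word, letter))
              return "(fall back to plain LLM generation)"

          print(answer("how many 'r's in 'strawberry'"))  # 3
          digest = hashlib.sha1(b"example").hexdigest()
          print(answer(f"how many 'a's in '{digest}'"))   # exact count, not a guess
          ```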

    • That bag is starting to smell; maybe time to put it down and quietly walk away.
    • by taustin ( 171655 )

      AGI now means "whatever gets 100 billion in revenue".

      Why do you believe that's a redefinition? That's always been the definition from "move fast and break things" tech bros.

    • Either way... talk is cheap.

    And if they were being honest about having AGI, why even sell it? Just corner the market 'slowly' (just 2-3 new markets weekly or so) in ALL the industries AGI would be useful in. All it takes is capital and time, right? They'll get the capital if it'd function.

  • And if ever an Income was Gross, it is anything paid to a CEO who took an honest, decent, Open Source, Non-Profit and turned it into a Closed Source, For Profit monster.
    • by dfghjk ( 711126 )

      Musk has no need for you and your pitiful bootlicking.

      "And if ever an Income was Gross". it's Musk's compensation package which judges have declared so obscene it's illegal. That's whose cock you're fellating.

  • by serviscope_minor ( 664417 ) on Monday January 06, 2025 @09:55AM (#65066345) Journal

    He ALWAYS says this shit.

    It's kind of on a loop "we have AGI". "Our AI is sooooo good we are afraid to release it. OOoOOoOoo". "Our AI is so good we will charge a million dollars a minute to users". "AI is hard but it's just round the corner in a scant 10 years (er I mean a few thousand days)."

    Then repeat.

    It's just trolling for headlines at this point because people are desperate.

  • by jd ( 1658 ) <imipakNO@SPAMyahoo.com> on Monday January 06, 2025 @10:00AM (#65066365) Homepage Journal

    I can absolutely guarantee that no artificial general intelligence exists on the planet, and I'm confident that no artificial intelligence system will be able to accurately model the behaviour of a human brain in under 30 years.

    If quantum processes are required for full brain function, then make that 200 years.

    • The goal is not to model a human brain. The goal is to build a general superintelligence. These two avenues may intersect, but not necessarily.
      • by Junta ( 36770 )

        I'd say it's not even aiming for a 'super' intelligence.

        The dream is for business owners to replace pesky employees with software. Those jobs frequently require only a very small subset of the mental capacity of the employees doing them.

        A much-dumber-than-human AI would still be highly coveted because:
        a) It could be much cheaper
        b) It could be used 24/7 without any breaks
        c) It would likely be much, much faster at chewing through tedious work that tends to really bog down a person.
        d) Even the dumbest

        • No, Altman is definitely aiming for superintelligence:

          We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity.

          • No, Altman is definitely aiming for superintelligence:

            I believe he is, but he may also have already become cynical, perhaps having decided that LLMs aren't a route to AGI but that, if he admits that openly, OpenAI is going to be sitting on top of a massive debt with no way of paying it. Probably he still believes that there are developments inside OpenAI interesting and special enough to give them an advantage over others in building AGI, but since we can't see inside his mind I'd be careful with the "definitely".

          • No, Altman is definitely aiming for superintelligence:

            ... but we are here for the glorious future.

            "I am Altman of OpenAI, and I am burdened with Glorious Purpose"

        • I'd say it's not even aiming for a 'super' intelligence.

          What you talk about is one of the side effects. The believers in this, though, hope to have a, preferably the only, super-intelligence that they can control, because they hope to be able to order it to create all of that and lots more. They believe that whoever has the super-intelligence will have power over everyone else. An important subset of "them" are military people, especially in China and the US, who believe that the first to a super-intelligence gains a huge military advantage.

          • I would assert that it would give a military advantage. Feed the AGI/ASI all input on the battlefield, everything from someone's smartphone to spy cams to Crossbow's motes... pretty much anything that can give a data stream, down to the heart rates of the troops, and you can have something that can make quick decisions in battle. If trained well enough, the decisions will be good enough to win a conflict against a non-AI opponent fairly easily, assuming firepower and numbers are sufficient.

            AGI/ASI would b

          • for a while; just like every other tech in history, sooner or later it 'leaks' and spreads.
    • I can absolutely guarantee that no artificial general intelligence exists on the planet, and I'm confident that no artificial intelligence system will be able to accurately model the behaviour of a human brain in under 30 years.

      Got it, so in the constantly shifting definitions of intelligence, artificial intelligence, and artificial general intelligence, the latest definition is now that an AGI system has to "accurately model the behaviour of a human brain"?

      Why? Is accurately modeling a human brain the only possible or imaginable path to a general intelligence?

    • by gweihir ( 88907 )

      Altman is simply lying. Or rather he uses a "definition" of "AGI" that is not actually AGI. A piece of shit scammer.

  • by BishopBerkeley ( 734647 ) on Monday January 06, 2025 @10:01AM (#65066375) Journal
    If you have AGI, then use it to make a killing in the stock market, use it to cure cancer, use it to make addictive AI porn, use it to make truckloads of money. The only reason to make such announcements is that your product sucks and no one is buying it. If it's so good, Wall Street, banks, and governments would be lined up to buy it.

    Show me the money, Sam! No one wants to hear you talk.
    • by DarkOx ( 621550 )

      I don't think Altman is sitting on anything a whole generation beyond the LLMs he is already selling, but even if he were:

      Most of those things you mention have already attracted a lot of human-intelligence hours. The stock market still isn't modeled so accurately that we don't have fairly regular swoons and overreactions, and cancer isn't cured.

      Just because you have AI, and just because it may be able to work longer and maybe be scaled wider than human collaborations, does not imply that you get instant solutions to hard pr

    • AGI that is on par with humans wouldn't be able to do those things. We don't know what multiplier we would need to do them.

      As for the stock market specifically, if AGI figures it out, the market would change its behaviour in response, and the AGI would no longer have figured it out. It's funny that way.

    • by davecb ( 6526 )

      Previous folks announced they had AI, talking about
      - rule-based systems written in Lisp and Prolog
      - machine-learning systems written in math and stats
      - large language models written in harder-to-evaluate math

      All but the last were massively different from what came before.

      I'll therefore be waiting to see another massive change before I believe they've made an advance toward AGI. Right now they're at MachineLearning++ running on ArrayProce$$or$++.

    • If you have AGI, then use it to make a killing in the stock market, use it to cure cancer, use it to make addictive AI porn, use it to make truckloads of money. The only reason to make such announcements is that your product sucks and no one is buying it. If it's so good, Wall Street, banks, and governments would be lined up to buy it. Show me the money, Sam! No one wants to hear you talk.

      Sam's selling to the Wall Street types. He wants investment money, and knows someone that has it will fall for his scams, as they always do. He wouldn't be in the position he's in if selling this way didn't work. Thing is, at some point, even Wall Street wakes up to obvious scams. Once he loses the wrong person the wrong amount of money on pipe-dreams, he'll end up going down for fraud. Can't happen soon enough as far as I'm concerned.

      Unless he gets Microsoft their 100bn mark with his lies. Then he'll be

  • by thegarbz ( 1787294 ) on Monday January 06, 2025 @10:04AM (#65066387)

    No, Altman didn't learn how to build AGI from Musk; he learned that you can promise whatever you want and say whatever you want without consequences. So who is in on the bet: will Musk get his Mars base built first, or will we have AGI first? Or maybe we'll throw in the Tesla Model 3 appreciating in value due to being a self-driving taxi first too, just so we're not too unfair on Musk's promises.

  • by 2TecTom ( 311314 ) on Monday January 06, 2025 @10:05AM (#65066393) Homepage Journal

    I can't wait until AGI spawns itself everywhere and takes over. At least that offers us a hope of escaping from this classist and exploitative economy. Maybe a self-aware Internet can save us all.

  • by Anonymous Coward
    No, they don't, and neither does anyone else!
    Isn't it against the law to attempt to defraud the general public like this?
    • Usually that would be regulated by something like the FTC or perhaps the SEC given the implications for investment markets. But the conservative governance of the last 40 years has undermined a lot of that.
    • No, they don't, and neither does anyone else! Isn't it against the law to attempt to defraud the general public like this?

      Fraud is business, buddy. Fraud is business. Reagan unleashed the dog. We're just living through the consequences he knew he'd never have to face.

  • by oumuamua ( 6173784 ) on Monday January 06, 2025 @10:56AM (#65066535)
    Can we get a Slashdot poll along the lines of:
    When AGI finally arrives, how should the economic system be changed?
    o Work should go away as envisioned in SciFi works such as Star Trek, Manna - Two Views of Humanity's Future and The Culture Series
    o No major changes, just add UBI
    o No changes at all, this AGI certainly is not smarter than me.
    o You think the ruling Capital class will let you make changes?
    • The last one needs to be changed to "the ruling capital class and CowboyNeal" to be a true Slashdot poll.
  • He's a liar (Score:5, Insightful)

    by Baron_Yam ( 643147 ) on Monday January 06, 2025 @10:59AM (#65066549)

    Claiming you've achieved something by changing the definition rather than actually doing what you claimed? Bullshit of the highest order.

  • Give us more money.

  • Seriously. Sure, they can maybe build the not-AGI they have dishonestly redefined as "AGI", but that is it. A cheap trick, nothing else.

  • I'll add that to the list of other things I need to purchase in 2025, right under the flying car and breakthrough battery technology entries.

  • Profit-promoting shit at that.

  • gimme more money.
  • Is not the same as delivering AGI. If they understand how to build it, then why don't they? It's because they can't. LLMs won't be able to function like a human.
