AI

OpenAI Japan Exec Teases 'GPT-Next' (tweaktown.com) 58

OpenAI plans to launch a new AI model, GPT-Next, by year-end, promising a 100-fold increase in power over GPT-4 without significantly higher computing demands, according to a leaked presentation by an OpenAI Japan executive. The model, codenamed "Strawberry," incorporates "System 2 thinking," allowing for deliberate reasoning rather than mere token prediction, according to previous reports. GPT-Next will also generate high-quality synthetic training data, addressing a key challenge in AI development. Tadao Nagasaki of OpenAI Japan unveiled plans for the model, citing architectural improvements and learning efficiency as key factors in its enhanced performance.
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Yes... exponential growth... forever... yep... I believe you...
    • by joh ( 27088 )

      Yes... exponential growth... forever... yep... I believe you...

      Don't see anyone saying forever.

      • Look at the plot in the article
        • by joh ( 27088 )

          I did. It ends with "future models" at "202x", this is far from "forever".

        • by znrt ( 2424692 )

          that graph is bogus either way. it also shows 100x greater efficiency from gpt3 to gpt4. as vague as terms like "efficiency" or "power" are in this context, there isn't a 100-fold improvement from gpt3 to gpt4 in any metric, not even remotely near that.

the framing is such fucking obvious bullshit that they must be scraping the bottom of the barrel for the most clueless investors left. otoh, the fact that this is becoming so extremely dumb suggests that the reality check might be near.

the metric they usually apply for comparison between GPTs is the parameter count... which is kind of like estimating a car engine's power by the inverse of its gas mileage.

            Clearly there is a relationship, but if this is the metric you optimize for, you are doing something wrong.

But it's not necessarily a false claim; most marketing is like applied statistics: it's not about lying, it's about a careful curation of facts.

            • by Shaitan ( 22585 )

"But it's not necessarily a false claim; most marketing is like applied statistics: it's not about lying, it's about a careful curation of facts."

              Sounds like those online 'fact checker' sites to me.

There are many avenues for a curation of facts, but a fact-checker site is truly a curious one... Can you elaborate on where the curation is?

on which facts are "checked" as false vs. true? does that mean there are more facts which are false but unchecked?

                • by Shaitan ( 22585 )

                  Then you must not have reviewed the so called "fact checker" sites. These sites are used as part of a loosely organized political propaganda and misinformation system. They use a number of rhetorical tricks to undermine the critical thinking process.

                  One trick is nicely contained in your speculation "on which facts are "checked" as false vs true." Facts are neither true nor false, facts are data and may be accurate or inaccurate to some proven deviation. Truth on the other hand is a subjective and relative n

      • It doesn't have to be forever, just as long as is convenient for this hype cycle.
    • Basically rent forever, though.

      Imagine if every employee was hired through one central agency that took a cut. Same thing if an AI replaces the employee.

      Except this time, they may not even give access to the trained model itself. They may only give access to "employee" work units and keep the rest proprietary.

    • Yes... exponential growth... forever... yep... I believe you...

      Exponential growth of the con-game that is AI marketing; exponential growth of parting fools and their money, giving them shit-ware in exchange.

  • by mukundajohnson ( 10427278 ) on Friday September 06, 2024 @09:12AM (#64767798)

    I'd wait until year's end to see the results before making any conclusions.

  • Wrong answers... Faster!
The one country which allows the use of what would otherwise be illegally copied content for AI training. Guess the policy did land Japan a big fish.

  • Comment removed based on user account deletion
    • Per month.

    • by Shaitan ( 22585 )

Yeah, certainly not for long. Whatever truth there is to their advances, there will be a pattern to it, and where there is a pattern there is something to see and implement independently of them.

    • by gweihir ( 88907 )

It is 2k/month. And no, I do not see that working either. They probably do not see it themselves, but are merely trying to keep the hype alive. The whole thing has more and more overlap with the typical progression of a large-scale scam.

  • by Baron_Yam ( 643147 ) on Friday September 06, 2024 @09:38AM (#64767876)

    As soon as I read "deliberate reasoning" I called bullshit. If someone has discovered how to imbue actual intelligence in an AI, it would be a much, much bigger deal than this.

    • Comment removed based on user account deletion
      • I didn't say it was impossible, I said it hasn't happened in this case.

      • Your denials to the contrary, I can see no fundamental obstacle to the creation of true "artificial intelligence". When we have overcome that obstacle (which may well be quite soon), "artificial" intelligence will almost certainly grow at a rate we might predict but probably won't.

        You called bullshit - but there's a lot of really smart guys who say you're wrong, and a lot of money to back them up.

There's also a *LOT* of hyperbolic market-speak surrounding AI right now trying to inflate that bubble, so it's hard to believe the hype when so much of it has turned out to be just that: hype, with no truth ever coming to back it up.

Not that I think real artificial intelligence is impossible. I just don't think we're on the path to it with the LLM hunt. Maybe they managed to make the leap I keep expecting someone to make and changed the fundamentals, but OpenAI thus far has been one of the larges

LOL no, we have no idea how 'reasoning' works in a biological brain, therefore we have no chance of building machines or writing software that can do it until we discover how it works neurologically.
        If you insist on contradicting me, then please do cite references and whitepapers from authoritative sources on the subject that detail the exact mechanisms by which 'reasoning', 'cognitive ability', 'consciousness', and so on are accomplished.
      • by guruevi ( 827432 )

No, there is a lot of marketing saying we (the scientific community at large) are not entirely wrong but have found a way to fake it.

Preliminary testing with their latest model shows a regression in quality because their previous models have produced so much garbage that it is becoming garbage-in, garbage-out without significant human filtering. This new model, if you read between the lines, requires artificially produced training sets because the organic training sets have become so polluted; those artific

      • by Shaitan ( 22585 )

True artificial intelligence would be sentient, self-aware, and self-determining. We don't have the slightest clue how to do A or B, so that seems like a problem. Though I think the secret to B at least might be a reverse Turing test: the system should not be able to determine that it isn't human, or which of the participants in the conversation is not.

        It doesn't matter, the biggest obstacle is that we WON'T build anything which fits C. We are too paranoid and too likely to project our own qualms upon it. We c

      • by gweihir ( 88907 )

        I can see no fundamental obstacle to the creation of true "artificial intelligence".

        That is because you are lacking in general natural intelligence. Here, you are simply ignoring the observable facts because you want them to not be true.

    • As soon as I read "deliberate reasoning" I called bullshit. If someone has discovered how to imbue actual intelligence in an AI, it would be a much, much bigger deal than this.

      Exactly this. Ask any decent neuroscientist whether we know how 'reasoning' works in a human brain, and they'll tell you we don't. Ask them if so-called 'AI' has the ability to 'reason', and they'll laugh.

      • by Shaitan ( 22585 )

        You assume it has to work the same way as in the human brain but the training sets contain mass quantities of logic patterns extracted from human brains reasoning. It is both possible to build an algorithm that converges to replicate that function without being able to define it for ourselves [this is certainly the case with modern voice and image recognition] and also possible that an algorithm could be developed which systematically queries [prompts] the token predictors in a way that produces a reasonabl

In reality it isn't such bullshit as you might think. We humans are also nothing more than a biological computer/robot; our brains are merely a bunch of bio-electrical connections. If you think our thought process can't be recreated in a computer then you really are very naive. Yes, it will still take some time to really emulate the way we think, but it's just a matter of time before we reach that level; as computing power increases it won't even be that long.
    • by gweihir ( 88907 )

Yep. They are announcing they have the Holy Grail of AI, without any proof or factual basis and without any indication from 50 or so years of AI research that this is even possible. This is pure, concentrated bullshit, nothing else. The usual idiots will fall for it and aggressively claim it is all true, though.

    • @Baron_Yam If strawberry is based on Quiet-STaR (Stanford), there's a better explanation of your observations than "bullshit": the new trick for improving deliberate reasoning is still very, very expensive. In Quiet-STaR, the model learns to use a scratchpad by gradient descent (not in-context learning, not RL; it's not like "think step by step"). The (very big) downside is that for K tokens of scratchpad, it takes K times as long to generate each output token. They've probably found ways to improve on that
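If it is Quiet-STaR-style, the cost argument can be sketched numerically. A minimal, hypothetical illustration (the function and numbers below are invented for this sketch, not taken from the paper):

```python
# Hedged sketch: per-token cost of interleaving a hidden scratchpad
# (Quiet-STaR-style "thought" tokens) with normal decoding.
# All numbers are illustrative assumptions, not measured values.

def decode_cost(output_tokens: int, thought_tokens_per_output: int) -> int:
    """Total tokens the model must generate when it emits
    `thought_tokens_per_output` hidden scratchpad tokens before
    each visible output token."""
    return output_tokens * (thought_tokens_per_output + 1)

# A 100-token answer with no scratchpad costs 100 generated tokens;
# with a 16-token scratchpad per output token it costs 17x as much.
assert decode_cost(100, 0) == 100
assert decode_cost(100, 16) == 1700
```

That multiplicative blow-up is why "System 2" decoding is expensive: the deliberation happens in extra generated tokens the user never sees.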
    • As soon as I read "deliberate reasoning" I called bullshit. If someone has discovered how to imbue actual intelligence in an AI, it would be a much, much bigger deal than this.

This is not a big deal because it's been around for a while, just not built into the base models. Instead it's been used as a prompt-engineering technique to get the models to solve a problem step by step. It's often called chain-of-thought prompting and was proposed in 2022. https://en.wikipedia.org/wiki/... [wikipedia.org]

      Intuitively, you c
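As a sketch of what chain-of-thought prompting looks like in practice (the worked example is the classic tennis-ball problem from the 2022 paper; `ask_llm` is a hypothetical placeholder, not any real API):

```python
# Hedged sketch of chain-of-thought prompting: the model is shown a
# worked example with intermediate reasoning, then asked a new question.

COT_PROMPT = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of 3 balls each.
   How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls.
   5 + 6 = 11. The answer is 11.

Q: The cafeteria had 23 apples. They used 20 and bought 6 more.
   How many apples do they have?
A:"""

def ask_llm(prompt: str) -> str:
    # Placeholder: call whatever chat-completion API you actually use.
    raise NotImplementedError

# With the worked example in context, models are much more likely to
# emit step-by-step reasoning before the final answer than with a
# bare question.
```

The claim in the thread is essentially that "Strawberry" bakes this behavior into the model instead of leaving it to the prompt.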

  • BS heavy (Score:4, Insightful)

    by dfghjk ( 711126 ) on Friday September 06, 2024 @09:38AM (#64767878)

The summary here says "100-fold increase in power" while the article says "100x better", "100x more powerful" and an "Orders of Magnitude (OOMs)" leap. Separate from the lame stupidity of these claims, and the obvious fact that at least some of the authors don't know what any of that means: what precisely is this "power" or "better" metric? How is this improvement measured, rather than just claimed?

Also, the "100x performance increase", yet another different claim, is said to be achieved "without wasting significantly more computing resources". Now, ignoring that resources used to improve performance are NOT wasted, by definition: if we take that claim at face value, along with the subsequent claim that the "improvement comes from better architecture and learning efficiency", just how f*cking terrible must the current implementation be to allow for a 100x increase in performance in a single generation?

    It's really obvious that OpenAI is mostly fraud, but it seems that what little isn't fraud includes very little competence. Must be nice being paid so much while sucking at your job.

    • Comment removed based on user account deletion
      • by HiThere ( 15173 )

Calling AIs unintelligent is probably incorrect. Ungrounded would be better. It's still a bit wrong, as they are grounded in the texts they have been trained on, but it's pretty close. I think it's best to interpret AI responses as "dreams of the AI". Some parts hold together, some don't, and it's likely to lose the chain of thought it's been following.

        • by gweihir ( 88907 )

          Calling LLMs "non-intelligent" is merely descriptive and entirely accurate. I tried ChatGPT on one of my exams. It got 100% of the questions that require minimal thinking wrong, including the ones that even the dumbest students got right.

          • by HiThere ( 15173 )

            I'd need more context, but I suspect that what you're noticing is that it isn't grounded on the parts of intelligence that you are interested in. Intelligence isn't a single crystalline method, it's a grab-bag.

            You probably have an "oversimplified model" of what an LLM is. Either that, or you've got an exact definition of intelligence that isn't shared by common usage (and which you haven't made explicit). My model is "intelligence is a tool box of ways of dealing with problems coupled by a recognition of

            • by gweihir ( 88907 )

              Stop trying to justify why LLMs are "intelligent" despite all the clear signs they are not.

              Here you tried:
1. Ad hominem
              2. Redefining "intelligence"

              Seriously, accept that LLMs are (again) not what the AI pushers promised and move on.

              • by HiThere ( 15173 )

                You haven't given an explicit definition of intelligence. I offered the one I was using, but I'd be willing to accept yours, if it were explicit enough to use.

                • by gweihir ( 88907 )

You are simply trying to confuse the issue because there is no other way you can "win". And now you insist on that transparently bogus "strategy". Seriously? Well, I am using the definition here that most people actually use, and that makes your asking for a definition bogus as well: https://en.wikipedia.org/wiki/... [wikipedia.org]

                  Only those desperate to ascribe properties to AI and other technologies that they do not have use "intelligent" as a property for things like vacuum cleaners. Obviously, marketing has no s

    • by gweihir ( 88907 )

      Unless proven otherwise, I will assume 100x more credible or frequent hallucination and 100x more overlooking of critical details. LLMs are _dumb_.

  • The 'GPT-Next' on the graph he was showing was a hypothetical stand-in for 'the next model.' This was not an announcement of a new model.
    • by gweihir ( 88907 )

So a great announcement of... nothing? That fits the AI hype nicely. They now hallucinate the next progress steps.

  • What metric are they using when they say 100x better?

System 2 reasoning, though, I know what that is, from the book "Thinking, Fast and Slow". System 1 is looking at an angry person and knowing they are angry: fast and effortless for people. System 2 is looking at a multiplication problem and doing the long multiplication in your head to get the right answer: slow and exhausting for people. Attaching System 2 to an AI is pretty easy: give it access to a calculator (and proof engine, and Wolfram Alpha) a
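The "give it a calculator" idea can be sketched in a few lines. A minimal, hypothetical illustration, assuming the model has been instructed to wrap arithmetic in a made-up `CALC[...]` tag (an invented convention for this sketch, not any real protocol):

```python
# Hedged sketch of "System 2 via tools": instead of asking the model
# to do arithmetic in-weights (System 1 pattern matching), route any
# arithmetic expression it emits to an exact evaluator.
import ast
import operator
import re

_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def _eval(node):
    """Safely evaluate a parsed arithmetic expression."""
    if isinstance(node, ast.Constant):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
    raise ValueError("unsupported expression")

def run_tools(model_output: str) -> str:
    """Replace every CALC[expr] tag with the exact result."""
    def repl(m):
        return str(_eval(ast.parse(m.group(1), mode="eval").body))
    return re.sub(r"CALC\[([^\]]+)\]", repl, model_output)

# The model only has to *decide* to use the tool; the tool does
# the slow, exact System 2 work.
assert run_tools("17 * 24 = CALC[17 * 24]") == "17 * 24 = 408"
```

The design point is that the language model handles the fuzzy recognition (noticing that a calculation is needed) while an exact external tool handles the part LLMs are worst at.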

    • by gweihir ( 88907 )

      Simple: This is what marketing told them to claim to make the lie believable. And look, it works nicely on the usual idiots.

@bob_jenkins What you've described is indeed standard, at least for proprietary models. But their advancement will be more than that. They'll have implemented a trick for improving System 2 reasoning that is more low-level and less ad hoc than the techniques used to improve tool use (see section 4.3 of the Llama 3.1 paper). It won't be THE trick, though.
  • The real danger of so-called """AI""": wasting valuable resources on pointless bullshit.
A great comment I saw recently was something to the effect of "I want AI to do my laundry and run my errands so I can work on art and writing, not AI doing my art and writing so I can do laundry and run errands"

      • by gweihir ( 88907 )

Haha, that is a nice one indeed!

A great comment I saw recently was something to the effect of "I want AI to do my laundry and run my errands so I can work on art and writing, not AI doing my art and writing so I can do laundry and run errands"

        Take my upvote.

Soo, can this thing hallucinate 100x as hard, or can it overlook critical details 100x as reliably?

  • What is "power" - is that a new benchmark, or are they talking about increasing the datacenter power consumption by 100x ?

Experiments must be reproducible; they should all fail in the same way.
