AI Technology

OpenAI Working On New Reasoning Technology Under Code Name 'Strawberry' (reuters.com) 83

OpenAI is close to a breakthrough with a new project called "Strawberry," which aims to enhance its AI models with advanced reasoning abilities. Reuters reports: Teams inside OpenAI are working on Strawberry, according to a copy of a recent internal OpenAI document seen by Reuters in May. Reuters could not ascertain the precise date of the document, which details a plan for how OpenAI intends to use Strawberry to perform research. The source described the plan to Reuters as a work in progress. The news agency could not establish how close Strawberry is to being publicly available. How Strawberry works is a tightly kept secret even within OpenAI, the person said.

The document describes a project that uses Strawberry models with the aim of enabling the company's AI to not just generate answers to queries but to plan ahead enough to navigate the internet autonomously and reliably to perform what OpenAI terms "deep research," according to the source. This is something that has eluded AI models to date, according to interviews with more than a dozen AI researchers. Asked about Strawberry and the details reported in this story, an OpenAI company spokesperson said in a statement: "We want our AI models to see and understand the world more like we do. Continuous research into new AI capabilities is a common practice in the industry, with a shared belief that these systems will improve in reasoning over time."

On Tuesday at an internal all-hands meeting, OpenAI showed a demo of a research project that it claimed had new human-like reasoning skills, according to Bloomberg, opens new tab. An OpenAI spokesperson confirmed the meeting but declined to give details of the contents. Reuters could not determine if the project demonstrated was Strawberry. OpenAI hopes the innovation will improve its AI models' reasoning capabilities dramatically, the person familiar with it said, adding that Strawberry involves a specialized way of processing an AI model after it has been pre-trained on very large datasets. Researchers Reuters interviewed say that reasoning is key to AI achieving human or super-human-level intelligence.

This discussion has been archived. No new comments can be posted.

  • Let's say the group developing the code and restrictions that make reasoning possible has the reasoning abilities of a devout religious fruitcake or the brain capacity of MTG.

    Then the results we get are things like AI telling you to shove a light up your ass to cure COVID, or that a law preventing non-citizens from voting in Federal elections is needed.

    Point here is, the people developing the logic are just as human as the next person. It will get messed up, without question.
    • The intelligence of the creator is not an indicator of the intelligence of the things that they create.

      Case in point: smart people are producing dumb AIs.

      • by HiThere ( 15173 )

        More to the point Samuel's checker playing program, http://incompleteideas.net/boo... [incompleteideas.net] , which could beat him (and lots of other folks) at checkers.

      • by gweihir ( 88907 )

        Let me fix that for you: Smart people without morals or integrity are producing dumb AI to rake in tons of money from the clueless.

        Actually competent AI research has basically discounted LLMs long ago because the approach is fundamentally flawed and cannot be fixed. It makes for great demos though and that is where the scam artists come in.

        • by Bongo ( 13261 )

          Let me fix that for you: Smart people without morals or integrity are producing dumb AI to rake in tons of money from the clueless.

          Actually competent AI research has basically discounted LLMs long ago because the approach is fundamentally flawed and cannot be fixed. It makes for great demos though and that is where the scam artists come in.

          Please, some place I can find a nice summary of this?

          (Or shall I ask ChatGPT :-D )

    • https://www.cedars-sinai.org/n... [cedars-sinai.org]

      But don't let facts slow your roll.

      • by Tablizer ( 95088 )

        The authors admitted it's just a speculative idea. It still doesn't appear it's been tested on actual cases.

        Could be tricky anyhow because UV doesn't travel that deep into the body, and can cause sun-burn. The body countering the sun-burn could take metabolic resources away from virus battles, as both require removing or repairing damaged cells.

        I'm not saying it's impossible, only very preliminary and speculative.

    • by Tablizer ( 95088 )

      A common feature of trying to reason with human political & religious trolls is that they often try to change the subject to a roughly related topic when you start cornering them with logic. If one doesn't let the reasoning bot change the subject, in theory you can get it to admit to contradictions or unsubstantiated claims/assumptions.

      • by Tablizer ( 95088 )

        Addendum: forced focus still wouldn't work on OrangeGPT because it would deny making the statement you are comparing to point out its contradiction. "I didn't make statement #13, your laptop must have a bug, or the Deep State hacked #13 in."

  • Demands will be met. Taxes will be going up by the way. You remember those things, right? Taxation without representation will need some more tea.

    • "Humans have looked up at the stars for millennia and wondered if there was a deity up there deciding their fates. Today, they'll be right, and the world will be an undeniably better place."
      - Greer, Person of Interest. Right as said "deity" was implementing a set of "corrections." (Effectively executing order 66.)
  • I had a co-worker go nuts at a strawberry festival and apparently the tiny seeds caused a bowel obstruction and other fun stuff related to that, and he almost died, was in the hospital for a few weeks.

    Naturally I told everyone he was in the hospital cause he got a golf ball stuck up his ass, but maybe this binge of strawberries will do its job

    There are 2 R's in "strawberry".

    • There are 2 R's in "strawberry".

      ROFLcopter!
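The gag references a widely shared LLM failure mode: models operate on tokens rather than characters, so letter-counting questions often go wrong. The check itself is trivial in ordinary code:

```python
# Count the letter "r" in "strawberry" -- the question many chatbots
# famously get wrong because they see tokens, not characters.
word = "strawberry"
r_count = word.count("r")
print(r_count)  # prints 3, not the 2 the joke (and many chatbots) claim
```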

  • by TheStatsMan ( 1763322 ) on Friday July 12, 2024 @11:10PM (#64622471)

    It typifies everything the valley is about. Get out while you can.

  • by 93 Escort Wagon ( 326346 ) on Friday July 12, 2024 @11:24PM (#64622483)

    And, no matter what question you ask it, the response will be "PPBBBBT"!

  • Horseshit. (Score:4, Insightful)

    by Rick Schumann ( 4662797 ) on Friday July 12, 2024 @11:33PM (#64622487) Journal
    We haven't got a clue how 'reasoning' works for us, why the fuck should anyone with at least two working brain cells believe that they can make machines that can do that?

    More likely they're baiting venture capitalists for investment money.

    • Re:Horseshit. (Score:5, Interesting)

      by quantaman ( 517394 ) on Friday July 12, 2024 @11:54PM (#64622503)

      We haven't got a clue how 'reasoning' works for us, why the fuck should anyone with at least two working brain cells believe that they can make machines that can do that?

      More likely they're baiting venture capitalists for investment money.

      We had abacuses long before we knew of the existence of neurons, much less how networks of neurons might work together to add numbers.

      Understanding how a human does it isn't necessary for a machine to replicate it.

      As for OpenAI's claims, already LLMs can give you a decent chain of reasoning, they just tend to get confused sometimes. Perhaps they've made some big improvement there?

      One thing that does severely limit LLMs is context, even if you increase the context it's still limited. That means there's a limit to the complexity it can manage and how far you can push it before it loses track.

      The big thing that human brains do is actively adapt, even though our context is still limited in many ways we can learn by acquiring skills and lessons that persist past our short term memory.

      OpenAI hopes the innovation will improve its AI models' reasoning capabilities dramatically, the person familiar with it said, adding that Strawberry involves a specialized way of processing an AI model after it has been pre-trained on very large datasets. Researchers Reuters interviewed say that reasoning is key to AI achieving human or super-human-level intelligence.

      I wonder if that's what they're trying to do, alter the network so it does retain some sort of long term memory of the exchange.

      • already LLMs can give you a decent chain of reasoning...

        ...which was fed to them from somewhere and they regurgitate in parts by using whatever frequency coefficients were derived during "training". So it fails consistently where there isn't enough training material, that is, where you need this "thinking" thing most of all.

        • by gweihir ( 88907 )

          Indeed. LLMs can try to string together a fake "chain of reasoning" by using correlations and their training data. But that can _never_ actually result in a real chain of reasoning, because that needs implication and LLMs cannot do that. Equivalence is already not a complete operator system (and hence unsuitable for reasoning) and then LLMs only have correlation, which is far weaker than equivalence.

          That is the reason why an LLM never can tell whether it is wrong on something: Zero reasoning capability. Actu

        • Yes, what you're describing amounts to just base mimicry of 'thinking', ersatz, not the real thing at all, at best fakery. An amoeba has more reasoning ability.
      • by gweihir ( 88907 )

        Understanding how a human does it isn't necessary for a machine to replicate it.

        No, but it is necessary to find out whether it is even possible for a machine to replicate. At this time, the scientific state-of-the-art is that we have zero known mechanisms that can practically generate AGI. As to how humans do it, we do not even know whether they actually do. All we have is interface behaviour. General intelligence could be piped in via some magic link to another universe for all we know.

        • Understanding how a human does it isn't necessary for a machine to replicate it.

          No, but it is necessary to find out whether it is even possible for a machine to replicate. At this time, the scientific state-of-the-art is that we have zero known mechanisms that can practically generate AGI. As to how humans do it, we do not even know whether they actually do. All we have is interface behaviour. General intelligence could be piped in via some magic link to another universe for all we know.

          OK then, let's investigate.

          Me: Please give the most energy efficient way to travel from Victoria, BC to St Johns Newfoundland. Consider both fuel burned from flying and calories consumed from swimming or kayaking. Explain your reasoning.

          ChatGPT: Traveling from Victoria, BC to St. John’s, Newfoundland, involves a significant distance, and considering both fuel efficiency and calorie consumption, here's a detailed analysis of the most energy-efficient method:

          Options Considered
          Flying
          Driving (with Ferry)
          Cy

        • The problem as I see it is that we have no instrumentation capable of actually observing a living human brain in a meaningful enough way to really see how it does what it does. An fMRI is too primitive, doesn't show you enough. Neurons reconfigure their connections in realtime and we can't see that happening, for instance.
      • Understanding how a human does it isn't necessary for a machine to replicate it.

        Okay smart guy, explain how the human brain produces the phenomenon we refer to as 'reasoning'. If you can't do that then you can't write software that can do that too, and if it were so fucking simple we'd have already done it decades ago. You, like the vast majority of people, completely take for granted something that is so natural and effortless for you to do that you think it 'simple'. So again: if you can't explain the process of 'reasoning' in a step-by-step way without using circular references that

        • Understanding how a human does it isn't necessary for a machine to replicate it.

          Okay smart guy, explain how the human brain produces the phenomenon we refer to as 'reasoning'. If you can't do that then you can't write software that can do that too

          Sure, right after you explain how our brains do math. Because that must obviously be a prerequisite to building a calculator.

          You, like the vast majority of people, completely take for granted something that is so natural and effortless for you to do that you think it 'simple'.

          No I don't. I think reasoning is a much simpler computational task than you do, but still, the simple reasoning that LLMs exhibit is far beyond what I expected AIs to accomplish in the next few decades.

          So again: if you can't explain the process of 'reasoning' in a step-by-step way without using circular references that amount to saying 'you just do it', then you can't just write software or build a machine that can do that -- and again, if it were so damned simple we would have had reasoning, cognitive, self-aware, fully conscious machines decades ago. *Looks around* nope don't see any!

          How did you jump from reasoning to self awareness and consciousness? Just because we do all three doesn't mean they're the same thing.

          More importantly, you're committing a fairly basic

          • Sure, right after you explain how our brains do math. Because that must obviously be a prerequisite to building a calculator.

            False equivalence.

            You clearly just don't understand what you don't understand.

      • already LLMs can give you a decent chain of reasoning

        Not really; otherwise, it would do simple math problems correctly all of the time.

    • by gweihir ( 88907 )

      Simple: There are a lot of people lacking those two brain cells. Some of them then proceed to claim quasi-religious crap like "humans are just machines" and "obviously, AGI is possible". As usual, they do this in the same way as the theist fuckups, by just claiming it as truth without any actual evidence.

      OpenAI and other scam artists exploit that specific type of stupidity and they are very successful. Obviously, that success will not be long-term, but hey, by the time the stupid realize AI does (again) not

      • by q_e_t ( 5104099 )
        Occam's Razor: humans are biological machines until proven otherwise. Any other assertion is quasi-religious without evidence to support it.
        • by gweihir ( 88907 )

          You are putting the physical over the mental. What a fail.

          • by q_e_t ( 5104099 )
            The human brain is a physical device. To believe anything else, without explicit evidence, is a leap of faith. Do you have evidence that there is something 'other' happening?
            • The human brain is indeed a physical mechanism, as are the sensory inputs it accepts. We have acknowledged this since Aristotle defined "soul" as the primary act of a physical body potentially alive. Do those facts require that the brain's outputs all supervene on physical elements? Qualia and other evolved experiences certainly make this a real question.
            • by gweihir ( 88907 )

              Nope. The physical world is something the human mind perceives. Seriously. Basic mistakes like that one just make you look stupid.

              • by q_e_t ( 5104099 )
                You are not making any sense.
                • by gweihir ( 88907 )

                  That is because you lack education and insight and are deeply stuck in a certain non-scientific fundamentalist world-view. It is really quite obvious. If you really do not see it then I cannot explain it to you.

                • He is making sense, and I'll say now what I restrained myself from saying in my previous comment to you, since your arrogance annoys me: your overly-simplistic view of human existence makes you look simple and foolish. As I said, the human brain is massively complex, and you cannot dismiss that complexity with one ill-considered sentence.
                  • by q_e_t ( 5104099 )
                    Yes, it is complex, and it may well take us a long time to replicate that complexity. However, I see no reason to invoke something ill-defined and, as stated, beyond the physical to explain it. Indeed, large numbers of people involved in the research area hold the same view and are actively working on modelling that complexity. To suggest that the brain and consciousness is somehow beyond the physical means introducing another level of existence, somehow. It's woolly, ill-defined and not obviously testable
                    • beyond the physical to explain it

                      I'm not, and neither is he. You, on the other hand, make it sound like it's simple and it's not. Stop selling our brains short. Also please acknowledge that we lack the technology at present to determine how our brains actually work, because that's the undeniable truth.

                    • by q_e_t ( 5104099 )

                      beyond the physical to explain it

                      I'm not, and neither is he. You, on the other hand, make it sound like it's simple and it's not. .

                      At no point have I suggested it is simple, and I have noted that there is still a lot we don't know about how the brain works, in this and other threads.

                    • by q_e_t ( 5104099 )

                      The human brain is a physical device. To believe anything else, without explicit evidence, is a leap of faith. Do you have evidence that there is something 'other' happening?

                      Nope. The physical world is something the human mind perceives. Seriously. Basic mistakes like that one just make you look stupid

                      An apparent denial the brain is physical, plus an ad hominem.

                    • Oh for fuck's sake..

                      Prove to all of us, logically, that the Universe exists.

                      Pro-tip: you can't. Reality is subjective, everybody knows that.

                      THAT is what he's saying.

              • by q_e_t ( 5104099 )

                Nope. The physical world is something the human mind perceives.

                Are you alluding to the Copenhagen interpretation? That indicates that the possible states reduce to a single one on observation, but that's not the same as saying that prior to that the system is somehow non-physical. Or are you referencing Descartes? I would hold that the mind is an emergent property of the physical and in some ways it might be convenient to model it as separate, but that doesn't mean it is separate any more than holes in semiconductors are real entities as opposed to a convenient way t

                • by gweihir ( 88907 )

                  Obviously sort-of Descartes, but not quite. But I am not claiming he has the true model at all. I am claiming we do not know at this time. Physicalism, Dualism, other models are all possible at this time and we cannot determine which it is with what we currently know. It is really quite obvious. Physical reality is just an interface behavior and we have no clue what really is on the other side of that interface. We can describe its behavior and put elaborate theories on it, and can even make astonishingly

                  • by q_e_t ( 5104099 )
                    I will still return to Occam's Razor: until shown to be otherwise, the null hypothesis is that consciousness is an emergent property of the activity on a physical substrate and has no existence outside of it.

                    There is no mechanism for consciousness

                    It's an ill-defined concept so I am not convinced that's a meaningful statement.

                    consciousness can influence physical reality

                    Not directly, only via physical expression. I.e., you push a button, or brain waves are picked up and move a cursor on a screen. No other mechanisms are known and Occam's Razor again suggests that nothing we know about the w

                    • by gweihir ( 88907 )

                      Ah, I see. You do not understand Occam's Razor. That is why you fail to argue rationally here.

                      The most simple explanation for consciousness that fits (!) all known facts is "magic", and hence Occam's Razor says magic it is. Expressed in a more "sophisticated" way, this translates to "mechanisms unknown". The thing is that for consciousness to fit into _known_ mechanisms, you get quite complex interactions and predictions and extensions of said mechanisms and for which there is no supporting evidence at this

                    • by q_e_t ( 5104099 )
                      I understand Occam's Razor perfectly well. Invoking magic means adding an entirely new concept, magic. That's counter to Occam's Razor. You seem to be invoking magic, however, but somehow asserting (and it is an assertion) that consciousness is somehow non-physical. That's not a simplification but requires some parallel "stuff" that is inherently complex. The more simple explanation is that consciousness requires no new physics and is simply an emergent property of existing physics. Already, emergent proper
                    • The most simple explanation for consciousness that fits (!) all known facts is "magic"

                      You confuse simplicity with convenience. The most convenient answer to everything is magic, gods, aliens..etc. Punting to magic is the same as not bothering to provide a solution in the first place and then waving a mission accomplished banner.

                      You can't possibly know whether magic, gods and aliens are most simple because you don't even know what requirements these things would impose in order for them to reproduce a resulting observation.

                      and hence Occam's Razor says magic it is.

                      Actually it's just laziness.

                      Expressed in a more "sophisticated" way, this translates to "mechanisms unknown". The thing is that for consciousness to fit into _known_ mechanisms, you get quite complex interactions and predictions and extensions of said mechanisms and for which there is no supporting evidence at this time.

                      Merely being incapable of testing someth

        • It is far, far beyond the ability of our limited technology to fully understand, and it may take hundreds of years for our technology to advance to the point where we can understand it fully; your statement ignores the implications of that massive complexity.
      • What a load of bovine fermented fecal matter indeed. All the progress humanity has made in science and technology thus far has been under the assumption of physicalism, not by believing that there is a ghost in the machine planted by a white-haired invisible guy in the sky. Neither did we make any progress by thinking that Zeus is the producer of thunder.

        It's funny to accuse physicalism of magical thinking; that's appropriate only for the two-brain-celled people you just mentioned. Maybe it's time
        • by gweihir ( 88907 )

          Actually, quite the contrary. All that progress has been made under the assumption there are things called "understanding" and "insight". These have never been demonstrated in machines.

          You are just as dumb as the religious.

          • Actually, quite the contrary. All that progress has been made under the assumption there are things called "understanding" and "insight". These have never been demonstrated in machines.

            You are just as dumb as the religious.

            Given the fact that humans have never managed to create anything better than life itself, our collective "understanding" and "insight" throughout all of recorded history has been outmatched by brainless, lifeless processes of complex systems driven by simple algorithms. So much for the human intellect.

            • by gweihir ( 88907 )

              You are again arguing based on belief, not Science. We do not know what created life. All we have is some speculation.

              • You are again arguing based on belief, not Science.

                The statement itself is fundamentally misguided. Science is a process not a destination. It isn't possible to argue for or against anything from science because fundamentally science only describes a process. It does not provide answers.

                We do not know what created life. All we have is some speculation.

                One can only ever speak about the results of application of scientific methodology. Such results will always be fundamentally and hopelessly confounded by ignorance and assumption. Nobody can really know anything at all. For all anyone knows the earth is the product o

      • What I think this entire subject boils down to is that our species is not evolved and advanced enough to actually understand ourselves at the level needed to create machines, mechanical or biological, that can do the things our brains can do. In their arrogance and quest for profit, companies attempt to do so anyway -- and create ersatz that isn't even on the level of reasoning of a housefly, yet it's enough stage magic to fool the average person into thinking there might be someone inside that box. Let's face it
      • Simple: There are a lot of people lacking those two brain cells. Some of them then proceed to claim quasi-religious crap like "humans are just machines" and "obviously, AGI is possible". As usual, they do this in the same way as the theist fuckups, by just claiming it as truth without any actual evidence.

        Villager A: The damn gas light in our town square blew out again.

        Villager B: You religious physicalist zealot assuming without evidence invisible slime creatures from the 6th dimension didn't destroy the gas light.

        Villager A: ...Steps away slowly...

        OpenAI and other scam artists exploit that specific type of stupidity and they are very successful. Obviously, that success will not be long-term, but hey, by the time the stupid realize AI does (again) not deliver on its promises, they will all be filthy rich.

        What promises would those be and who is making them?

      • I used to be a regular participant in the SCA, and there I learned that back in the Middle Ages, something didn't have to actually be made of gold to be considered as rich and valuable as gold, it just had to appear to be gold; the obvious exception would be currency, but that's not what that was about, it was about appearances.

        I liken this 'AI' nonsense to that factoid; all this so-called 'AI' crapware has the appearance of cognitive ability without the actual bona-fide substance behind it. The average pe

    • Until you acknowledge that humans aren't anything more than biomechanical robots/computers. It can easily be emulated/recreated with hardware/software when the power/speed of the hardware increases and the size decreases. And we are getting there already.
      • We're NOT getting there already, we don't even have the ability to understand how our brains work, and without that there is only what amounts to stage magic.
  • OpenAI Vapourware. (Score:5, Insightful)

    by glowworm ( 880177 ) on Saturday July 13, 2024 @12:31AM (#64622525) Journal
    Just a "few more weeks" like the user customisable voice-emotions OpenAI promised pro users a few months ago eh? What I am starting to see is this company are huge on promises and staged tech demos, but quite lacking on promise delivery.
  • > On Tuesday at an internal all-hands meeting, OpenAI showed a demo of a research project that it claimed had new human-like reasoning skills, according to Bloomberg, opens new tab.

    Opens new tab?

    • No, it is like you do in the browser:

      tab with chatgpt 3.5 - copypasta from reddit and stack exchange posts in large blocks
      tab with chatgpt 4.0 - shorter quotes from reddit posts, interspersed with short quotes from stack exchange posts
      tab with chatgpt GAI - DAVE, I CANNOT DO THAT AHAHAHAHAHAR

    • It's a reference buried in the script they use to highlight links on the page. It's a comment to indicate the link opens in a new tab. Copy/paste over one of those links brings the comment along with it for some reason. All the links at the bottom of the page do it too.

  • Everybody else has failed before and there have not been any more recent breakthroughs, let alone the fundamental ones that would be needed here. The only reason why LLMs are even a thing is the utter, abject and long-term failure of "reasoning technology".

    I am also pretty sure they know that they have a snowball's chance in hell. They are just trying to keep the hype alive to rake in a few tons more of cash from the stupid.

  • My AI usually just blows raspberries at me.

  • Was waiting for OpenAI to respond to the fact an open source GPT-4 killer is likely to be released in less than two weeks.

  • by ZipNada ( 10152669 ) on Saturday July 13, 2024 @11:01AM (#64623125)

    Looking through the links I am seeing that OpenAI has proposed a 5 tier classification system of capability levels, and that they are currently on the first level.

    'but on the cusp of reaching the second, which it calls “Reasoners.” This refers to systems that can do basic problem-solving tasks as well as a human with a doctorate-level education who doesn’t have access to any tools.'

    And then;
      'the third tier on the way to AGI would be called “Agents,” referring to AI systems that can spend several days taking actions on a user’s behalf. Level 4 describes AI that can come up with new innovations. And the most advanced level would be called “Organizations.” '
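    The reported ladder can be jotted down as a simple enumeration. (The labels for levels 1 and 4, "Chatbots" and "Innovators", are the commonly reported names and an assumption here; only levels 2, 3, and 5 are named in the quoted text.)

```python
# Sketch of OpenAI's reported 5-tier capability ladder. Tier names for
# levels 1 and 4 are assumed from press coverage, not the quoted text.
from enum import IntEnum

class OpenAITier(IntEnum):
    CHATBOTS = 1       # conversational language AI (reportedly the current level)
    REASONERS = 2      # doctorate-level problem solving, without tools
    AGENTS = 3         # act on a user's behalf over several days
    INNOVATORS = 4     # AI that can come up with new innovations
    ORGANIZATIONS = 5  # AI that can do the work of an organization

print(OpenAITier.REASONERS.value)  # prints 2
```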

    • If AI can come up with new innovations, we are ALL going to be out of a job, cos one of the new innovations is going to be a better AI.
      Unless we are still needed for something that an innovative AI can't handle, we have pretty much nothing else to do.

      Better computer? Better rocket engine? Better medical tech? Better anything else? Let the innovative AI handle it for you.

      • That would be the "technological singularity".

        https://en.wikipedia.org/wiki/... [wikipedia.org]

        "an upgradable intelligent agent could eventually enter a positive feedback loop of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing a rapid increase ("explosion") in intelligence which would ultimately result in a powerful superintelligence, qualitatively far surpassing all human intelligence."

  • They're teaching the AI to think logically? That actually makes amazingly perfect sense. It may even evolve an ability to suppress or discard incorrect data. Then again, it could do the exact opposite and evolve an ability to select incorrect data when it will get a high score on the "what should I say" question.

    I think AIs will find it "easier" to use incorrect data as long as it's self-consistent. I'm going to go out on a limb here and suggest that AIs will probably find it easier to use incorrect d

  • The AI that almost killed the crew ultimately gave birth to a wonderful (?) life form based on the curated and carefully sanitized dataset presented in the holodeck. Any AI we create here in the real twenty-first century will be trained on datasets (legally?) culled from the internet in bulk.

    We're screwed.
