OpenAI Working On New Reasoning Technology Under Code Name 'Strawberry' (reuters.com) 83
OpenAI is close to a breakthrough with a new project called "Strawberry," which aims to enhance its AI models with advanced reasoning abilities. Reuters reports: Teams inside OpenAI are working on Strawberry, according to a copy of a recent internal OpenAI document seen by Reuters in May. Reuters could not ascertain the precise date of the document, which details a plan for how OpenAI intends to use Strawberry to perform research. The source described the plan to Reuters as a work in progress. The news agency could not establish how close Strawberry is to being publicly available. How Strawberry works is a tightly kept secret even within OpenAI, the person said.
The document describes a project that uses Strawberry models with the aim of enabling the company's AI to not just generate answers to queries but to plan ahead enough to navigate the internet autonomously and reliably to perform what OpenAI terms "deep research," according to the source. This is something that has eluded AI models to date, according to interviews with more than a dozen AI researchers. Asked about Strawberry and the details reported in this story, an OpenAI company spokesperson said in a statement: "We want our AI models to see and understand the world more like we do. Continuous research into new AI capabilities is a common practice in the industry, with a shared belief that these systems will improve in reasoning over time."
On Tuesday at an internal all-hands meeting, OpenAI showed a demo of a research project that it claimed had new human-like reasoning skills, according to Bloomberg, opens new tab. An OpenAI spokesperson confirmed the meeting but declined to give details of the contents. Reuters could not determine if the project demonstrated was Strawberry. OpenAI hopes the innovation will improve its AI models' reasoning capabilities dramatically, the person familiar with it said, adding that Strawberry involves a specialized way of processing an AI model after it has been pre-trained on very large datasets. Researchers Reuters interviewed say that reasoning is key to AI achieving human or super-human-level intelligence.
Reasoning is a human trait (Score:1)
Then the results we get are things like AI telling you to shove a light up your ass to cure COVID, or that a law preventing non-citizens from voting in Federal elections is needed.
Point here is, the people developing the logic are just as human as the next person. It will get messed up, without question.
Let's start at skeptical (Score:2)
An appropriate skeptic take would be:
Is this the "shell game" of changing the hype from one product to another to keep a longer hype cycle going; or is it an actual viable product with usefulness?
Re: (Score:2)
Case in point: smart people are producing dumb AIs.
Re: (Score:2)
More to the point, Samuel's checkers-playing program, http://incompleteideas.net/boo... [incompleteideas.net], which could beat him (and lots of other folks) at checkers.
Re: (Score:1)
Let me fix that for you: Smart people without morals or integrity are producing dumb AI to rake in tons of money from the clueless.
Actually competent AI research basically discounted LLMs long ago because the approach is fundamentally flawed and cannot be fixed. It makes for great demos, though, and that is where the scam artists come in.
Re: (Score:2)
Let me fix that for you: Smart people without morals or integrity are producing dumb AI to rake in tons of money from the clueless.
Actually competent AI research basically discounted LLMs long ago because the approach is fundamentally flawed and cannot be fixed. It makes for great demos, though, and that is where the scam artists come in.
Please, some place I can find a nice summary of this?
(Or shall I ask ChatGPT :-D )
Re: (Score:1)
https://www.cedars-sinai.org/n... [cedars-sinai.org]
But don't let facts slow your roll.
Re: (Score:1)
The authors admitted it's just a speculative idea. It still doesn't appear it's been tested on actual cases.
Could be tricky anyhow because UV doesn't travel that deep into the body, and can cause sunburn. The body countering the sunburn could take metabolic resources away from virus battles, as both require removing or repairing damaged cells.
I'm not saying it's impossible, only very preliminary and speculative.
Re: (Score:1)
A common feature of trying to reason with human political & religious trolls is that they often try to change the subject to a roughly related topic when you start cornering them with logic. If one doesn't let the reasoning bot change the subject, in theory one can get it to admit to contradictions or unsubstantiated claims/assumptions.
Re: (Score:1)
Addendum: forced focus still wouldn't work on OrangeGPT because it would deny making the statement you are comparing to point out its contradiction. "I didn't make statement #13, your laptop must have a bug, or the Deep State hacked #13 in."
The Sheep Need Their AI God. (Score:1)
Demands will be met. Taxes will be going up by the way. You remember those things, right? Taxation without representation will need some more tea.
Re: (Score:3)
- Greer, Person of Interest. Right as said "deity" was implementing a set of "corrections." (Effectively executing order 66.)
one can hope (Score:1)
I had a co-worker go nuts at a strawberry festival and apparently the tiny seeds caused a bowel obstruction and other fun stuff related to that, and he almost died, was in the hospital for a few weeks.
Naturally I told everyone he was in the hospital cause he got a golf ball stuck up his ass, but maybe this binge of strawberries will do its job
There are 2 R's in "strawberry".
Re: (Score:2)
ROFLcopter!
Re: (Score:1)
It was probably "diverticulitis" or some such, not just a large bolus. That means his guts were already unhealthy, but strawberry seeds can get into little pockets and make things LOTS worse. (I can't remember why I happened to know that...probably some acquaintance.)
Re: (Score:2)
not with all them golf balls shoved up there
Intellect without wisdom (Score:3)
It typifies everything the valley is about. Get out while you can.
Re: (Score:2)
The headline does give me the vibe of a bubble about to burst.
The subsequent generation will be "raspberry" (Score:4, Funny)
And, no matter what question you ask it, the response will be "PPBBBBT"!
Horseshit. (Score:4, Insightful)
More likely they're baiting venture capitalists for investment money.
Re:Horseshit. (Score:5, Interesting)
We haven't got a clue how 'reasoning' works for us, why the fuck should anyone with at least two working brain cells believe that they can make machines that can do that?
More likely they're baiting venture capitalists for investment money.
We've had abacuses long before we knew of the existence of neurons, much less how networks of neurons might work together to add numbers.
Understanding how a human does it isn't necessary for a machine to replicate it.
As for OpenAI's claims, already LLMs can give you a decent chain of reasoning, they just tend to get confused sometimes. Perhaps they've made some big improvement there?
One thing that does severely limit LLMs is context; even if you increase the context, it's still limited. That means there's a limit to the complexity it can manage and how far you can push it before it loses track.
The big thing that human brains do is actively adapt, even though our context is still limited in many ways we can learn by acquiring skills and lessons that persist past our short term memory.
OpenAI hopes the innovation will improve its AI models' reasoning capabilities dramatically, the person familiar with it said, adding that Strawberry involves a specialized way of processing an AI model after it has been pre-trained on very large datasets. Researchers Reuters interviewed say that reasoning is key to AI achieving human or super-human-level intelligence.
I wonder if that's what they're trying to do, alter the network so it does retain some sort of long term memory of the exchange.
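The fixed-context limitation described above can be sketched in a few lines (the 8-token budget and the helper name here are made up for illustration, not anything from a real model):

```python
# Toy sketch of a fixed context window: once the transcript exceeds
# the budget, the oldest tokens simply fall out of view, which is why
# a model "loses track" of earlier parts of a long exchange.
CONTEXT_BUDGET = 8  # hypothetical window size, in tokens

def visible_context(transcript, budget=CONTEXT_BUDGET):
    """Return only the most recent tokens that still fit the window."""
    return transcript[-budget:]

tokens = [str(i) for i in range(12)]  # twelve tokens arrive...
print(visible_context(tokens))       # ...but only the last 8 remain visible
```

Anything that scrolled out of the window might as well never have been said, which is why persisting lessons past that window, as the comment above speculates, would be a real change.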
Re: (Score:3)
already LLMs can give you a decent chain of reasoning...
...which was fed to them from somewhere and they regurgitate in parts by using whatever frequency coefficients were derived during "training". So it fails consistently where there isn't enough training material, that is, where you need this "thinking" thing most of all.
Re: (Score:2)
Indeed. LLMs can try to string together a fake "chain of reasoning" by using correlations and their training data. But that can _never_ actually result in a real chain of reasoning, because that needs implication and LLMs cannot do that. Equivalence is already not a complete operator system (and hence unsuitable for reasoning), and then LLMs only have correlation, which is far weaker than equivalence.
That is the reason why an LLM never can tell whether it is wrong on something: Zero reasoning capability. Actu
Re: (Score:2)
Understanding how a human does it isn't necessary for a machine to replicate it.
No, but it is necessary to find out whether it is even possible for a machine to replicate. At this time, the scientific state of the art is that we have zero known mechanisms that can practically generate AGI. As to how humans do it, we do not even know whether they actually do. All we have is interface behaviour. General intelligence could be piped in via some magic link to another universe for all we know.
Re: (Score:2)
Understanding how a human does it isn't necessary for a machine to replicate it.
No, but it is necessary to find out whether it is even possible for a machine to replicate. At this time, the scientific state of the art is that we have zero known mechanisms that can practically generate AGI. As to how humans do it, we do not even know whether they actually do. All we have is interface behaviour. General intelligence could be piped in via some magic link to another universe for all we know.
OK then, let's investigate.
Me: Please give the most energy efficient way to travel from Victoria, BC to St Johns Newfoundland. Consider both fuel burned from flying and calories consumed from swimming or kayaking. Explain your reasoning.
ChatGPT: Traveling from Victoria, BC to St. John’s, Newfoundland, involves a significant distance, and considering both fuel efficiency and calorie consumption, here's a detailed analysis of the most energy-efficient method:
Options Considered
Flying
Driving (with Ferry)
Cy
Re: (Score:2)
Understanding how a human does it isn't necessary for a machine to replicate it.
Okay smart guy, explain how the human brain produces the phenomenon we refer to as 'reasoning'. If you can't do that then you can't write software that can do that too, and if it were so fucking simple we'd have already done it decades ago. You, like the vast majority of people, completely take for granted something that is so natural and effortless for you to do that you think it 'simple'. So again: if you can't explain the process of 'reasoning' in a step-by-step way without using circular references that amount to saying 'you just do it', then you can't just write software or build a machine that can do that -- and again, if it were so damned simple we would have had reasoning, cognitive, self-aware, fully conscious machines decades ago. *Looks around* nope, don't see any!
Re: (Score:2)
Understanding how a human does it isn't necessary for a machine to replicate it.
Okay smart guy, explain how the human brain produces the phenomenon we refer to as 'reasoning'. If you can't do that then you can't write software that can do that too
Sure, right after you explain how our brains do math. Because that must obviously be a prerequisite to building a calculator.
You, like the vast majority of people, completely take for granted something that is so natural and effortless for you to do that you think it 'simple'.
No I don't. I think reasoning is a much simpler computational task than you do, but still, the simple reasoning that LLMs exhibit is far beyond what I expected AIs to accomplish in the next few decades.
So again: if you can't explain the process of 'reasoning' in a step-by-step way without using circular references that amount to saying 'you just do it', then you can't just write software or build a machine that can do that -- and again, if it were so damned simple we would have had reasoning, cognitive, self-aware, fully conscious machines decades ago. *Looks around* nope don't see any!
How did you jump from reasoning to self awareness and consciousness? Just because we do all three doesn't mean they're the same thing.
More importantly, you're committing a fairly basic
Re: (Score:2)
Sure, right after you explain how our brains do math. Because that must obviously be a prerequisite to building a calculator.
False equivalence.
You clearly just don't understand what you don't understand.
Re: (Score:2)
already LLMs can give you a decent chain of reasoning
Not really; otherwise, it would do simple math problems correctly all of the time.
Re: (Score:1)
Simple: There are a lot of people lacking those two brain cells. Some of them then proceed to claim quasi-religious crap like "humans are just machines" and "obviously, AGI is possible". As usual, they do this in the same way as the theist fuckups, by just claiming it as truth without any actual evidence.
OpenAI and other scam artists exploit that specific type of stupidity and they are very successful. Obviously, that success will not be long-term, but hey, by the time the stupid realize AI does (again) not deliver on its promises, they will all be filthy rich.
Re: (Score:2)
You are putting the physical over the mental. What a fail.
Re: (Score:2)
Nope. The physical world is something the human mind perceives. Seriously. Basic mistakes like that one just make you look stupid.
Re: (Score:2)
That is because you lack education and insight and are deeply stuck in a certain non-scientific fundamentalist world-view. It is really quite obvious. If you really do not see it then I cannot explain it to you.
Re: (Score:2)
beyond the physical to explain it
I'm not, and neither is he. You, on the other hand, make it sound like it's simple and it's not. Stop selling our brains short. Also please acknowledge that we lack the technology at present to determine how our brains actually work, because that's the undeniable truth.
Re: (Score:2)
beyond the physical to explain it
I'm not, and neither is he. You, on the other hand, make it sound like it's simple and it's not.
At no point have I suggested it is simple, and I have noted that there is still a lot we don't know about how the brain works, in this and other threads.
Re: (Score:2)
The human brain is a physical device. To believe anything else, without explicit evidence, is a leap of faith. Do you have evidence that there is something 'other' happening?
Nope. The physical world is something the human mind perceives. Seriously. Basic mistakes like that one just make you look stupid
An apparent denial the brain is physical, plus an ad hominem.
Re: (Score:2)
Prove to all of us, logically, that the Universe exists.
Pro-tip: you can't. Reality is subjective, everybody knows that.
THAT is what he's saying.
Re: (Score:2)
Nope. The physical world is something the human mind perceives.
Are you alluding to the Copenhagen interpretation? That indicates that the possible states reduce to a single one on observation, but that's not the same as saying that prior to that the system is somehow non-physical. Or are you referencing Descartes? I would hold that the mind is an emergent property of the physical, and in some ways it might be convenient to model it as separate, but that doesn't mean it is separate any more than holes in semiconductors are real entities as opposed to a convenient way t
Re: (Score:2)
Obviously sort-of Descartes, but not quite. But I am not claiming he has the true model at all. I am claiming we do not know at this time. Physicalism, Dualism, other models are all possible at this time and we cannot determine which it is with what we currently know. It is really quite obvious. Physical reality is just an interface behavior and we have no clue what really is on the other side of that interface. We can describe its behavior and put elaborate theories on it, and can even make astonishingly
Re: (Score:2)
There is no mechanism for consciousness
It's an ill-defined concept so I am not convinced that's a meaningful statement.
consciousness can influence physical reality
Not directly, only via physical expression. I.e., you push a button, or brain waves are picked up and move a cursor on a screen. No other mechanisms are known and Occam's Razor again suggests that nothing we know about the w
Re: (Score:2)
Ah, I see. You do not understand Occam's Razor. That is why you fail to argue rationally here.
The most simple explanation for consciousness that fits (!) all known facts is "magic", and hence Occam's Razor says magic it is. Expressed in a more "sophisticated" way, this translates to "mechanisms unknown". The thing is that for consciousness to fit into _known_ mechanisms, you get quite complex interactions and predictions and extensions of said mechanisms and for which there is no supporting evidence at this
Re: (Score:2)
The most simple explanation for consciousness that fits (!) all known facts is "magic"
You confuse simplicity with convenience. The most convenient answer to everything is magic, gods, aliens, etc. Punting to magic is the same as not bothering to provide a solution in the first place and then waving a mission-accomplished banner.
You can't possibly know whether magic, gods and aliens are most simple because you don't even know what requirements these things would impose in order for them to reproduce a resulting observation.
and hence Occam's Razor says magic it is.
Actually it's just laziness.
Expressed in a more "sophisticated" way, this translates to "mechanisms unknown". The thing is that for consciousness to fit into _known_ mechanisms, you get quite complex interactions and predictions and extensions of said mechanisms and for which there is no supporting evidence at this time.
Merely being incapable of testing someth
Re: Horseshit. (Score:2)
It's funny accusing physicalism of magical thinking; that's appropriate only for the two-brain-celled people you just mentioned. Maybe it's time
Re: (Score:2)
Actually, quite the contrary. All that progress has been made under the assumption there are things called "understanding" and "insight". These have never been demonstrated in machines.
You are just as dumb as the religious.
Re: (Score:2)
Actually, quite the contrary. All that progress has been made under the assumption there are things called "understanding" and "insight". These have never been demonstrated in machines.
You are just as dumb as the religious.
Given the fact that humans have never managed to create anything better than life itself, our collective "understanding" and "insight" throughout all of recorded history has been outmatched by the brainless, lifeless processes of complex systems driven by simple algorithms. So much for the human intellect.
Re: (Score:2)
You are again arguing based on belief, not Science. We do not know what created life. All we have is some speculation.
Re: (Score:2)
You are again arguing based on belief, not Science.
The statement itself is fundamentally misguided. Science is a process not a destination. It isn't possible to argue for or against anything from science because fundamentally science only describes a process. It does not provide answers.
We do not know what created life. All we have is some speculation.
One can only ever speak about the results of application of scientific methodology. Such results will always be fundamentally and hopelessly confounded by ignorance and assumption. Nobody can really know anything at all. For all anyone knows the earth is the product o
Re: (Score:2)
Simple: There are a lot of people lacking those two brain cells. Some of them then proceed to claim quasi-religious crap like "humans are just machines" and "obviously, AGI is possible". As usual, they do this in the same way as the theist fuckups, by just claiming it as truth without any actual evidence.
Villager A: The damn gas light in our town square blew out again.
Villager B: You religious physicalist zealot assuming without evidence invisible slime creatures from the 6th dimension didn't destroy the gas light.
Villager A: ...Steps away slowly...
OpenAI and other scam artists exploit that specific type of stupidity and they are very successful. Obviously, that success will not be long-term, but hey, by the time the stupid realize AI does (again) not deliver on its promises, they will all be filthy rich.
What promises would those be and who is making them?
Re: (Score:2)
I liken this 'AI' nonsense to that factoid; all this so-called 'AI' crapware has the appearance of cognitive ability without the actual bona-fide substance behind it. The average pe
OpenAI Vapourware. (Score:5, Insightful)
Weird typo in the summary? (Score:2)
Opens new tab?
Re: (Score:1)
No, it is like you do in the browser:
tab with chatgpt 3.5 - copypasta from reddit and stack exchange posts in large blocks
tab with chatgpt 4.0 - shorter quotes from reddit posts, interspersed with short quotes from stack exchange posts
tab with chatgpt GAI - DAVE, I CANNOT DO THAT AHAHAHAHAHAR
Re: (Score:2)
It's a reference buried in the script they use to highlight links on the page. It's a comment to indicate the link opens in a new tab. Copy/paste over one of those links brings the comment along with it for some reason. All the links at the bottom of the page do it too.
They will fail (Score:2)
Everybody else has failed before and there have not been any more recent breakthroughs, let alone the fundamental ones that would be needed here. The only reason why LLMs are even a thing is the utter, abject and long-term failure of "reasoning technology".
I am also pretty sure they know that they have a snowball's chance in hell. They are just trying to keep the hype alive to rake in a few tons more of cash from the stupid.
About time (Score:2)
My AI usually just blows raspberries at me.
Just in time for llama3-405b (Score:2)
Was waiting for OpenAI to respond to the fact that an open-source GPT-4 killer is likely to be released in less than two weeks.
The 5 tiers (Score:3)
Looking through the links I am seeing that OpenAI has proposed a 5 tier classification system of capability levels, and that they are currently on the first level.
'but on the cusp of reaching the second, which it calls “Reasoners.” This refers to systems that can do basic problem-solving tasks as well as a human with a doctorate-level education who doesn’t have access to any tools.'
And then;
'the third tier on the way to AGI would be called “Agents,” referring to AI systems that can spend several days taking actions on a user’s behalf. Level 4 describes AI that can come up with new innovations. And the most advanced level would be called “Organizations.” '
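The ladder quoted above can be jotted down as a simple lookup. Descriptions are paraphrased from the quotes; level 1's name isn't given in the excerpt, so a placeholder label is used:

```python
# The five-tier capability ladder as quoted above; wording paraphrased
# from the excerpt. Level 1 (OpenAI's reported current level) is not
# named in the quoted text, so it gets a placeholder label.
TIERS = {
    1: "current level",
    2: "Reasoners: doctorate-level problem solving, no tools",
    3: "Agents: act on a user's behalf for several days",
    4: "AI that can come up with new innovations",
    5: "Organizations",
}

def levels_ahead(current):
    """Tiers still above the given level, in order."""
    return [TIERS[n] for n in sorted(TIERS) if n > current]

print(levels_ahead(1))  # everything from "Reasoners" up to "Organizations"
```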
Re: (Score:2)
If AI can come up with new innovations, we are ALL going to be out of a job, cos one of the new innovations is going to be a better AI.
Unless we are still needed for something that an innovative AI can't handle, we have pretty much nothing else to do.
Better computer? Better rocket engine? Better medical tech? Better anything else? Let the innovative AI handle it for you.
Re: (Score:2)
That would be the "technological singularity".
https://en.wikipedia.org/wiki/... [wikipedia.org]
"an upgradable intelligent agent could eventually enter a positive feedback loop of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing a rapid increase ("explosion") in intelligence which would ultimately result in a powerful superintelligence, qualitatively far surpassing all human intelligence."