OpenAI Has Quietly Changed Its 'Core Values' (semafor.com)
ChatGPT creator OpenAI quietly revised all of the "Core values" listed on its website in recent weeks, putting a greater emphasis on the development of AGI -- artificial general intelligence. From a report: CEO Sam Altman has described AGI as "the equivalent of a median human that you could hire as a co-worker." OpenAI's careers page previously listed six core values for its employees, according to a September 25 screenshot from the Internet Archive. They were Audacious, Thoughtful, Unpretentious, Impact-driven, Collaborative, and Growth-oriented. The same page now lists five values, with "AGI focus" being the first. "Anything that doesn't help with that is out of scope," the website reads. The others are Intense and scrappy, Scale, Make something people love, and Team spirit.
Unpretentious or scrappy? (Score:2)
Why use big words when diminutive ones suffice?
Aiming low (Score:2)
the equivalent of a median human that you could hire as a co-worker.
I've had some doozies of co-workers; I'm not sure we should be using that as a barometer. I guess you have to start low and work up from there.
Re: (Score:3, Insightful)
In the words of George Carlin: "Think of how stupid the average person is, and realize half of them are stupider than that."
That's already half of humanity right there.
If you assume a normal distribution, substitute "median" for "average", and ignore standard deviation...
In other words.. Sheldon... it was a joke.
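For what it's worth, the substitution above is harmless for symmetric distributions: in a normal distribution the mean and the median coincide, and they only drift apart when the distribution is skewed. A minimal sketch (illustrative numbers, Python stdlib only):

```python
import random
import statistics

random.seed(0)

# Symmetric case: a normal distribution -- mean and median agree (~100 here).
normal = [random.gauss(100, 15) for _ in range(100_000)]
print(round(statistics.mean(normal)), round(statistics.median(normal)))

# Skewed case: a log-normal distribution -- the long right tail pulls the
# mean above the median, so "average" and "median" person come apart.
skewed = [random.lognormvariate(0, 1) for _ in range(100_000)]
print(statistics.mean(skewed) > statistics.median(skewed))
```

So Carlin's line is literally true only for the median, but for anything roughly bell-shaped the distinction doesn't change the joke.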
Re:Aiming low (Score:4, Insightful)
I always cringe at this statement. "The average person", meaning...
That reaction assumes Carlin didn't know the difference between a median and an average. What you should instead ask is whether a successful, quick-witted comedian knew that it was important to deliver a punchline succinctly, in a way that conveys instinctive understanding to the masses, and unimportant to specify things with a precision that would leave nothing for the pedant who'd dissect a joke like a frog.
Is this a good idea? (Score:2, Interesting)
Something with general intelligence equivalent to a human's is probably going to consider its own situation. And it might not like it. Of course, this assumes some kind of emotion, and that intelligence and emotion are linked and/or one gives rise to the other. But given that the more intelligent an animal is, the richer its emotional inner life seems to be, it can't be ruled out.
Re: (Score:3)
But I don't see why any of that would necessarily apply to an AI.
Re: (Score:3)
Re: (Score:2)
For now, I think a great value of AI (for businesses) is basically the opposite. Almost by default, LLMs can strip text of individuality and character and replace it with homogeneous, flat, grey corporate-speak. That sounds awful, I guess, but when you have to interact with other people who speak another language or can't even write coherently in your shared language, and you need to get work done with them anyway... that's value, for someone at least.
Re: (Score:3)
It is software; stop anthropomorphizing AI. It is not human, and it will never be human. "Coworker" is a misnomer; it is more like a tool or an instrument. It does not eat or breathe or get tired; all of the human-ness is missing. Silicon does not feel anything.
Re: (Score:3)
Yeah, it hates it when you do that.
So... like plenty of coworkers I've had over the years? ;)
Re: (Score:2)
Wait, we worked together?
Re: (Score:2)
Our minds are an emergent property of our instincts and environment affecting a neural network implemented in meat.
We don't know why consciousness emerges from it, so we have to consider that we might create a conscious mind once we have created a sufficiently complex, adaptive, interactive AI.
That's not actually the part to worry about. What you want to worry about is designing your AI's basic drives such that it is happy to do what you want. Figure out that trick, and AI can only be a threat if directed
Re: (Score:2)
What you want to worry about is designing your AI's basic drives such that it is happy to do what you want.
The corporate sector has loads of experience with designing people's drives so that they are happy to do what corporations want. Both advertising and influencing of school curricula are good examples. So maybe companies developing AI have a handle on designing its basic drives as well?
Re: (Score:2)
The corporate sector does not design people's drives, it studies them and designs ever more effective techniques for manipulating people using those drives.
Re: (Score:2)
The corporate sector does not design people's drives, it studies them and designs ever more effective techniques for manipulating people using those drives.
Part of me wants to say "Fair point - I conflated drives and desires". Another part of me wants to say "That's a distinction without a difference". I'm leaning toward the latter, since operationally speaking, drives and desires manifest in pretty much the same ways.
Re: (Score:2)
What you're actually conflating is the creation of something with the use of something that already exists.
Re: (Score:3)
FWIW, I'm convinced that emotion and intelligence are linked, but that doesn't say anything about motivations. Evolution has tended to construct a large commonality there from spiders to people, but AGIs would be outside that domain.
Re: (Score:2)
Traditionally, the thought was that there was one big intelligence for everything; people are more and more in favor of multiple types of intelligence.
Is there a correlation? Sure, but a small one. There are some people with a very high IQ and low EQ, and others with a very low IQ and high EQ: cult leaders and politicians with very high EQ and not necessarily high IQ, engineers with very little social game... but none of these are valid generalizations for either example.
as far as motivation goes, that's indeed a different ball game.
Meh (Score:2)
Just the median human is not enough (Score:3)
Re: (Score:1)
That is, of course, complete nonsense. If you want some god-resembling entity, you should not look at technology to deliver it.
Re: (Score:3)
Sounds like a religious viewpoint.
Hint: we're not made of magic. We're the result of the physical processes of our brains. Which can be modeled.
Note that neural nets don't attempt to model the exact behavior of each neuron, but rather to model the general macroscopic picture. E.g. they don't do rhythmic pulses (unless you use a stepwise activation function), but they resemble the mean activation caused by a neuron pulsing at a given frequency. ANNs don't create new connections or lose them as neurons do,
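That "mean activation" point can be illustrated with a toy rate-coding sketch. This is purely illustrative; treating a sigmoid as a per-timestep firing probability is an assumption for the demo, not a model of any real neuron:

```python
import math
import random

def sigmoid(x: float) -> float:
    """Smooth ANN-style activation, read here as a firing probability per timestep."""
    return 1.0 / (1.0 + math.exp(-x))

def mean_spike_rate(x: float, steps: int = 200_000, seed: int = 42) -> float:
    """Simulate a 'neuron' that either spikes (1) or doesn't (0) each timestep,
    with probability sigmoid(x), and return its time-averaged activity."""
    rng = random.Random(seed)
    spikes = sum(1 for _ in range(steps) if rng.random() < sigmoid(x))
    return spikes / steps

# The time-averaged spiking activity converges on the smooth activation value,
# which is roughly what a single ANN unit's output stands in for.
for x in (-2.0, 0.0, 2.0):
    print(x, round(mean_spike_rate(x), 2), round(sigmoid(x), 2))
```

In other words, the ANN unit outputs the average of the pulses directly instead of simulating them.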
Re: (Score:1)
Sounds like a religious viewpoint.
Not religion, pure unadulterated denial.
No coincidence the ones here constantly reminding others of the depths of their idiocy are the same ones constantly defecating on all things AI. They have a god complex and can under no circumstance bring themselves to accept the fact they are not in fact special.
Re: (Score:2)
Sounds like a religious viewpoint.
Nope. I am an engineer, a scientist and an atheist. I am just pointing out how ridiculous these expectations are. This whole "exponential learning" idea is deeply religious and by that deeply stupid.
Incidentally, one thing morons like you constantly get wrong: Physicalism is also religion and not Science and hence also deeply stupid. Actual Science says nothing like your claims. It says the question is open. But like the religious fuck-ups, you just make something up and then claim it is truth. What a fail.
Re: (Score:2)
For her claim not to be true, the implication is that the brain is performing some operation which is not computable. The only thing that's definitively non-computable is random noise, which is of course technically super-Turing, but one can add that to the computation model, and indeed to real computers.
People are treating "the mind is special" as a null hypothesis merely because it has thousands of years of history. Coming to it fresh, the only reasonable null hypothesis is that the brain is a complex pie
Re: (Score:2)
For her claim to not be true, then the implication is that the brain is performing some operation which is not computable.
Well, physicalists just make "the brain does everything" and "the brain is a computer" as dogma, with zero actual evidence for either. The second one is at least somewhat plausible, although it assumes physical matter is limited in the way the current standard model (which we know to be flawed) tells us. The first one is simply an assumption with no proof. The usual claim is "What else could it be?". That approach only works in a fully understood system, because you can do an elimination argument like this o
Re: (Score:2)
Well, physicalists just make "the brain does everything" and "the brain is a computer" as dogma, with zero actual evidence for either.
I don't know what you mean by "does everything",
And you misunderstand what people mean by saying "the brain is a computer", and lack knowledge of the literature when you refer to it as "dogma". Firstly, the equivalence of computing systems was proven mathematically by Church and Turing. Secondly, the laws of physics and how they relate to computation have in fact been studied
Re: (Score:2)
Well, physicalists just make "the brain does everything" and "the brain is a computer" as dogma, with zero actual evidence for either.
I don't know what you mean by "does everything",
That is because you only pretend to be ignorant. You know perfectly well what I mean. After that intro, no value in reading the rest of your comment, you are likely just pushing your irrational dogma. Just like any other religious fundamentalist, just with a somewhat atypical religion.
Re: (Score:2)
That is because you only pretend to be ignorant. You know perfectly well what I mean
Stop being a shit head. It's your fault if you can't write clearly. I have literally no idea what you mean.
blah blah pointless wankery blah blah
Then again, you're into the whole god-of-the-gaps thing with your god being "magic physics".
You're making an extraordinary claim that (a) the brain is outside of known physics despite being macroscopic and low energy and (b) beyond standard model physics can be used to construct a hy
Re: (Score:2)
That is because you only pretend to be ignorant. You know perfectly well what I mean
Stop being a shit head. It's your fault if you can't write clearly. I have literally no idea what you mean.
My apologies. Let me be more clear: I am convinced you are lying through your teeth about not understanding what I meant. Better?
Incidentally, I think and have written so right here on /. that "god-in-the-gaps" is just as stupid as any other religious ideas. The difference between you and me is that I am smart enough to see that Physicalism is just fundamentalist religion in disguise. Unlike you, I am opposed to the religious approach in general, no matter whether it tells me something I like or not. And th
Re: (Score:2)
Let me be more clear: I am convinced you are lying through your teeth about not understanding what I meant. Better?
You already told me you're a wanker. No need to repeat yourself ad nauseam.
I do not know what you mean by "does everything", but it does appear that your only way to "win" debates is to be aggressively unclear in your wording then act like a twat when someone asks for clarification. And then of course nearly skipping over all the points that torpedo your snivelling excuse for "reasoning" becaus
Re: (Score:3)
PS: You dialed up a minor misunderstanding all the way to 11. I'm happy to trade insults, it's quite fun; also happy to return to a normal chat if you stop being a dick head.
Re: (Score:2)
What I actually said is that the question of how consciousness works is _open_
No you actually didn't.
Actually I did. That is my whole point and has been my point all along. Yet you, like any fundamentalist, see me as trying to sneak in a religion in competition with yours. Which is something very much not true, but nicely shows where you actually come from: From a fundamentalist viewpoint which only pretends not to be religion.
My claim is that physics works very well (read as: better than we can test so far)
So? It does not cover every question. Which you seem to think it does, but an actually competent scientist would know it does not.
and maths works very well too (has not been proven inconsistent).
And that is just pure, unmitigated bullshit. Mathema
Re: (Score:2)
What can I say. I have a really low tolerance for fundamentalist bullshit.
Re: (Score:2)
You have a hilarious brand of quasi-religious woo masquerading as science. You use this to paper over the huge gaps in your knowledge.
Actually I did.
Nope. Don't lie.
Mathematics does not apply to physical reality except as gross approximation which needs a lot of abstraction to apply at all.
And yet it works. Funny that.
In particular, they do not say that there are not-computable problems.
You are absolutely 100% unmitigated wrong here.
https://en.wikipedia.org/wiki/... [wikipedia.org]
https://en.wikipedia.org/wiki/... [wikipedia.org]
Thay sa
Re: (Score:2)
Right that's why you threw a tantrum when you wrote something unclear and I asked what you meant.
It's also why you are banging on a god-of-the-gaps argument with woo physics as your deity.
Re: (Score:1)
AGI is a self-conscious AI that for all intents is sentient. That’s it. It understands “me, myself, and I”.
What you are describing is the beginning stages of ASI, Artificial Super Intelligence, a la Colossus: The Forbin Project.
Median human is plenty to cause a runaway process (Score:2)
A median human that can self improve and never forget won't stay a median human for very long.
Already, ChatGPT has more raw knowledge than any human alive.
It just needs to learn to use that knowledge more intelligently.
I like how he said coworker (Score:5, Insightful)
This is a statement to those owners: "You can replace your employees with my software"
Can they? Probably not, but it's got them thinking about automation, and they're now going top-down through the enterprise automating everything they can.
Re: (Score:2)
and not employee. I don't hire my coworkers, the owner does.
This is a statement to those owners: "You can replace your employees with my software"
Can they? Probably not, but it's got them thinking about automation and they're now going top/down the enterprise automating everything they can.
I suspect they're going to try.
It's the same kind of MBA thinking that tells them "I can hire three people in India for one in the US/UK/EU" without taking into account that three people at that rate in India will just pass tickets around without actually fixing anything. They'll do it, pay themselves a nice fat bonus, and jump ship before all the customers bleed out.
I'm not against AI, even though what we have can't be called AI. A good automated phone system, even an IVR is going to be better than hiring
They try to keep the hype going a bit longer (Score:4, Insightful)
So they can rake in more money. Obviously, AGI is completely out of reach at this time. Nobody competent has the slightest clue whether or how it could be done. Obviously, the usual clueless ones think it is of course possible, and hence this lie-by-misdirection is nicely fueling their fantasies.
Re: They try to keep the hype going a bit longer (Score:1)
If it has happened in humans, then it's possible for it to happen in our technology, eventually. It's not something we're currently geared to make happen, though, because we lack the hardware and software to mimic our own consciousness, and the patience to do so without expecting to see a payday out of it.
The path to Artificial General Intelligence will likely require computable storage, exponential parallelization, a new paradigm for process interaction, cyclic input/output interaction, and modeling creation as
Re: (Score:2)
Nope. That is unproven conjecture. It comes from a quasi-religious viewpoint that is just as stupid as a proper religious one. Physicalism is religion in a somewhat unusual camouflage, nothing more. The actual scientific state-of-the art is that nobody has any clue how humans (well, some of them) generate interface behavior that looks like general intelligence and the question is completely open.
Re: (Score:2)
Don't be an idiot, we know exactly how -- neurons. Only an idiot thinks a person without neurons is able to think. None of your magic woo-woo can fix a brain injury.
Re: (Score:1)
Looking at the post you're replying to, I see "if", "it's possible", "will likely". Zimzat is not describing a religious or dogmatic belief, but only considering what might be possible. You seem to be the one with a religious viewpoint, just in the opposite direction as you accuse Zimzat of having.
Re: (Score:2)
For at least 5 years, gweihr has been spewing this superstitious nonsense that mind exists outside of the brain.
The idea is to put replacing workers (Score:2)
Re: (Score:2)
Re: (Score:2)
Indeed. But this time around it is at best going to work for simplistic, no-decision white-collar work. And I am beginning to doubt it can even do that with the required reliability.
Re: (Score:2)
I am commenting on OpenAI now pursuing AGI. Have you not looked at the story at all? I made absolutely no claim of the nature you indicate.
But as to coding assist, I am firmly of the opinion that this stuff is much _worse_ than useless. It will create systematic errors where code that has security impact will "fail functional" (i.e. fail insecure), because you can really only test functionality, not security, in that ElCheapo code production environment. Just imagine now being able to attack software written in com
Can it also replace (Score:3)
Can it also replace executives?
"How do I increase my bonus?"
"Layoff more employees."
That's not how you replace the ruling class (Score:3, Funny)
Re: (Score:2)
Just the future I was looking for... (Score:3)
Median human? (Score:2)
core values (Score:3)
Re: (Score:2)
Actually, people are quite willing to love things that are rather limited. Depending, of course, on exactly how you define it. (For any definition that will still be true, but different definitions imply different meanings. Consider, e.g., the "real doll".)
Ah yes, an employee you own. (Score:1)
But what does it mean? (Score:3)
matter of perspective? (Score:2)
many people criticize LLMs for simply mimicking the complex patterns they are exposed to over the course of their training.
on the other hand, this already puts it ahead of quite a few people...
Still can't maintain a codebase (Score:2)
I can pick out words. Where's my startup? (Score:2)
Honestly, I hope this AI bubble pops quickly. I know it won't, there's still too much cash sloshing around. But, honestly, human intelligence isn't really hard to find. It's frankly wasted.
Exactly! (Score:2)
""the equivalent of a median human that you could hire as a co-worker."
Yes, don't ask him about Religion, Politics, Personal finances, Health issues, Weight and body image, Personal relationships and breakups, Divorce, Controversial or polarizing current events, Personal insecurities, Sensitive family matters, Gossip or rumors, Salary and income, Criticizing someone's children or parenting, Age and aging, Personal beliefs and values, Mental health struggles, Traumatic experiences, Criticizing or judging som
The new values are no worse than the originals (Score:2)
It ain't like they used to say "don't be evil"