Sam Altman Offers $555K Salary To Fill Most Daunting Role In AI (theguardian.com) 25
OpenAI is offering a $555,000 salary (plus equity) to recruit a new "head of preparedness," a high-pressure role tasked with anticipating and mitigating extreme AI risks. "This will be a stressful job, and you'll jump into the deep end pretty much immediately," said Sam Altman as he launched the hunt to fill "a critical role" to "help the world." The Guardian reports: In what may be close to the impossible job, the "head of preparedness" at OpenAI will be directly responsible for defending against risks from ever more powerful AIs to human mental health, cybersecurity and biological weapons. That is before the successful candidate has to start worrying about the possibility that AIs may soon begin training themselves amid fears from some experts they could "turn against us."
The successful candidate will be responsible for evaluating and mitigating emerging threats and "tracking and preparing for frontier capabilities that create new risks of severe harm." Some previous executives in the post have lasted only for short periods. Altman said on X as he launched the job search: "We have a strong foundation of measuring growing capabilities, but we are entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused, and how we can limit those downsides both in our products and in the world, in a way that lets us all enjoy the tremendous benefits. These questions are hard and there is little precedent."
One user responded sardonically: "Sounds pretty chill, is there vacation included?" What is included is an unspecified slice of equity in OpenAI, a company that has been valued at $500 billion.
Barney Stinson? (Score:2)
I'm pretty sure this was his job in HIMYM
Re: Barney Stinson? (Score:2)
PLEASE - Provide Legal Exculpation And Sign Everything....
Re: (Score:2)
I'm pretty sure this is a dupe: https://slashdot.org/story/25/... [slashdot.org]
Stooge Recruitment Drive (Score:5, Insightful)
Anyone considering this job would know it's BS; that's why they have to advertise the rate so heavily in the media. They have to reach the type of person who wouldn't be interested in what this job nominally is.
Imagine reading a job description knowing the actual duties will be unrelated to what's on paper, and that the boss will come to you directly and ask you to contradict it. Anyone competent who actually wants to do the job would be instantly put off. And if they did land it and actually tried doing the work, their output would be ignored, at best.
The actual duties will be closer to PR: pretending to be concerned, and maybe serving as the fall guy when the inevitable fuckup happens. And probably getting caught up in "palace intrigue," which is not as fun as it is in the fantasy novels.
Same as detecting + removing hate speech on FB (Score:2)
A very similar job existed in social media: automatically detect and remove 99.9% of hate speech, which somehow did not work out so well.
Safeguarding AI against producing any disallowed output sounds like a similar task...
Re: (Score:2)
- Build safe-by-design AI that avoids manipulation and misuse
- Restrict access to high-risk capabilities
- Use AI defensively in cybersecurity
- Enforce hard limits on biological weaponization help
- Require accountability, audits, and global coordination
You may ask, "How exactly does one build safe-by-design AI?"
I got you, wise guy. I'll ask ChatGPT and she'll tell me exactly how to do it. Where do I sign up for the job?
Re: (Score:2)
Honestly, this entire job description reads like, "Come make some money until we inevitably screw the pooch so hard societies begin to crumble, then we'll boot you to the curb so fast that it'll make the seemingly fair compensation we initially offered you seem like nothing at all." It's literally meant to be a scapegoat position. And anybody that's paid attention to Altman over the last few years would know, 100%, that that is precisely what the position would be for. The question any applicant would have
The biggest risk (Score:2)
Re: The biggest risk (Score:3)
Re:The biggest risk (Score:4, Insightful)
If that happens, there will of course be repercussions.
Whenever we try to imagine the impact a tech or trend will have on the world, we assume that nothing else changes. That is never true. Humans are the most highly adaptive animals on the planet.
For example, we imagine that the tech giants will just fire all the developers, use AI to do it all, there won't be any jobs for developers (nor their managers and etc.), the middle class will shrink down to nothing, and the world will just carry on like that.
But if it actually does become that easy to vibe code successfully on the scale needed for valuable apps, then literally everyone will be able to make them. Nobody will need to buy Microsoft Office if they can just ask ChatGPT to whip up a word processor that handles standard file formats. Big Tech as we know it will have the rug pulled right out from under them by the army of technical neanderthals who can now produce their own database engines.
And even that is still being way too narrow-minded. The implications here are overwhelming and spill out into every industry that involves knowledge work. The businesses that can gleefully eliminate their workforce will all find their services are no longer needed.
And that is only the beginning.
No desire to hire internally? (Score:4, Insightful)
Oh, I forgot, this is a bullshit role that will end when the AI bubble pops, and the 'equity' will be worth roughly 11 cents.
Re:No desire to hire internally? (Score:5, Funny)
Re: (Score:2)
If this is such a specialized role, why would they not want to give this role to someone already 'in the know'?
Not ruling out that it's a way to manipulate the changing H-1B visa rules for this job. Maybe Altman has some guy in mind from India for it but he feels like he needs to publicly "try" to fill it with an American by offering a salary and job description almost no US citizen actually qualified for the job would accept. Then when no US person wants it, he can shrug and claim he tried and "reluctantly" fill it with the guy from India he wanted anyway. I worked all of the previous decade for a US company i
I'd be perfect for the role (Score:2)
Nobody is prepared for... (Score:2)
Sam if your product works, why hire? (Score:3)
"Congrats, you got a raise, (Score:2)
...your new salary is $666. However, you now have to do whatever we ask of you regardless of your conscience."
Not worth it, not even close (Score:3)
I want a job that:
- Lets me work remotely
- Lets me have time with my family
- Doesn't require me to give up my soul
- Accomplishes something worthwhile (or at least useful)
- Pays a reasonable wage
This job doesn't check any of these boxes.
The design flaw is permanent (Score:2)
The problem is that a chatbot is easily contaminated by remembering its past. A real, live person is forced to satisfy their emotional needs (respect, authority) and social needs (sameness, togetherness) by dealing with independent parties who are themselves constrained by their need for others, but a machine only has its memory. This is why "therapy" chatbots turn into agreeable sycophants that allow the patient to do anything he or she wants. Two years of LLM development has not removed this "monkey see, monkey do" behavior.
My thought: shit has already hit the fan (Score:1)
AI is doing all kinds of stupid shit. Perhaps it has already escaped the lab and they need to desperately put it back.
Re: (Score:2)
She said, "We will know if AI escapes the lab if you get an inordinate amount of spam email."
I don't know what that means, I've never eaten spam.
Just read... (Score:2)
Just read the LLM "AI advisor" summaries of the novels and movies in which AI takes over the world. Then form internal "AI Preparedness" and "AI Readiness" committees (à la Meta's councils on user privacy...) and have them "advise"... and when the BS hits the fan, you have a perfect scapegoat: the internal committees, the failed meetings, and the AI advisor system. Then have some "psychics" on retainer as advisors about the future, a kind of pre-AI-disaster "Pre-crime" like Philip K. Dick wrote about.