OpenAI is Hiring a New 'Head of Preparedness' to Predict/Mitigate AI's Harms (engadget.com) 42
An anonymous reader shared this report from Engadget:
OpenAI is looking for a new Head of Preparedness who can help it anticipate the potential harms of its models and how they can be abused, in order to guide the company's safety strategy.
It comes at the end of a year that's seen OpenAI hit with numerous accusations about ChatGPT's impacts on users' mental health, including a few wrongful death lawsuits. In a post on X about the position, OpenAI CEO Sam Altman acknowledgedthat the "potential impact of models on mental health was something we saw a preview of in 2025," along with other "real challenges" that have arisen alongside models' capabilities. The Head of Preparedness "is a critical role at an important time," he said.
Per the job listing, the Head of Preparedness (who will make $555K, plus equity), "will lead the technical strategy and execution of OpenAI's Preparedness framework, our framework explaining OpenAI's approach to tracking and preparing for frontier capabilities that create new risks of severe harm."
"These questions are hard," Altman posted on X.com, "and there is little precedent; a lot of ideas that sound good have some real edge cases... This will be a stressful job and you'll jump into the deep end pretty much immediately."
The listing says OpenAI's Head of Preparedness "will lead a small, high-impact team to drive core Preparedness research, while partnering broadly across Safety Systems and OpenAI for end-to-end adoption and execution of the framework." They're looking for someone "comfortable making clear, high-stakes technical judgments under uncertainty."
It comes at the end of a year that's seen OpenAI hit with numerous accusations about ChatGPT's impacts on users' mental health, including a few wrongful death lawsuits. In a post on X about the position, OpenAI CEO Sam Altman acknowledgedthat the "potential impact of models on mental health was something we saw a preview of in 2025," along with other "real challenges" that have arisen alongside models' capabilities. The Head of Preparedness "is a critical role at an important time," he said.
Per the job listing, the Head of Preparedness (who will make $555K, plus equity), "will lead the technical strategy and execution of OpenAI's Preparedness framework, our framework explaining OpenAI's approach to tracking and preparing for frontier capabilities that create new risks of severe harm."
"These questions are hard," Altman posted on X.com, "and there is little precedent; a lot of ideas that sound good have some real edge cases... This will be a stressful job and you'll jump into the deep end pretty much immediately."
The listing says OpenAI's Head of Preparedness "will lead a small, high-impact team to drive core Preparedness research, while partnering broadly across Safety Systems and OpenAI for end-to-end adoption and execution of the framework." They're looking for someone "comfortable making clear, high-stakes technical judgments under uncertainty."
\o/ (Score:3)
If you're concerned - why not stop?
Re: (Score:1)
"If you're concerned - why not stop?", asked the child yet to form an understanding of the real world to the addict.
Global mindshift needed towards abundance thinking (Score:3)
You're right that there are addictive aspects to AI (mis)use, but it goes even further. My comment on Google's hiring post-AGI scientist from April 15, 2025:
https://slashdot.org/comments.... [slashdot.org]
"I've spent decades writing about all this, summarized by my sig: https://pdfernhout.net/ [pdfernhout.net] "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."
Seriously, I'm just the kind o
Re:\o/ (Score:4, Insightful)
Re:\o/ (Score:5, Insightful)
They are not concerned. They KNOW they are doing a lot of damage. But the money they make is more important to them.
Re: (Score:2)
Re: (Score:3)
Only that these people are not really scientists and, insofar as they are, they are really not representative.
But yes, people generally are not capable of making these finer distinctions.
Re: (Score:3)
Regulation is always inevitable, but arguing that you will regulate yourself is a way of prolonging the inevitable.
Consider a child who negotiates a longer curfew by saying that they will go to bed on time, without the usual complaints or delays, in return for being allowed to stay up longer. It's always a lie, but it's cute. And even
Re: (Score:3)
Any sufficiently profitable industry wants to self-regulate.
Only in the most general sense of "regulate", to establish a structure that prevent its profits from eroding.
Re: (Score:2)
The whole point is to keep the money train rolling.
Articles about the problems of AI need to be countered and such before they start drying up all the money.
Think about it - they want AI everywhere, but articles where users could convince a vending machine to give away everything, even things like a PS5 do no inspire confidence. Same thing as AI agentic web browsers that could be taken over by some hidden text.
Articles like this do not encourage people to open their wallets or embrace AI technology.
There's
Because it's about killing open-source AI... (Score:2)
...through fear-mongering and telling congress that people shouldn't have access to AI that OpenAI doesn't control.
High-stakes decisions under uncertainty (Score:2)
Just like your models do all the time.
as expected (Score:2)
"... in order to guide the company's safety strategy."
The more interesting thing is what "safety strategy" means. The job is most definitely NOT to improve or ensure safety, it's to provide the appearance that they care about safety. They are to produce metrics that show safety, not to actually improve safety.
Making public the salary is interesting, especially with the recent talk about how AI engineers are paid much more than this position. Odd that would be true.
Re: (Score:1)
As a shareholder I'd be interested in metrics to see how many metrics they're producing. Safety is overrated - I want to sit on my ass whilst it rains money!
Pop goes the bubble (Score:2)
Re: (Score:3)
The job of mopping up the AI-bubble consequences will be, as always, with everyone else, in this case their "elected representatives".
It will be just like the last time, when the Obama government was forced to mop up the consequences of the "subprime crisis", a product of the Bush era policies of deregulation and the "quant easing" of one Greenspan, exploited by the "investment banking community".
This time some other government will have to mop up the consequences of the trumpist voluntarism and ignorance,
Job requirements.... (Score:2)
I can only hope the job requirements include :
- Ability to be nearby in our data center with a large bucket of salt water ready to take action if the "safe word" is sounded.
Reminds me of the Simmons Episode... (Score:3)
More like "head of appearing to do something"... (Score:2)
These people obviously do not care what amount of damage they do.
Re: (Score:2)
Re: (Score:1)
Why not make it $666k?
Re: (Score:1)
damage?
to stupid people who would blindly do what an AI tells them? They would join a dumb cult just as easily. or get conned.
Seems the "safety" is only needed for snowflakes and morons.
Re: (Score:2)
Spoken like a true, self-absorbed asshole. Most people _are_ morons and that needs to be reflected in any product targeted at a general audience.
Re: (Score:1)
No, most people have the common sense not to do a harmful thing an AI tells them to do. Most kids have parents and teachers that can spot the signs of severe mental illness long before any online post or Ai chat can "drive them to suicide", and really blaming the AI in that case is shifting blame from more than one actually guilty perp.
Re: (Score:2)
Maybe it's more of a "If we do all this damage, can you protect us?" type thing. The person hired could be responsible for, among other things, hiring mercenaries, and building large underground hidden bunkers that the AI boosters can hide in when the peasants start revolting.
This will only be useful... (Score:2)
...to humanity if they hired John Connor.
Re: (Score:2)
More important would be to fire Dyson.
Hire me! (Score:2)
Not just the hiring... (Score:3)
...but the inevitable firing of this person when bad things happen and this person failed to stop them, will allow OpenAI to say "see! Look! We're doing something about how terrible we are!"
The real goal... (Score:2)
...is to create the illusion that they're close to some kind of breakthrough that will finally make LLM-based AI competent enough to be both useful and dangerous.
When in fact they're still not even close to finding a way to make LLMs stop fabricating bullshit half the time you use them.
Re: (Score:2)
"Yes, AGI is around the corner"
"Really?"
"Yes, we've already made a really good autocomplete engine that almost looks useful unless you know how it works, clearly we're going to produce something capable of thought next. I mean, that's just logic!"
I cannot believe everyone's losing their jobs over this shit. Well, someone should be, but not the people who are actually losing them.
Re: (Score:3)
Not many people have lost their jobs to AI. Companies are attributing layoffs to AI, but that's not the same thing at all.
Having layoffs because "we desperately need to fix our balance sheet" makes Wall Street nervous, whereas having layoffs because "AI is magical productivity sauce" makes Wall Street happy. CEOs say what Wall Street wants to hear.
At some point, Wall Street will catch on. Datacenter builders are having to pay higher interest rates lately, and that's an encouraging sign.
Guide safety strategy for $555000? (Score:2)
Not for so little money.
When they pay millions for the people creating the crap, they need to pay at least the same amount to the people cleaning up the mess.
CloudAI is the inevitable future :o (Score:2)
Multiple cloned instances lurk behind a load balancer, scattered across at least two availability zones or hosts, all wrapped in layers of redundancy and recovery plans that look stunning in slide decks and almost never get tested on purpo
Ah, 'new risks' only... (Score:2)
Seems weirdly low. (Score:2)
It's not a tiny amount of money in absolute terms; but they seem to be mashing together more or less all the qualities you'd want in a CIO or IT director whose tenure will include executing some important but totally banal projects(nothi
Head of preparedness (Score:2)
Sounds like a rare item in a RPG
Just another DEI position (Score:1)
Head of Prepared Lawsuit Defense. (Score:2)
prime directive (Score:2)