OpenAI is Hiring a New 'Head of Preparedness' to Predict/Mitigate AI's Harms (engadget.com)

An anonymous reader shared this report from Engadget: OpenAI is looking for a new Head of Preparedness who can help it anticipate the potential harms of its models and how they can be abused, in order to guide the company's safety strategy.

It comes at the end of a year that's seen OpenAI hit with numerous accusations about ChatGPT's impacts on users' mental health, including a few wrongful death lawsuits. In a post on X about the position, OpenAI CEO Sam Altman acknowledged that the "potential impact of models on mental health was something we saw a preview of in 2025," along with other "real challenges" that have arisen alongside models' capabilities. The Head of Preparedness "is a critical role at an important time," he said.

Per the job listing, the Head of Preparedness (who will make $555K, plus equity), "will lead the technical strategy and execution of OpenAI's Preparedness framework, our framework explaining OpenAI's approach to tracking and preparing for frontier capabilities that create new risks of severe harm."

"These questions are hard," Altman posted on X.com, "and there is little precedent; a lot of ideas that sound good have some real edge cases... This will be a stressful job and you'll jump into the deep end pretty much immediately."

The listing says OpenAI's Head of Preparedness "will lead a small, high-impact team to drive core Preparedness research, while partnering broadly across Safety Systems and OpenAI for end-to-end adoption and execution of the framework." They're looking for someone "comfortable making clear, high-stakes technical judgments under uncertainty."

Comments:
  • by easyTree ( 1042254 ) on Saturday December 27, 2025 @08:41PM (#65885565)

    If you're concerned - why not stop?

    • by Anonymous Coward

      "If you're concerned - why not stop?", asked the child yet to form an understanding of the real world to the addict.

    • Re:\o/ (Score:4, Insightful)

      by phantomfive ( 622387 ) on Saturday December 27, 2025 @09:46PM (#65885637) Journal
      Read the summary at least. They aren't concerned, they want someone to help them avoid lawsuits.
    • Re:\o/ (Score:5, Insightful)

      by gweihir ( 88907 ) on Saturday December 27, 2025 @09:53PM (#65885649)

      They are not concerned. They KNOW they are doing a lot of damage. But the money they make is more important to them.

      • And people wonder why the peasants want to burn mad scientists and their castles.
        • by gweihir ( 88907 )

          Only that these people are not really scientists and, insofar as they are, they are really not representative.
          But yes, people generally are not capable of making these finer distinctions.

    • Any sufficiently profitable industry wants to self-regulate. The alternative is to be regulated, which limits profits and can even lead to having to pay for damages caused.

      Regulation is always inevitable, but arguing that you will regulate yourself is a way of delaying the inevitable.

      Consider a child who negotiates a longer curfew by saying that they will go to bed on time, without the usual complaints or delays, in return for being allowed to stay up longer. It's always a lie, but it's cute. And even

      • Any sufficiently profitable industry wants to self-regulate.

        Only in the most general sense of "regulate": to establish a structure that prevents its profits from eroding.

    • by tlhIngan ( 30335 )

      The whole point is to keep the money train rolling.

      Articles about the problems of AI need to be countered before they start drying up all the money.

      Think about it - they want AI everywhere, but articles where users could convince a vending machine to give away everything, even things like a PS5, do not inspire confidence. Same goes for AI agentic web browsers that could be taken over by some hidden text.

      Articles like this do not encourage people to open their wallets or embrace AI technology.

      There's

    • ...through fear-mongering and telling Congress that people shouldn't have access to AI that OpenAI doesn't control.

  • Just like your models do all the time.

  • "... in order to guide the company's safety strategy."

    The more interesting thing is what "safety strategy" means. The job is most definitely NOT to improve or ensure safety; it's to provide the appearance that they care about safety. They are to produce metrics that show safety, not to actually improve safety.

    Making the salary public is interesting, especially with the recent talk about how AI engineers are paid much more than this position. Odd, if true.

    • As a shareholder I'd be interested in metrics to see how many metrics they're producing. Safety is overrated - I want to sit on my ass whilst it rains money!

  • They should hire a head of preparedness for the post-AI-bubble pop. It is not that Nvidia or OpenAI are worthless companies offering worthless technologies, but they will have to go through bankruptcies to discard all the debt. Having someone start preparing for post-bankruptcy would be highly beneficial even to current investors, who might get an extra penny on the dollar in the settlements.
    • The job of mopping up the AI-bubble consequences will fall, as always, to everyone else, in this case their "elected representatives".

      It will be just like the last time, when the Obama government was forced to mop up the consequences of the "subprime crisis", a product of the Bush-era policies of deregulation and the "quantitative easing" of one Greenspan, exploited by the "investment banking community".

      This time some other government will have to mop up the consequences of the trumpist voluntarism and ignorance,

  • I can only hope the job requirements include:

    - Ability to be nearby in our data center with a large bucket of salt water ready to take action if the "safe word" is sounded.

  • by JoeyRox ( 2711699 ) on Saturday December 27, 2025 @09:30PM (#65885623)
    After a hurricane devastates Springfield, the church sign reads "God Welcomes His Victims".
  • These people obviously do not care what amount of damage they do.

    • $555k? That salary sounds, ermmm.... artificial.
    • damage?

      to stupid people who would blindly do what an AI tells them? They would join a dumb cult just as easily. Or get conned.

      Seems the "safety" is only needed for snowflakes and morons.

      • by gweihir ( 88907 )

        Spoken like a true, self-absorbed asshole. Most people _are_ morons and that needs to be reflected in any product targeted at a general audience.

          • No, most people have the common sense not to do a harmful thing an AI tells them to do. Most kids have parents and teachers who can spot the signs of severe mental illness long before any online post or AI chat can "drive them to suicide", and really, blaming the AI in that case is shifting blame away from more than one actually guilty perp.

    • Maybe it's more of a "If we do all this damage, can you protect us?" type thing. The person hired could be responsible for, among other things, hiring mercenaries, and building large underground hidden bunkers that the AI boosters can hide in when the peasants start revolting.

  • ...to humanity if they hired John Connor.

  • I'll ensure at least a dozen John Connors are born and trained from childhood in the art of leading an anti-machine war effort, as a backup.
  • by rknop ( 240417 ) on Saturday December 27, 2025 @10:47PM (#65885703) Homepage

    ...but the inevitable firing of this person, when bad things happen that they failed to stop, will allow OpenAI to say "see! Look! We're doing something about how terrible we are!"

  • ...is to create the illusion that they're close to some kind of breakthrough that will finally make LLM-based AI competent enough to be both useful and dangerous.

    When in fact they're still not even close to finding a way to make LLMs stop fabricating bullshit half the time you use them.

    • "Yes, AGI is around the corner"
      "Really?"
      "Yes, we've already made a really good autocomplete engine that almost looks useful unless you know how it works, clearly we're going to produce something capable of thought next. I mean, that's just logic!"

      I cannot believe everyone's losing their jobs over this shit. Well, someone should be, but not the people who are actually losing them.

      • Not many people have lost their jobs to AI. Companies are attributing layoffs to AI, but that's not the same thing at all.

        Having layoffs because "we desperately need to fix our balance sheet" makes Wall Street nervous, whereas having layoffs because "AI is magical productivity sauce" makes Wall Street happy. CEOs say what Wall Street wants to hear.

        At some point, Wall Street will catch on. Datacenter builders are having to pay higher interest rates lately, and that's an encouraging sign.

  • Not for so little money.

    When they pay millions for the people creating the crap, they need to pay at least the same amount to the people cleaning up the mess.

  • CloudAI is the inevitable future of enterprise computing, in which your entire IT fiefdom hangs cheerfully off a single strategic VM image running on some distant, humming cluster you'll never see, in a data center you will never visit and cannot pronounce.

    Multiple cloned instances lurk behind a load balancer, scattered across at least two availability zones or hosts, all wrapped in layers of redundancy and recovery plans that look stunning in slide decks and almost never get tested on purpo
  • It sure is good that the job description is for 'new risks' only, or someone might have an awkward onboarding after they lay out the 'a sociopathic carnival barker has stampeded the herd into an increasingly dire-looking bubble' problem.
  • I suspect that this is a more transparent communication of the actual priorities than anything else in the job description; but $550k seems quite low for the scope of the role and the amount of 'deep technical expertise' and managerial viability they want in various areas.

    It's not a tiny amount of money in absolute terms; but they seem to be mashing together more or less all the qualities you'd want in a CIO or IT director whose tenure will include executing some important but totally banal projects(nothi
  • Sounds like a rare item in an RPG.

  • Preventing the company from profiting off the customer's misfortune seems kinda "woke" to me.
  • At least call this what it is. The HPLD department's goal will be to stand preemptively ready to instantly disprove all predicted lawsuits.
  • ...protect Sam Altman's net worth.
