OpenAI's Long-Term AI Risk Team Has Disbanded (wired.com)

An anonymous reader shares a report: In July last year, OpenAI announced the formation of a new research team that would prepare for the advent of supersmart artificial intelligence capable of outwitting and overpowering its creators. Ilya Sutskever, OpenAI's chief scientist and one of the company's cofounders, was named as the colead of this new team. OpenAI said the team would receive 20 percent of its computing power. Now OpenAI's "superalignment team" is no more, the company confirms. That comes after the departures of several researchers involved, Tuesday's news that Sutskever was leaving the company, and the resignation of the team's other colead. The group's work will be absorbed into OpenAI's other research efforts.

Sutskever's departure made headlines because although he'd helped CEO Sam Altman start OpenAI in 2015 and set the direction of the research that led to ChatGPT, he was also one of the four board members who fired Altman in November. Altman was restored as CEO five chaotic days later after a mass revolt by OpenAI staff and the brokering of a deal in which Sutskever and two other company directors left the board. Hours after Sutskever's departure was announced on Tuesday, Jan Leike, the former DeepMind researcher who was the superalignment team's other colead, posted on X that he had resigned.

  • by sarren1901 ( 5415506 ) on Friday May 17, 2024 @11:34AM (#64479143)

    Who cares about next year or next decade when we got quarterly profits to be had? I'm more surprised they even had a long term risk assessment team in the first place. Probably one person who also had a real job besides being the risk assessment team.

  • We should see their last reports.

    Somebody leak it.

    It's a nonzero threat of human extinction so that needs to be seen.

    I'd say 0.5% but that's too high to be swept under the rug.

    • by narcc ( 412956 )

      It's a nonzero threat of human extinction

      I never figured you for a singularity nut. You can't possibly believe that nonsense.

      I'd say 0.5% but that's too high to be swept under the rug.

      You're a lot closer than most nutters, but the real number still stands at a solid 0.0% +/- 0%

      We should see their last reports. Somebody leak it.

      "Okay, calling our models super dangerous isn't working anymore. We need a new marketing strategy. Rightsize the navel gazing team and transition them to janitorial."

  • It didn't take an AI to predict this. I think that, secretly, business reporters just use a series of story templates based on how many years the company has been in operation and how much money they've made so far. "You've been in operation five years, with X million in revenue? Oh, you must be disbanding your ethical oversight board about now."
  • Clearly, they've been reading Slashdot and know true AI is far off.

    At the same time, their AI is so smart, it just pretends to hallucinate to keep people off its tail.

    AI: 1
    Managers: 0

    Wake up, sheeple, Skynet is coming!

    • Clearly, they've been reading Slashdot and know true AI is far off.

      Yes, true AI is far off. The optimist in me believes that the OpenAI team realizes their system isn't going to be sentient anytime soon, so dedicating a team to prevent an impossible reality is just a waste of resources.

      But the pessimist in me is still anxious about another matter. AI is still a threat, but not the one most suspect. The real problem we face is humanity entrusting AI to make life-altering decisions without any human review.

      • .... it's people using AI wrongly.

        (Just finishing your post title, and I fully agree. "Holding it wrong" has terrible consequences.)

  • by Ksevio ( 865461 ) on Friday May 17, 2024 @11:58AM (#64479219) Homepage

    Turns out that their jobs could be replaced by ChatGPT so management decided to cut costs.

    On the plus side, the most recent reports show that there's absolutely no risk from AI escaping and enslaving humanity!

  • [disbanded] research team that would prepare for the advent of supersmart artificial intelligence capable of outwitting and overpowering its creators.

    They figured they were SOL, so instead bulk-ordered tombstones for each team-member that say:

    Here lies an actual human. Sorry the bots won. We couldn't put GenieGPT back in the bottle, but are truly sorry for letting it out.

    The COBOL team did something like that. [computerhistory.org] Turns out COBOL became a zombie and runs the back-end of civilization.

    Actually COBOL survived b

    • by narcc ( 412956 )

      COBOL survives because no one has found anything better that can replace it. We've seen countless COBOL-to-Java transition projects fail for a reason. It's really hard to beat COBOL on a mainframe.

      Still, you just know there's some kid out there vibrating with anticipation, eager to waste a few hundred million failing to replace some legacy system with some python/nosql/k8s monstrosity.

  • OpenAI's PR & marketing department over-hyped ChatGPT & formed a department to mitigate it running amok & taking over the world because it's so POWERFUL! Now that everyone's been using it for a while & seen what it really is... The hype machine has to try something else to over-inflate the majority of people's perceptions & expectations.

    I bet it worked wonders for share prices, attracting investors, customers, etc. So yeah, I bet it did a pretty good job.
  • Wait until AI security researchers start dying in Autonomous Vehicle "accidents"

  • ... because risk would imply that any of these hucksters' bullshit prognostications were coming to fruition. There's presently no long-term risk because none of the technology is even close to being that powerful.
  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Friday May 17, 2024 @04:59PM (#64480121) Homepage Journal

    That's a funny way to spell eliminated.

  • it's the American way, baby!
