OpenAI's Long-Term AI Risk Team Has Disbanded (wired.com)
An anonymous reader shares a report: In July last year, OpenAI announced the formation of a new research team that would prepare for the advent of supersmart artificial intelligence capable of outwitting and overpowering its creators. Ilya Sutskever, OpenAI's chief scientist and one of the company's cofounders, was named as the colead of this new team. OpenAI said the team would receive 20 percent of its computing power. Now OpenAI's "superalignment team" is no more, the company confirms. That comes after the departures of several researchers involved, Tuesday's news that Sutskever was leaving the company, and the resignation of the team's other colead. The group's work will be absorbed into OpenAI's other research efforts.
Sutskever's departure made headlines because although he'd helped CEO Sam Altman start OpenAI in 2015 and set the direction of the research that led to ChatGPT, he was also one of the four board members who fired Altman in November. Altman was restored as CEO five chaotic days later after a mass revolt by OpenAI staff and the brokering of a deal in which Sutskever and two other company directors left the board. Hours after Sutskever's departure was announced on Tuesday, Jan Leike, the former DeepMind researcher who was the superalignment team's other colead, posted on X that he had resigned.
Well that makes sense (Score:5, Insightful)
Who cares about next year or next decade when we've got quarterly profits to be had? I'm more surprised they even had a long-term risk assessment team in the first place. Probably one person who also had a real job besides being the risk assessment team.
Inconvenient (Score:2)
We should see their last reports.
Somebody leak it.
It's a nonzero threat of human extinction so that needs to be seen.
I'd say 0.5% but that's too high to be swept under the rug.
Re: (Score:2)
It's a nonzero threat of human extinction
I never figured you for a singularity nut. You can't possibly believe that nonsense.
I'd say 0.5% but that's too high to be swept under the rug.
You're a lot closer than most nutters, but the real number still stands at a solid 0.0% +/- 0%
We should see their last reports. Somebody leak it.
"Okay, calling our models super dangerous isn't working anymore. We need a new marketing strategy. Rightsize the navel gazing team and transition them to janitorial."
News? More like, uh... olds. (Score:2)
not necessary for a long while, or so they believe (Score:2, Funny)
At the same time, their AI is so smart, it just pretends to hallucinate to keep people off its tail.
AI: 1
Managers: 0
Wake up, sheeple, Skynet is coming!
The real threat isn't AI... (Score:2)
Clearly, they've been reading Slashdot and know true AI is far off.
Yes, true AI is far off. The optimist in me believes that the OpenAI team realizes their system isn't going to be sentient anytime soon, so dedicating a team to prevent an impossible reality is just a waste of resources.
But the pessimist in me is still anxious about another matter. AI is still a threat, but not the one most suspect. The real problem we face is humanity entrusting AI to make life-altering decisions without any human review.
Re: (Score:2)
(Just finishing your post title, and I fully agree. "Holding it wrong" has terrible consequences.)
AI Took Their Jobs (Score:4, Funny)
Turns out that their jobs could be replaced by ChatGPT, so management decided to cut costs.
On the plus side, the most recent reports show that there's absolutely no risk from AI escaping and enslaving humanity!
In joking, You Nailed It! (Score:2)
Tombstones (Score:2)
They figured they were SOL, so instead bulk-ordered tombstones for each team-member that say:
The COBOL team did something like that. [computerhistory.org] Turns out COBOL became a zombie and runs the back-end of civilization.
Actually COBOL survived b
Re: (Score:2)
COBOL survives because no one has found anything better that can replace it. We've seen countless COBOL-to-Java transition projects fail for a reason. It's really hard to beat COBOL on a mainframe.
Still, you just know there's some kid out there vibrating with anticipation, eager to waste a few hundred million failing to replace some legacy system with some python/nosql/k8s monstrosity.
It's served its purpose (Score:2)
I bet it worked wonders for share prices, attracting investors, customers, etc. So yeah, I bet it did a pretty good job.
Re: (Score:2)
AI's 'Her' Era Has Arrived - This has been stuck in the Firehose for a while now; it's a big release, and it's embarrassing that it's still not on Slashdot. Vote it up.
Just the beginning, (Score:2)
Wait until AI security researchers start dying in Autonomous Vehicle "accidents"
That's because there is none (Score:2)
disbanded? (Score:3)
That's a funny way to spell eliminated.
short-term quarterly profits (Score:2)