OpenAI's Head of Robotics Resigns, Says Pentagon Deal Was 'Rushed Without the Guardrails Defined' (engadget.com) 56
In a tweet that's been viewed 1.3 million times in the last six hours, OpenAI's head of robotics announced their resignation. They said they "care deeply about the Robotics team and the work we built together," so this "wasn't an easy call," but offered this reason for resigning:
AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.
This was about principle, not people. I have deep respect for Sam and the team, and I'm proud of what we built together.
"To be clear, my issue is that the announcement was rushed without the guardrails defined," explains a later tweet. "It's a governance concern first and foremost. These are too important for deals or announcements to be rushed." And when asked how many OpenAI employees had left after OpenAI signed their new Pentagon deal, the roboticist said... "I can't share any internal details."
The roboticist previously worked at Meta before leaving to join OpenAI in late 2024, reports Engadget: OpenAI confirmed Kalinowski's resignation and said in a statement to Engadget that the company understands people have "strong views" about these issues and will continue to engage in discussions with relevant parties. The company also explained in the statement that it doesn't support the issues that Kalinowski brought up. "We believe our agreement with the Pentagon creates a workable path for responsible national security uses of AI while making clear our red lines: no domestic surveillance and no autonomous weapons," the OpenAI statement read.
Re: Pronouns (Score:2)
Another waste of FP branch by AC (Score:2)
The AC's pronoun is "it", so it was just projecting. One theory is that all ACs are eunuchs. That would explain most ACs on Slashdot.
And feeding them is one of those tricks that never works.
Re: (Score:1)
In my estimation only nutjobs who are in favor of those things.
Re: (Score:2)
Eh? The use of plural "they" is kind of weird, since Caitlin is very much a singular woman, and identifies as such.
She has long been a political activist, focusing on tribal identity politics, being a woman and a lesbian mattering more than being a person or whatever the job was. So OpenAI can hardly act surprised when a scorpion bites them.
Now they have to replace that guy... (Score:2, Insightful)
Re:Now they have to replace that guy... (Score:5, Insightful)
Perhaps.
But at what point does someone have to decide they can't fix the problem, that they're now facilitating it, that they morally just can't keep doing it? At least their departure serves as a clarion call, and perhaps denying the company their skills puts a dent in unfettered progress (assuming they aren't easily replaceable anyway).
There does come a point when you just have to take a stand and nope out.
Re: (Score:2)
Your sig is ironic after that insightful post. +1 funny from me.
Re: (Score:1)
I guess if you think the virtue signaling is worth more than being someone who can influence the outcome from inside, then sure....good on ya.
I don't think this will influence the outcome at all.
Re:Now they have to replace that guy... (Score:4, Insightful)
What if you can't influence the outcome from the inside?
Re: (Score:2)
Re:Now they have to replace that guy... (Score:4, Insightful)
It prevented World War III starting at least 8 years earlier. You should say thank you.
Re:Now they have to replace that guy... (Score:4, Informative)
Really well, it crippled the damage he attempted to do in the first term. It is widely noted that some of Trump's major setbacks were due to competent people saying "no", and incompetent people not sure how to achieve "yes".
Re:Now they have to replace that guy... (Score:5, Interesting)
What is this comment supposed to even mean? Do you think people should work for companies and contribute their skills to something they find repugnant and evil?
Supposedly the professions with the highest number of psychopaths are CEOs and police officers. I would add that the forum with the highest number of psychopath posters is Slashdot. Every. Fucking. Time. Someone. Stands. On. Principle someone here tries to minimize it.
Re: (Score:2)
Re: (Score:2)
If you're in the fray you can either try to influence the outcome or quit.
If your principles dictate that you're at the entrance of a bridge too far, then just say no. There's plenty of work out there for people who know what they're doing. The pearl clutching seems a bit much.
Re: Now they have to replace that guy... (Score:2)
Because very few Americans seem to have principles, and that's the biggest group represented here. In a recent poll, most Americans said they feel their fellow Americans are morally bad. That's very telling. In Canada only one in ten feel fellow Canadians are morally bad. So if you're American, you're probably literally holding yourself back by acting morally, because someone else is just going to step on you.
Re: (Score:3)
Re: (Score:2)
Phear the T-1000 (Score:2)
Re: (Score:3)
You can't learn lessons from causality paradoxes, my friend.
Re: (Score:2)
Paradoxes can be paradoctored. According to Heinlein, anyway.
no shit (Score:3, Interesting)
Sam Altman is so fucking dumb. The most important resource he has is his employees and he didn't think a good chunk of them would have ethical objections to building killbots and spying on the entire US population? It's not as if there are no jobs available for those people.
I know that greed is in Sam's blood and he got to where he is mainly by greed but come on man, have some common sense.
Re: no shit (Score:2)
Arguably greed and an ability to persuade others are more important than anything - even the product.
Re: (Score:2)
The most important resource is that everyone believes OpenAI is 'the' thing. People still seem to use 'ChatGPT' as the default name for AI, even though it's arguably the least useful of the major LLMs now.
Of course, the bad press of swooping in to take a relative pittance of government money after it was made very public that Anthropic was on the outs for trying to take something that looked like a principle stand is more damaging than anything.
Seems like a very bad sign. (Score:5, Insightful)
Re: Seems like a very bad sign. (Score:2)
The wars out front should have told you that much.
Re: (Score:2)
Re: (Score:2)
but don't seem notably different in their brutality
The wars are just beginning. You haven't seen anything yet.
Re: (Score:2)
I have absolutely zero reason to suspect that the goals are anything less than "worse than I can imagine"; especially when such a tantrum is being thrown about the importance of unfettered access to features that the DoD doesn't currently use; I'm just not seeing anything in the wars currently available for inspection that suggests the sort of significant break
Re: (Score:2)
We'll see how it goes. Hopefully fewer will suffer rather than more, but wars have this inconvenient feature that they are easy to start and much harder to control and end.
Re: (Score:2)
Current admin wants no guardrails, so par for the (Score:3)
It's pretty clear the "problem" the admin has with Anthropic is that Anthropic wants some minimal (probably very inadequate) guardrails on the use of their AI while the current admin wants nothing at all.
Just let AI do the killing, all is well. (And surveillance, various forms of law-breaking and privacy violation, whatever . . . )
Re: Current admin wants no guardrails, so par for (Score:2)
Presumably the guard rails would protect the military from their own hardware too - is this something they care about?
Re: (Score:3)
You’re already asking more qualified questions than the administration.
Re: (Score:2)
OpenAI head of robotics announced their resignatio (Score:2)
Were there two of them, or are you referring to the corporeal Kalinowski and their virtual avatar?
As if Sam could care less (Score:2)
A 1000 new candidates who could also care less based on the salary proposal.
\o/ (Score:2)
Despite our statement, our head of robotics is not convinced.
Respect???? (Score:2)
How the frig can anyone have respect for Little Sammy? I'd like to say he lies like Trump does but that's a pretty high bar.
Re: (Score:2)
Bugger, I should've said low bar.
"previously worked at Meta" (Score:2)
And has suddenly discovered "principles"? OK...
six seven (Score:1)
One tries to maintain the impression that one's military is not just the bunch of gung-ho gerrymandering mad bombers they could easily appear to be.
It appears one wastes one's time. Them honkies want robots that kill for em. The whole suite of knock-on effects that follow from this attitude, pertain.
It's been over a long time already folks. nothing left to do but the crying.
At meta? (Score:2)
At meta? It ain't about ethics, or morals.
It's about money.
The so-called "guardrails" (Score:2)
Guardrails... does a semi with a double trailer (whatever you wanna call it) obey the rails on the side of the highway 100% of the time?
Those are literally guides... it's totally possible for a semi to go beyond them... why would an LLM-based AI be any different?
The proper thing to do is _do not hook it up to anything important (or critical)_... or just delete it, and do things yourself.
"I have deep respect for Sam" (Score:2)
And what part of her principles allowed her to work in the first place for a closed-source for-profit company that was sucked out of an open-source non-profit ?
No guardrails, only MAXIMUM LETHALITY (Score:2)
that is what you want, right, Hegseth?
They is dumb as a bag of rocks, them is (Score:2)
Anyone who is genuinely confused about whether they are ONE person or MANY shouldn't be in charge of anything, much less AI robotics intended for use by the military. What nonsense. Good riddance.
they (Score:2)
They is a lesbian activist. Openai should be happy to be rid of her.