AI Robotics The Military

OpenAI's Head of Robotics Resigns, Says Pentagon Deal Was 'Rushed Without the Guardrails Defined' (engadget.com) 56

In a tweet that's been viewed 1.3 million times in the last six hours, OpenAI's head of robotics announced their resignation. They said they "care deeply about the Robotics team and the work we built together," so this "wasn't an easy call," but offered this reason for resigning: AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.

This was about principle, not people. I have deep respect for Sam and the team, and I'm proud of what we built together.

"To be clear, my issue is that the announcement was rushed without the guardrails defined," explains a later tweet. "It's a governance concern first and foremost. These are too important for deals or announcements to be rushed." And when asked how many OpenAI employees had left after OpenAI signed their new Pentagon deal, the roboticist said... "I can't share any internal details."

The roboticist previously worked at Meta before leaving to join OpenAI in late 2024, reports Engadget: OpenAI confirmed Kalinowski's resignation and said in a statement to Engadget that the company understands people have "strong views" about these issues and will continue to engage in discussions with relevant parties. The company also explained in the statement that it doesn't support the issues that Kalinowski brought up. "We believe our agreement with the Pentagon creates a workable path for responsible national security uses of AI while making clear our red lines: no domestic surveillance and no autonomous weapons," the OpenAI statement read.


  • ...With someone less principled. That'll teach 'em.
  • At least someone learned the lessons from the Terminator franchise and decided not to build skynet. No amount of money can save you from Arnold or the T-1000 robot coming back in time to kill you before you enable the robot apocalypse.
    • The T-800's job was to kill John so he couldn't lead the resistance, which led Kyle to go back and save him (and inadvertently father him). This is a causality paradox.

      You can't learn lessons from causality paradoxes, my friend.
  • no shit (Score:3, Interesting)

    by Anonymous Coward on Saturday March 07, 2026 @06:32PM (#66028610)

    Sam Altman is so fucking dumb. The most important resource he has is his employees and he didn't think a good chunk of them would have ethical objections to building killbots and spying on the entire US population? It's not as if there are no jobs available for those people.

    I know that greed is in Sam's blood and he got to where he is mainly by greed but come on man, have some common sense.

    • Arguably greed and an ability to persuade others are more important than anything - even the product.

    • by Junta ( 36770 )

      The most important resource is that everyone believes OpenAI is 'the' thing. People still seem to be using 'ChatGPT' as the default thing to say, even though it's arguably the least useful of the major LLMs now.

      Of course, the bad press of swooping in to take a relative pittance of government money, after it was made very public that Anthropic was on the outs for taking something that looked like a principled stand, is more damaging than anything.

  • by fuzzyfuzzyfungus ( 1223518 ) on Saturday March 07, 2026 @06:36PM (#66028630) Journal
    If a plan is enough to alarm someone who worked for Facebook and somehow respects Sam Altman, it seems fair to assume that it's a really dire plan.
    • The wars out front should have told you that much.

      • I wish it were so; but I'm not so sure. The wars we've seen so far have been exceptionally dumb and impulsive, even by the low standards of something like Bush Jr., which is saying something; but they don't seem notably different in their brutality vs. every other "lie about the urgency that brings us here; be all surprised Pikachu when our 'smart' weapons mulch a bunch of civilians and offer platitudes about how careful we thought we were being" exercise back to at least Gulf War 1. The 'just declare that every
        • but don't seem notably different in their brutality

          The wars are just beginning. You haven't seen anything yet.

          • That was why I specified "the wars we've seen so far" and specifically called out yet worse future aspirations as the likely point of the concern.

            I have absolutely zero reason to suspect that the goals are anything less than "worse than I can imagine"; especially when such a tantrum is being thrown about the importance of unfettered access to features that the DoD doesn't currently use; I'm just not seeing anything in the wars currently available for inspection that suggests the sort of significant break
            • We'll see how it goes. Hopefully fewer will suffer rather than more, but wars have this inconvenient feature that they are easy to start and much harder to control and end.

    • Excellent call, my thoughts exactly. If the guy with a track record of broken moral compasses finds things are going too far, perhaps they really are going too far.
  • by flug ( 589009 ) on Saturday March 07, 2026 @06:37PM (#66028632)

    It's pretty clear the "problem" the admin has with Anthropic is that Anthropic wants some minimal (probably very inadequate) guardrails on the use of their AI while the current admin wants nothing at all.

    Just let AI do the killing, all is well. (And surveillance, various forms of law-breaking and privacy violation, whatever . . . )

  • > In a tweet that's been viewed 1.3 million times in the last six hours, OpenAI's head of robotics announced their resignation.

    Were there two of them, or are you referring to the corporeal Kalinowski and their/them virtual avatar?
  • A thousand new candidates who couldn't care less, based on the salary proposal.

  • We believe our agreement with the Pentagon creates a workable path for responsible national security uses of AI while making clear our red lines: no domestic surveillance and no autonomous weapons,

    Despite our statement, our head of robotics is not convinced.

  • How the frig can anyone have respect for Little Sammy? I'd like to say he lies like Trump does but that's a pretty high bar.

  • And has suddenly discovered "principles"? OK...

  • One tries to maintain the impression that one's military is not just the bunch of gung-ho gerrymandering mad bombers they could easily appear to be.

    It appears one wastes one's time. Them honkies want robots that kill for 'em. The whole suite of knock-on effects that follow from this attitude pertains.

    It's been over for a long time already, folks. Nothing left to do but the crying.

  • At meta? It ain't about ethics, or morals.

    It's about money.

  • Guardrails... does a semi with a double trailer/2 trailers (whatever you wanna call it) obey the rails on the side of the highway/freeway 100% of the time?
    Those are literally guides... it's totally possible to go beyond them in a semi... why is an LLM-AI any different?
    The proper thing to do is _do not hook it up to anything important (or critical)_... or just delete it, and do things yourself.

  • Anyone who still has "deep respect" for Sam Altman can claim no moral high ground.

    And what part of her principles allowed her to work in the first place for a closed-source, for-profit company that was sucked out of an open-source non-profit?
  • that is what you want, right, Hegseth?

  • Anyone who is genuinely confused about whether they are ONE person or MANY shouldn't be in charge of anything, much less AI robotics intended for use by the military. What nonsense. Good riddance.

  • They is a lesbian activist. Openai should be happy to be rid of her.
