
Sam Altman Answers Questions on X.com About Pentagon Deal, Threats to Anthropic (x.com)

Saturday afternoon Sam Altman announced he'd start answering questions on X.com about OpenAI's work with America's Department of War and all the developments over the past few days. (After that department's negotiations with Anthropic had failed, it announced it would stop using Anthropic's technology and threatened to designate the company a "Supply-Chain Risk to National Security". It then reached a deal for OpenAI's technology, though Altman says the deal includes OpenAI's own similar prohibitions against using its products for domestic mass surveillance and requires "human responsibility" for the use of force in autonomous weapon systems.)

Altman said Saturday that enforcing that "Supply-Chain Risk" designation on Anthropic "would be very bad for our industry and our country, and obviously their company. We said [that] to the Department of War before and after. We said that part of the reason we were willing to do this quickly was in the hopes of de-escalation.... We should all care very much about the precedent... To say it very clearly: I think this is a very bad decision from the Department of War and I hope they reverse it. If we take heat for strongly criticizing it, so be it."

Altman also said that for a long time, OpenAI was planning to do "non-classified work only," but this week found the Department of War "flexible on what we needed..." Sam Altman: The reason for rushing is an attempt to de-escalate the situation. I think the current path things are on is dangerous for Anthropic, healthy competition, and the U.S. We negotiated to make sure similar terms would be offered to all other AI labs.

I know what it's like to feel backed into a corner, and I think it's worth some empathy to the Department of War. They are... a very dedicated group of people with, as I mentioned, an extremely important mission. I cannot imagine doing their work. Our industry tells them "The technology we are building is going to be the high order bit in geopolitical conflict. China is rushing ahead. You are very behind." And then we say "But we won't help you, and we think you are kind of evil." I don't think I'd react great in that situation. I do not believe unelected leaders of private companies should have as much power as our democratically elected government. But I do think we need to help them.

Question: Are you worried at all about the potential for things to go really south during a possible dispute over what's legal or not later on and be deemed a supply chain risk...?

Sam Altman: Yes, I am. If we have to take on that fight we will, but it clearly exposes us to some risk. I am still very hopeful this is going to get resolved, and part of why we wanted to act fast was to help increase the chances of that...

Question: Why the rush to sign the deal? Obviously the optics don't look great.

Sam Altman: It was definitely rushed, and the optics don't look good. We really wanted to de-escalate things, and we thought the deal on offer was good.

If we are right and this does lead to a de-escalation between the Department of War and the industry, we will look like geniuses, and like a company that took on a lot of pain to help the industry. If not, we will continue to be characterized as rushed and uncareful. I don't know where it's going to land, but I have already seen promising signs. I think a good relationship between the government and the companies developing this technology is critical over the next couple of years...

Question: What was the core difference why you think the Department of War accepted OpenAI but not Anthropic?

Sam Altman: [...] We believe in a layered approach to safety: building a safety stack, deploying FDEs [embedded Forward Deployed Engineers] and having our safety and alignment researchers involved, deploying via cloud, working directly with the Department of War. Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with. We feel that it's very important to build safe systems, and although documents are also important, I'd clearly rather rely on technical safeguards if I only had to pick one...

I think Anthropic may have wanted more operational control than we did...

Question: Were the terms that you accepted the same ones Anthropic rejected?

Sam Altman: No, we had some different ones. But our terms would now be available to them (and others) if they wanted.

Question: Will you turn off the tool if they violate the rules?

Sam Altman: Yes, we will turn it off in that very unlikely event, but we believe the U.S. government is an institution that does its best to follow law and policy. What we won't do is turn it off because we disagree with a particular (legal military) decision. We trust their authority.

Questions were also answered by OpenAI's head of National Security Partnerships (who at one point posted that they'd managed the White House response to the Snowden disclosures and helped write the post-Snowden policies constraining surveillance during the Obama years.) And they stressed that with OpenAI's deal with Department of War, "We control how we train the models and what types of requests the models refuse." Question: Are employees allowed to opt out of working on Department of War-related projects?

Answer: We won't ask employees to support Department of War-related projects if they don't want to.

Question: How much is the deal worth?

Answer: It's a few million $, completely inconsequential compared to our $20B+ in revenue, and definitely not worth the cost of a PR blowup. We're doing it because it's the right thing to do for the country, at great cost to ourselves, not because of revenue impact...

Question: Can you explicitly state which specific technical safeguard OpenAI has that allowed you to sign what Anthropic called a 'threat to democratic values'?

Answer: We think the deal we made has more guardrails than any previous agreement for classified AI deployments, including Anthropic's. Other AI labs (including Anthropic) have reduced or removed their safety guardrails and relied on usage policies as their primary safeguards in national security deployments. Usage policies, on their own, are not a guarantee of anything. Any responsible deployment of AI in classified environments should involve layered safeguards including a prudent safety stack, limits on deployment architecture, and the direct involvement of AI experts in consequential AI use cases. These are the terms we negotiated in our contract.

They also detailed OpenAI's position on LinkedIn: Deployment architecture matters more than contract language. Our contract limits our deployment to cloud API. Autonomous systems require inference at the edge. By limiting our deployment to cloud API, we can ensure that our models cannot be integrated directly into weapons systems, sensors, or other operational hardware...

Instead of hoping contract language will be enough, our contract allows us to embed forward deployed engineers, commits to giving us visibility into how models are being used, and we have the ability to iterate on safety safeguards over time. If our team sees that our models aren't refusing queries they should, or there's more operational risk than we expected, our contract allows us to make modifications at our discretion. This gives us far more influence over outcomes (and insight into possible abuse) than a static contract provision ever could.

U.S. law already constrains the worst outcomes. We accepted the "all lawful uses" language proposed by the Department, but required them to define the laws that constrained them on surveillance and autonomy directly in the contract. And because laws can change, having this codified in the contract protects against changes in law or policy that we can't anticipate.


  • tl;dr (Score:5, Informative)

    by 93 Escort Wagon ( 326346 ) on Saturday February 28, 2026 @10:43PM (#66016436)

    Sam Altman is a weasel. A very rich weasel.

    • Re:tl;dr (Score:5, Informative)

      by TheMiddleRoad ( 1153113 ) on Sunday March 01, 2026 @12:44AM (#66016506)
      Truth. And no surprise he's spewing his bullshit on X, the village commons that a rich weasel handed over to the Nazis.
      • by jhoegl ( 638955 )
        "Fund your own personal AI based murder squad!"
  • by butt0nm4n ( 1736412 ) on Saturday February 28, 2026 @11:03PM (#66016444)

    It's being used for crime, child porn, scams, disinformation.

    Its strength is general purpose and that's its weakness.

    In that respect it has the same devilish appeal of Social Media and Crypto. It's a good looking snake.

    You could say Altman's naive if he believes he can take on Uncle Sam. Mendacious is more likely. He's burning venture capital and making lots of unfounded promises that are just not bearing out in reality and producing low quality, industrialized content. We see this everywhere that hand made items are valued more than mass produced, and Gen AI is content mass production.

    LLMs are a really useful tool for augmenting work, but they're unreliable; the flaw in the kernel is that they depend on data from unreliable, inconsistent humans. We frequently lie, exaggerate, make mistakes and call truth opinions and belief. LLM generated content compounds our errors.

    "Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should." Dr. Ian Malcolm (Jeff Goldblum), Jurassic Park

    • "Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should." Dr. Ian Malcolm (Jeff Goldblum), Jurassic Park

      Spend billions of dollars of venture capital money to build the world's biggest chatbot? That is a definite yes. It's almost as great as using Libya uranium to make a bomb case filled with used pinball machine parts.

  • hmmm, (Score:5, Insightful)

    by Nicholas Grayhame ( 10502767 ) on Saturday February 28, 2026 @11:18PM (#66016448)

    Sam Altman: ... we believe the U.S. government is an institution that does its best to follow law and policy.

    Santa Claus is real too.

  • by hambone142 ( 2551854 ) on Saturday February 28, 2026 @11:27PM (#66016458)

    ChatGPT is on the verge of bankruptcy. They must still secure literally billions of dollars to keep from going bankrupt and that is unlikely to happen. Google is going to eat their lunch with their AI offering and Google is definitely financially-secure. Altman mentioned that eventually, they'll "ask ChatGPT how to turn a profit" some time in the future. It's going to be interesting.

    • You must have missed the part where OpenAI is a defense contractor now. Name one big defense contractor that went bankrupt. I'll be holding my breath while I wait.
      • by Martin Blank ( 154261 ) on Sunday March 01, 2026 @02:06AM (#66016534) Homepage Journal

        Grumman and McDonnell Douglas were saved from bankruptcy by mergers. It is very likely that other companies like Martin Marietta would have gone bankrupt post-Cold War save for mergers. Of around 50 previous major contractors, only five major defense contractors were left. OpenAI may not go bankrupt, but that doesn't mean its independent future is secure.

        OpenAI is already facing serious headwinds. Its 2025 revenue was only $13 billion, but it expects 2030 revenue to be around $280 billion. Two years ago, it expected to invest $1.3 trillion in data centers, hardware, and model training, but a few weeks ago, that was cut to $600 billion. It's losing money on most of its subscriptions, even the $200 Pro level. Its early technology edge is fading, with Anthropic and Google competing for the top spot. It had to push out ChatGPT 5.2 earlier than planned, and that wasn't much of an upgrade over 5.1. They're still by far the most popular AI brand, but that doesn't mean permanent success.

        • That plus Anthropic being a supply chain risk now. It puts them in Lizzo's enviable position of being "Too big to fail" IMHO.
        • Grumman and McDonnell Douglas were saved from bankruptcy by mergers.

          Based on what the execs did to Boeing after they got there, I think it would have been better to let them go under.

    • .. and suddenly Altman has the chance to suck on the teat of the Military-Industrial Complex in order to secure never-ending funding for his unprofitable sham, so of course he's taking it.

  • For normal people (Score:5, Informative)

    by karmawarrior ( 311177 ) on Saturday February 28, 2026 @11:30PM (#66016462) Journal

    1. The "Department of War" given in the summary is actually referring to the Department of Defense. The term "Department of War" is a nickname given by the Trump administration; it has no legal status.

    2. One way Musk dealt with the fact his thinned down Twitter development group was no longer able to maintain a scalable system was to ban people who don't have accounts with X from viewing threads. This reduces the load on X's servers slightly. Fortunately, third parties have filled the gap. You can read the thread here [xcancel.com].

    You're welcome.

  • ...and Sam Altman and OpenAI just strode confidently through that door and onto the stage
  • by magnetar513 ( 1384317 ) on Saturday February 28, 2026 @11:41PM (#66016470)
    "...but we believe the U.S. government is an institution that does its best to follow law and policy. "
  • by dsgrntlxmply ( 610492 ) on Saturday February 28, 2026 @11:53PM (#66016484)
    Camp follower follows camp. Film at 11.
  • Collaborator (Score:5, Insightful)

    by Rosco P. Coltrane ( 209368 ) on Sunday March 01, 2026 @03:25AM (#66016572)

    We said [that] to the Department of War

    It's Department of Defense, you fucking fascist collaborator.

    • by iNaya ( 1049686 )

      Sorry, but, the US has spent much more on aggressive actions than any sort of defense since WWII. DoW is more accurate, though I wish they would behave more like a defense force.

      Denying reality and thinking it really is about defense is more in line with being a fascist collaborator, in my opinion.

      • Explain, in detail and with citations, where the official name of that department was changed and through which legal mechanism.
      • > Denying reality and thinking it really is about defense...

        Where did you get that nonsense from???
  • by Computershack ( 1143409 ) on Sunday March 01, 2026 @05:06AM (#66016618)

    Yes, we will turn it off in that very unlikely event, but we believe the U.S. government is an institution that does its best to follow law and policy.

    Clearly he hasn't been following the news for the last 14 months. Trump started breaking the law on day 1. ICE has followed suit growing more emboldened the more Trump and his lackeys have defended their illegal arrests and murder of US citizens.

  • by marcle ( 1575627 ) on Sunday March 01, 2026 @11:40AM (#66016830)

    Altman, like Musk, will say whatever it profits him to say. In fact, this seems to be true about political leaders too. For that reason, whenever I see his name in a headline, I immediately skip to the next article.

  • You've certainly stretched the definition of summary with The Fine Summary for this article...

    To paraphrase a line from The Simpsons Movie, "I come here to comment, not to read!"

  • How much was the bribe? $25M? More?
  • The guy just looks and acts like Samuel Sterns from the hulk movies. Seemingly benevolent but secretly doing all of this very dangerous research without any safeguards.
