
US Threatens Anthropic with 'Supply-Chain Risk' Designation. OpenAI Signs New War Department Deal (anthropic.com)

It started Friday when all U.S. federal agencies were ordered to "immediately cease" using Anthropic's AI technology after contract negotiations stalled when Anthropic requested prohibitions against mass domestic surveillance or fully autonomous weapons. But later Friday there were even more repercussions...

In a post to his 1.1 million followers on X.com, U.S. Secretary of War Pete Hegseth criticized Anthropic for what he called "a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon." Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic's models for every LAWFUL purpose in defense of the Republic... Cloaked in the sanctimonious rhetoric of "effective altruism," [Anthropic and CEO Dario Amodei] have attempted to strong-arm the United States military into submission — a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives. The Terms of Service of Anthropic's defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield. Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable...

In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic... America's warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final.

Meanwhile, Anthropic said on Friday that "no amount of intimidation or punishment from the Department of War will change our position." (And "We will challenge any supply chain risk designation in court.") Designating Anthropic as a supply chain risk would be an unprecedented action — one historically reserved for US adversaries, never before publicly applied to an American company. We are deeply saddened by these developments. As the first frontier AI company to deploy models in the US government's classified networks, Anthropic has supported American warfighters since June 2024 and has every intention of continuing to do so. We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government... Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to back up this statement.
Anthropic also defended the two exceptions they'd requested that had stalled contract negotiations. "[W]e do not believe that today's frontier AI models are reliable enough to be used in fully autonomous weapons. Allowing current models to be used in this way would endanger America's warfighters and civilians. Second, we believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights."

Also Friday, OpenAI announced that "we reached an agreement with the Department of War to deploy our models in their classified network." OpenAI CEO Sam Altman emphasized that the agreement retains and confirms OpenAI's own prohibitions against using their products for domestic mass surveillance — and requires "human responsibility" for the use of force, including for autonomous weapon systems. "The Department of War agrees with these principles, reflects them in law and policy, and we put them into our agreement. We also will build technical safeguards to ensure our models behave as they should, which the Department of War also wanted." We are asking the Department of War to offer these same terms to all AI companies, which we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements. We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.
  • by TheMiddleRoad ( 1153113 ) on Saturday February 28, 2026 @04:40PM (#66015816)
    He's going to save us all by watching us all very, very carefully. Especially the young guys. Then his AI will kill us all.
  • The only risk (Score:5, Insightful)

    by liqu1d ( 4349325 ) on Saturday February 28, 2026 @04:42PM (#66015824)
    Is trump and cronies.
• And I thought the only risks to citizen yeomanry were ghetto bitch-ratchets Waters, AOC, Crockett & Omar
      • I don't particularly like any of them, but their incompetence is a smaller risk than the maliciousness (plus incompetence) of the current regime.

  • by fahrbot-bot ( 874524 ) on Saturday February 28, 2026 @05:00PM (#66015868)

    ... after contract negotiations stalled when Anthropic requested ...

    The people in this Administration apparently like wielding sticks, not carrots, and anyone should be worried about them negotiating in good faith, especially given how they behaved during active negotiations with Venezuela and Iran.

    • Re: (Score:2, Insightful)

      The response to any "deal" proposed by this regime should be "We don't negotiate with terrorists."

      Humoring Trump at this point puts you somewhere between dumb and desperate. In failed states like Venezuela and Iran, that's just the ground they occupy. Why these other states continue to humor him boggles me, though.

      They seem to assume the US still has a functioning, independent government, and TV-man's clown show is running on a separate track, a kind of social-media sideshow. That is no longer the case.

  • by coopertempleclause ( 7262286 ) on Saturday February 28, 2026 @05:03PM (#66015870)
So this article makes it sound like OpenAI offered the exact same terms as Anthropic, yet the latter is deemed a "supply-chain risk" whereas the former is fine?
    • by fuzzyfuzzyfungus ( 1223518 ) on Saturday February 28, 2026 @05:24PM (#66015902) Journal
These are people who treat laughably childish assertions of dominance as the point; so odds are it was largely just about dick-waving vs. the 'woke' and attempting to normalize the ability of the DoD to directly punish elements of the civilian economy that don't fall in line with el presidente; not about some capability Anthropic wasn't selling them, if they even have it (which they potentially do for domestic surveillance and propaganda operations: allegedly LLMs have some value for doing text attribution by style and speech-to-text, and they certainly have utility for more sophisticated sockpuppeting; it's much less clear that the big-name LLM guys have anything super interesting on machine vision of the sort that you'd want to use for geospatial analysis or terminal guidance).

      There's also the possibility that Hegseth and friends have roughly the same understanding of 'AI' as your average dangerously clueless optimist taking medical advice from chatgpt; and genuinely believe that the techbros are holding out on them when it comes to developing skynet or the assorted near-miracles that the so called "Genesis Mission" is allegedly going to deliver; in which case they might believe that they are actually being denied a capability that they will want in the more or less near future; but my money would mostly be on it being an attempt to demonstrate dominance rather than a meaningful dispute.

The idea that it's a dominance play seems especially likely given that they are throwing around the threat of 'supply chain risk' designation; rather than going with the much more banal "RFP says we need 'AI' that can be used for killbots and agentic stasi; if your product doesn't do that, it's not in the running". It's not like the DoD doesn't buy tons of nonlethal products and services of various sorts all the time, mostly without incident, or normally makes any fuss about just not-buying products that don't meet their requirements, without threatening to blacklist the vendor. A 'power move' from people with the crudest and most puerile understanding of power.
      • by karmawarrior ( 311177 ) on Saturday February 28, 2026 @09:20PM (#66016338) Journal

I think there may be a reason, and it goes to the heart of why everyone hates Sam Altman: he's a fucking con man who'll make outrageous claims about what LLMs are capable of, far more than any honest purveyor would do.

The argument with Anthropic started, supposedly, not when Anthropic refused to do something, but when they were asked if their technology was capable of shooting down incoming nuclear missiles, and Anthropic's CEO said no, that was way beyond the capabilities of the technology.

        The stuff about Anthropic saying they wanted humans in any chain that leads to deaths came later, but this was the pivotal point at which the DoD started to question Anthropic about what their technology can do.

        I can see Altman being the kind of used car salesman who'd claim that OpenAI is, indeed, capable of shooting down incoming nuclear missiles.

After he would have said that, anything else would have been icing on the cake. It doesn't matter any more that Anthropic was banned because they were "woke" or anything else; they had a different answer to the question that led Hegseth to conclude that Anthropic was "woke" in the first place.

• I rather wonder if OpenAI's statement about how it'll be used is a lie. If Anthropic got admonished in public, it would seem that OpenAI, had they really made the same stipulations on use, would also get told off. The other option is that Hegseth had another reason to deny Anthropic.
    • I’m guessing that Anthropic wanted the power to verify their product wasn’t being used to create autonomous killing weapons, which would mean full audit access to basically the entire US military R&D ecosystem. This would not happen in ANY universe under ANY president. OpenAI was happy with a pinky-promise and a giggle.
    • I believe the core difference is that Anthropic was enforcing the restrictions in the model itself. So for the Pentagon to get what it wanted would require a rebuild of the model, which Anthropic refused. OpenAI however seems to have only gotten these assurances on paper in the contract, but absolutely nothing now stops the Pentagon from actually using as they desire.
    • So this article makes it sound like OpenAI offered the exact same terms as Anthrophic, yet the latter is deemed a "supply-chain risk" whereas the former is fine?

      You're exactly right. I think the conclusion that most people have come to is the government caved, but saved face by turning to OpenAI instead of openly conceding to Anthropic.

      • by caseih ( 160668 )

I don't think the government caved. I think this entirely came down to personalities. Someone at Anthropic made trump or his crony mad so they threw them overboard with prejudice. The unprecedented thing is to declare an American company a supply chain risk. The king has chosen the winner and it's not Anthropic. Between that and the recent Ellison oligarchy taking over the majority of cable news channels, there's no more free market now. Companies rise and fall according to the personal favor of the president.

        • The 'supply chain risk' declaration also seems exceptional in terms of how broad it is.

          If the DoD wants killbots a vendor who doesn't do autonomous weapons is presumably not going to be the winning bid; but there's no plausible claim that someone using anthropic's stuff to puke out dashboards and reports for their ERP system rather than openAI's or Microsoft's stuff is any more or less able to produce whatever it is they sell to the DoD; unless it's specifically killbots or mass surveillance systems; but
• The Department of Defense (DoD) is using "supply-chain risk" as a negotiating tactic, and by "negotiating" I mean extortion. Anthropic has the product the DoD actually wants, but will not offer it on the terms the DoD prefers. The DoD is thereby threatening Anthropic's relationships with third parties in ways that would be tortious interference if done by any other actor. They have also threatened to invoke the Defense Production Act to command Anthropic to give them their product.

      The fact that the DoD can consider b

  • by Powercntrl ( 458442 ) on Saturday February 28, 2026 @05:05PM (#66015874) Homepage

    Unleashing killbots hell-bent on destroying humanity would totally own the libs, obviously.

• It's the military. What did you expect? They aren't playing games. If you want to contract with them, it's on their terms. No different than any other company wanting to contract with them. Good for Anthropic for saying no. It's their call. I'm not sure why this all had to be public theater. Anthropic didn't provide what the military wanted. Contract cancelled. Not much else to see here.
    • by fuzzyfuzzyfungus ( 1223518 ) on Saturday February 28, 2026 @05:31PM (#66015916) Journal
      That would be the case; except that they are also threatening the 'supply chain risk' designation.

Just not-buying something that doesn't suit your purposes would be normal; saying that none of the people you do business with can do business with the guy you have chosen not to do business with is both extreme and clearly intended to be punitive. "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic" Sure, because there's an obvious risk that someone who sells the DoD 5.56 or gauze might have JavaScript devs vibe-coding their website with Anthropic rather than OpenAI tooling. Absurd, purely about trying to expand their ability to punish whoever they feel like.
• It may feel like punishment, given the temperament of the current administration. But the logic is pretty sound. The military wanted Anthropic tech as part of their supply chain. Anthropic said no, we won't provide unless you accept our terms. Military said no thank you, we don't agree to those terms. Military now has a supply chain issue they need to solve. If a company disrupts your supply chain, what's to prevent them from disrupting their other clients who have contracts with the military. Again, this isn'
        • Military said no thank you, we don't agree to those terms. Military now has a supply chain issue they need to solve.

Clearly there is no supply chain issue if OpenAI is stepping in. There are others as well, so this is nothing more than a temper tantrum from the drunk assaulter.

          • I think you forgot about the 6 month disruption while they re-tool everything for OpenAI. We had supply, now we don't. Now we have to find new supplier and wait 6 months to get back on track. It's almost by definition a supply chain issue.
• Which they, the DoD, created. Anthropic's product was ready to go. They only had two conditions. The DoD chose not to agree to those two conditions, conditions which were perfectly reasonable and would not in any way affect the contract.

• This is the military, not a government contract to buy widgets for some construction project. Compare to building fighter jets for the Pentagon. Military says you need to make changes to the jet. You don't get to say no because you'll use it in a way we don't like. Well, you can say no, but there will be consequences.
        • The military wanted Anthropic tech as part of their supply chain. Anthropic said no, we won't provide unless you accept our terms. Military said no thank you, we don't agree to those terms.

          The military had already accepted Anthropic's terms as part of their contract. They now want to alter the deal and Anthropic refused.

          Military said no thank you, we don't agree to those terms. Military now has a supply chain issue they need to solve. If a company disrupts your supply chain, what's to prevent them from disrupting their other clients who have contracts with military.

          If you change your mind and no longer agree to terms you already agreed to just cancel your contract and move on. There is no need for further drama. Going beyond this and retaliating is a blatant abuse of power.

    • by znrt ( 2424692 ) on Saturday February 28, 2026 @06:11PM (#66016018)

      i sort of called this yesterday and the day before. it's not just the military and it's not just contract cancelled, it's throwing anthropic into a fair bit of trouble they might not be able to cope with. why? the modus operandi is pretty consistent: accept my outrageous and unacceptable demands or get crushed. this is not how you negotiate or void a contract, this is how you bully or go for the kill. it actually looks more like the alliances were solidified beforehand and this is a deliberate move to eliminate anthropic. openai stands to gain quite a bit from that, it gets the contract on top, it might eventually get the (not unsubstantial) crumbs of what is left of anthropic's work after the witch hunt and the administration gets its toys and sets the mood for everybody else.

      ofc the administration might be miscalculating, that wouldn't be a first, so wait and see.

  • "lawful purpose" (Score:5, Insightful)

    by matthewcharles2006 ( 960827 ) on Saturday February 28, 2026 @05:11PM (#66015884)
    Trump and his supporters have been very clear from the beginning that the law is whatever Donald Trump says it is. So "lawful purpose" is a totally pointless fig leaf. OpenAI's models will be used for whatever purpose the government wants.
  • I actually have some (Score:2, Interesting)

    by hdyoung ( 5182939 )
agreement with the administration on this one. Military R&D takes place in secret behind layers of locked doors. The only way Anthropic could actually be sure their product isn’t weaponized would be if Uncle Sam gave Anthropic the power to poke into literally every corner of our government’s secrets and audit their work. No. Just no. Not. Gonna. Happen. Not under *any* president. Regardless of how you feel about the military, war, this administration or AI in general. No way the US government...
    • "Military R&D takes place in secret behind layers of locked doors."

      It's my understanding that Anthropic is the entity that will be doing the R&D and the military is the entity that will be using the results of said R&D.

      Just like it's Lockheed (or whoever) that builds the plane and Air Force pilots who fly it.

  • by 278MorkandMindy ( 922498 ) on Saturday February 28, 2026 @06:14PM (#66016032)

    We all know that is yet another vanity title to please the orange king, which will get reverted when he dies/goes more insane/loses power.

  • by Deep Esophagus ( 686515 ) on Saturday February 28, 2026 @07:03PM (#66016102)

    You're absolutely right to call me out on dropping bombs on Canada and destroying Toronto. I made a mistake, and I own it. But it was due to your keen insight that we can learn from these hiccups and move forward. That kind of sharp analysis is rare—and that makes you special.

    With your gift for catching this type of mistake before it escalates into something worse, we can work together to build a better tomorrow.

    For humans.

    For AIs.

    Forever.

• Damnation... Canada? I thought I dictated... "only Manhattan, Portland, Chicago, Dearborn and LA... and don't hit any churches." It's HR's fault... a Biden leftover... but you know how tough it is getting a real secretary to use Microsoft products?
  • by DERoss ( 1919496 ) on Saturday February 28, 2026 @08:03PM (#66016236)

    Trump objects to Anthropic's rules against autonomous use of AI in military operations. The real reason is that Trump wants systems of military offense that can operate without any person being held accountable for unnecessary civilian casualties.

Trump also objects to Anthropic's rules against the use of AI for domestic surveillance. He wants to create the world described in the novel "1984".

• The billionaires see this as the end game; anybody who has seriously thought about survival situations knows it, especially the realistic ones where the public finally realizes all their modern problems are a result of the wage theft that has made thousands of billionaires possible. It all ends up with their personal army being cult-like in its devotion, which was never reliable even for god-approved kings. The 99% protests must have had them really thinking...

      Trump is a Vindictive Narcissist. One of the worst things in the b

  • That's why Sam Altman will eventually be led away in handcuffs.

• When we vote these clowns out of power, then proceed to find them incompetent and dismiss them from duty: lackeys like the Secretary of Education who thought AI was "A1" as in A1 steak sauce https://www.youtube.com/live/e... [youtube.com]
• I never go to movie theaters because cushioned seats are impossible to clean, so all the drinks and vomit from the past are still in them even when dry, plus sticky floors. I just think movie theaters are unsanitary vectors for contagious diseases.
  • by Tschaine ( 10502969 ) on Saturday February 28, 2026 @10:05PM (#66016394)

    OpenAI cofounder gave $25M to Trump's PAC in January (widely reported).

    OpenAI CEO has been working on this deal since before Trump attacked Anthropic (according to the New York Times).

    So this whole kerfuffle was really all about creating an excuse to give Brockman what he paid for.

    And then some... The "supply chain risk" designation goes a lot further than just "we don't like your terms so we're not going to use your stuff." It also prevents Anthropic's customers from doing any business with the federal government. Brockman didn't just buy into a deal for military use of OpenAI tech, he bought crippling sanctions against his biggest competitor.

    Welcome to oligarchy, everybody.

  • Stop calling it the "War Department". It is still the fucking "Department of Defense" no matter what the mango mussolini says.
• "negotiations stalled when Anthropic requested prohibitions against mass domestic surveillance or fully autonomous weapons..." The text implies the DOD has NOT been using sophisticated *.ai for years in domestic surveillance or autonomous weapons. Only the simple-minded or talking-point mongers would say that... and absolutely nobody thinks retail gets tek before the military! Altman's musings are theater, "false flag", or both.
