US Threatens Anthropic with 'Supply-Chain Risk' Designation. OpenAI Signs New War Department Deal (anthropic.com)
It started Friday when all U.S. federal agencies were ordered to "immediately cease" using Anthropic's AI technology after contract negotiations stalled when Anthropic requested prohibitions against mass domestic surveillance or fully autonomous weapons. But later Friday there were even more repercussions...
In a post to his 1.1 million followers on X.com, U.S. Secretary of War Pete Hegseth criticized Anthropic for what he called "a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon." Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic's models for every LAWFUL purpose in defense of the Republic... Cloaked in the sanctimonious rhetoric of "effective altruism," [Anthropic and CEO Dario Amodei] have attempted to strong-arm the United States military into submission — a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives. The Terms of Service of Anthropic's defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield. Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable...
In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic... America's warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final.
Meanwhile, Anthropic said on Friday that "no amount of intimidation or punishment from the Department of War will change our position." (And "We will challenge any supply chain risk designation in court.") Designating Anthropic as a supply chain risk would be an unprecedented action — one historically reserved for US adversaries, never before publicly applied to an American company. We are deeply saddened by these developments. As the first frontier AI company to deploy models in the US government's classified networks, Anthropic has supported American warfighters since June 2024 and has every intention of continuing to do so. We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government... Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to back up this statement.
Anthropic also defended the two exceptions they'd requested that had stalled contract negotiations. "[W]e do not believe that today's frontier AI models are reliable enough to be used in fully autonomous weapons. Allowing current models to be used in this way would endanger America's warfighters and civilians. Second, we believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights."
Also Friday, OpenAI announced that "we reached an agreement with the Department of War to deploy our models in their classified network." OpenAI CEO Sam Altman emphasized that the agreement retains and confirms OpenAI's own prohibitions against using their products for domestic mass surveillance — and requires "human responsibility" for the use of force, including for autonomous weapon systems. "The Department of War agrees with these principles, reflects them in law and policy, and we put them into our agreement. We also will build technical safeguards to ensure our models behave as they should, which the Department of War also wanted." We are asking the Department of War to offer these same terms to all AI companies, which we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements. We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.
Sam Altman, American Hero! (Score:5, Interesting)
The only risk (Score:5, Insightful)
Re: (Score:1)
Re: (Score:2)
I don't particularly like any of them, but their incompetence is a smaller risk than the maliciousness (plus incompetence) of the current regime.
The Art of the Steal - I mean, Deal. (Score:5, Insightful)
The people in this Administration apparently like wielding sticks, not carrots, and anyone should be worried about them negotiating in good faith, especially given how they behaved during active negotiations with Venezuela and Iran.
Re: (Score:2, Insightful)
The response to any "deal" proposed by this regime should be "We don't negotiate with terrorists."
Humoring Trump at this point puts you somewhere between dumb and desperate. In failed states like Venezuela and Iran, that's just the ground they occupy. Why these other states continue to humor him boggles me, though.
They seem to assume the US still has a functioning, independent government, and TV-man's clown show is running on a separate track, a kind of social-media sideshow. That is no longer the case.
What am I missing here? (Score:5, Interesting)
Re:What am I missing here? (Score:5, Insightful)
There's also the possibility that Hegseth and friends have roughly the same understanding of 'AI' as your average dangerously clueless optimist taking medical advice from chatgpt; and genuinely believe that the techbros are holding out on them when it comes to developing skynet or the assorted near-miracles that the so called "Genesis Mission" is allegedly going to deliver; in which case they might believe that they are actually being denied a capability that they will want in the more or less near future; but my money would mostly be on it being an attempt to demonstrate dominance rather than a meaningful dispute.
The idea that it's a dominance play seems especially likely given that they are throwing around the threat of 'supply chain risk' designation; rather than going with the much more banal "RFP says we need 'AI' that can be used for killbots and agentic stasi, if your product doesn't do that it's not in the running'. It's not like the DoD doesn't buy tons of nonlethal products and services of various sorts all the time, mostly without incident, or normally makes any fuss about just not-buying products that don't meet their requirements; without threatening to blacklist the vendor. A 'power move' from people with the crudest and most puerile understanding of power.
Re:What am I missing here? (Score:5, Insightful)
I think there may be a reason, and it goes to the heart of why everyone hates Sam Altman: he's a fucking con man who'll make outrageous claims about what LLMs are capable of, far more than any honest purveyor would.
The argument with Anthropic started, supposedly, not when Anthropic refused to do something, but when they were asked if their technology was capable of shooting down incoming nuclear missiles, and Anthropic's CEO said no, that was way beyond the capabilities of the technology.
The stuff about Anthropic saying they wanted humans in any chain that leads to deaths came later, but this was the pivotal point at which the DoD started to question Anthropic about what their technology can do.
I can see Altman being the kind of used car salesman who'd claim that OpenAI is, indeed, capable of shooting down incoming nuclear missiles.
After he said that, anything else would have been icing on the cake. It doesn't matter any more that Anthropic was banned for being "woke" or anything else; they gave a different answer to the question, which is what led Hegseth to conclude that Anthropic was "woke" in the first place.
Re: What am I missing here? (Score:3)
Re: (Score:2)
Re: (Score:3)
Re: (Score:3)
So this article makes it sound like OpenAI offered the exact same terms as Anthropic, yet the latter is deemed a "supply-chain risk" whereas the former is fine?
You're exactly right. I think the conclusion that most people have come to is the government caved, but saved face by turning to OpenAI instead of openly conceding to Anthropic.
Re: (Score:2)
I don't think the government caved. I think this entirely came down to personalities. Someone at Anthropic made Trump or his cronies mad, so they threw them overboard with prejudice. The unprecedented thing is to declare an American company a supply chain risk. The king has chosen the winner and it's not Anthropic. Between that and the recent Ellison oligarchy taking over the majority of cable news channels, there's no more free market now. Companies rise and fall according to the personal favor of the p
Re: (Score:2)
If the DoD wants killbots a vendor who doesn't do autonomous weapons is presumably not going to be the winning bid; but there's no plausible claim that someone using anthropic's stuff to puke out dashboards and reports for their ERP system rather than openAI's or Microsoft's stuff is any more or less able to produce whatever it is they sell to the DoD; unless it's specifically killbots or mass surveillance systems; but
Re: (Score:2)
The Department of Defense (DoD) is using "supply-chain risk" as a negotiating tactic, and by "negotiating" I mean extortion. Anthropic has the product the DoD actually wants, but will not offer it on the terms the DoD prefers. The DoD is thereby threatening their relations with third parties in ways that would be tortious interference if done by any other actor. They have also threatened to invoke the Defense Production Act to command Anthropic to give them their product.
The fact that the DoD can consider b
Didn't anyone watch Terminator? (Score:3)
Unleashing killbots hell-bent on destroying humanity would totally own the libs, obviously.
Unwarranted Outrage (Score:2)
Re:Unwarranted Outrage (Score:5, Informative)
Just not-buying something that doesn't suit your purposes would be normal; saying that none of the people you do business with can do business with the guy you have chosen not to do business with is both extreme and clearly intended to be punitive. "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic" Sure, because there's an obvious risk that someone who sells the DoD 5.56 or gauze might have JavaScript devs vibe-coding their website with Anthropic rather than OpenAI tooling. Absurd, purely about trying to expand their ability to punish whoever they feel like.
Re: (Score:1)
Re: (Score:2)
Military said no thank you, we don't agree to those terms. Military now has a supply chain issue they need to solve.
Clearly there is no supply chain issue if OpenAI is stepping in. There are others as well, so this is nothing more than a temper tantrum from the drunk assaulter.
Re: (Score:1)
Re: (Score:2)
Which they, the DoD, created. Anthropic's product was ready to go. They only had two conditions. The DoD chose not to agree to those two conditions, conditions which were perfectly reasonable and would not in any way affect the contract.
Re: Unwarranted Outrage (Score:1)
Re: (Score:2)
The military wanted Anthropic tech as part of their supply chain. Anthropic said no, we won't provide unless you accept our terms. Military said no thank you, we don't agree to those terms.
The military had already accepted Anthropic's terms as part of their contract. They now want to alter the deal and Anthropic refused.
Military said no thank you, we don't agree to those terms. Military now has a supply chain issue they need to solve. If a company disrupts your supply chain, what's to prevent them from disrupting their other clients who have contracts with military.
If you change your mind and no longer agree to terms you already agreed to just cancel your contract and move on. There is no need for further drama. Going beyond this and retaliating is a blatant abuse of power.
Re:Unwarranted Outrage (Score:5, Insightful)
i sort of called this yesterday and the day before. it's not just the military and it's not just contract cancelled, it's throwing anthropic into a fair bit of trouble they might not be able to cope with. why? the modus operandi is pretty consistent: accept my outrageous and unacceptable demands or get crushed. this is not how you negotiate or void a contract, this is how you bully or go for the kill. it actually looks more like the alliances were solidified beforehand and this is a deliberate move to eliminate anthropic. openai stands to gain quite a bit from that, it gets the contract on top, it might eventually get the (not unsubstantial) crumbs of what is left of anthropic's work after the witch hunt and the administration gets its toys and sets the mood for everybody else.
ofc the administration might be miscalculating, that wouldn't be a first, so wait and see.
"lawful purpose" (Score:5, Insightful)
Re: (Score:2)
I actually have some (Score:2, Interesting)
Re: (Score:2)
"Military R&D takes place in secret behind layers of locked doors."
It's my understanding that Anthropic is the entity that will be doing the R&D and the military is the entity that will be using the results of said R&D.
Just like it's Lockheed (or whoever) that builds the plane and Air Force pilots who fly it.
Re: (Score:2)
Please don't call it "the dept of war" (Score:5, Insightful)
We all know that is yet another vanity title to please the orange king, which will get reverted when he dies/goes more insane/loses power.
Re: Please don't call it "the dept of war" (Score:2)
What Could Possibly Go Wrong? (Score:4, Interesting)
You're absolutely right to call me out on dropping bombs on Canada and destroying Toronto. I made a mistake, and I own it. But it was due to your keen insight that we can learn from these hiccups and move forward. That kind of sharp analysis is rare—and that makes you special.
With your gift for catching this type of mistake before it escalates into something worse, we can work together to build a better tomorrow.
For humans.
For AIs.
Forever.
Re: (Score:2)
The Real Reason (Score:3)
Trump objects to Anthropic's rules against autonomous use of AI in military operations. The real reason is that Trump wants systems of military offense that can operate without any person being held accountable for unnecessary civilian casualties.
Trump also objects to Anthropic's rules against the use of AI for domestic surveillance. He wants to create the world described in the novel "1984".
End Game (Score:3)
The billionaires see this as the end game. Anybody who has seriously thought about survival situations — especially the realistic ones, where the public finally realizes all their modern problems are a result of the wage theft that has made thousands of billionaires possible — knows it all ends up depending on their personal army being cult-like in its devotion, which was never reliable even for god-approved kings. The 99% protests must have had them really thinking...
Trump is a Vindictive Narcissist. One of the worst things in the b
Welp, there it is (Score:2)
That's why Sam Altman will eventually be led away in handcuffs.
Anthropic, hang on till November (Score:2)
nope, not me (Score:2)
It's so much worse than that (Score:3)
OpenAI cofounder gave $25M to Trump's PAC in January (widely reported).
OpenAI CEO has been working on this deal since before Trump attacked Anthropic (according to the New York Times).
So this whole kerfuffle was really all about creating an excuse to give Brockman what he paid for.
And then some... The "supply chain risk" designation goes a lot further than just "we don't like your terms so we're not going to use your stuff." It also prevents Anthropic's customers from doing any business with the federal government. Brockman didn't just buy into a deal for military use of OpenAI tech, he bought crippling sanctions against his biggest competitor.
Welcome to oligarchy, everybody.
Stop calling it the "War Department" (Score:1)
years (Score:2)