Anthropic CEO Says AI Company 'Cannot In Good Conscience Accede' To Pentagon (apnews.com) 84
An anonymous reader quotes a report from the Associated Press: Anthropic CEO Dario Amodei said Thursday the artificial intelligence company "cannot in good conscience accede" to the Pentagon's demands to allow wider use of its technology. The maker of the AI chatbot Claude said in a statement that it's not walking away from negotiations, but that new contract language received from the Defense Department "made virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons."
The Pentagon's top spokesman has reiterated that the military wants to use Anthropic's artificial intelligence technology in legal ways and will not let the company dictate any limits ahead of a Friday deadline to agree to its demands. Sean Parnell said Thursday on social media that the Pentagon "has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement."
Anthropic's policies prevent its models, such as its chatbot Claude, from being used for those purposes. It's the last of its peers -- the Pentagon also has contracts with Google, OpenAI and Elon Musk's xAI -- to not supply its technology to a new U.S. military internal network. Parnell said the Pentagon wants to "use Anthropic's model for all lawful purposes" but didn't offer details on what that entailed. He said opening up use of the technology would prevent the company from "jeopardizing critical military operations." "We will not let ANY company dictate the terms regarding how we make operational decisions," he said. In a post on X, Parnell said Anthropic will "have until 5:01 PM ET on Friday to decide. Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk for DOW."
Like the DoD really cares about legality... (Score:5, Insightful)
"...no interest in using AI to conduct mass surveillance of Americans (which is illegal) "
Gee, it's so nice that the government is so law-abiding. I can't think of any past occurrences of the US government conducting illegal, non-consensual mass surveillance of American citizens. Nope, not a single occurrence.
In all seriousness, that's exactly what they'd do. It'd just be PRISM 2.0, but with significantly more misinformation, thanks to hallucinating clankers being at the forefront of Big Brother.
Re: (Score:2, Interesting)
Just imagine how quickly folks would lose faith in AI if they were held responsible for the kidnappings and murders an AI told them to do. Imagine the first time they accidentally wipe out someone related to a rich person, or the first time political winds shift in the middle of a process and they get kited. Imagine if they relied on third-hand rationalizations as justifications for crimes, as a substitute for actual legal advice... but the courts tell them that the AI isn't on trial...
I almost feel sorry for
Re:Like the DoD really cares about legality... (Score:4, Insightful)
And if there's anything LLMs seem to be good at, it's generating results that look pretty plausible. So if you combine high-plausibility narrative generation, a veneer of technological objectivity, and downright servile deference to the official narrative, it would probably be even easier to beat the rap than it is now, when you can at least sometimes put the spotlight on someone clearly and distastefully letting their motives show or acting irrationally.
Re:Like the DoD really cares about legality... (Score:5, Insightful)
What on earth makes anyone think they care about laws?
Re: Like the DoD really cares about legality... (Score:1)
It would be terrifying to see the internal paranoia of someone in the military industrial complex reflected back and amplified by an LLM.
Weapons of Mass Distraction. (Score:1)
..In all seriousness, that's exactly what they'd do. It'd just be PRISM 2.0, but with significantly more misinformation, thanks to hallucinating clankers being at the forefront of Big Brother.
That misinformation is, for once, a feature they are looking for.
The Epstein files dribbling out slower than a 90-year-old man taking a piss. A Clinton suddenly willing to testify, albeit mostly about the 5th Amendment. A government suddenly hinting at confirming UFOs and aliens after decades of Roswell denial and Area 51 secrecy. Watching a Speaker of the House publicly declare Congressional Insider Trading is nothing but a job perk. Electing a wholly and undeniably unqualified minority woman to become
Re:Like the DoD really cares about legality... (Score:5, Insightful)
Maybe, but maybe not. Anthropic was founded by people who left OpenAI over ethical concerns, people who were fully willing to remain a non-profit, after the unethical camp won out. If there's anyone in the industry who might stick to "no, we won't compromise on our ethics no matter what", it's them.
If they do remain ethical no matter what, what might result from that is unclear. If the US government goes nuclear on them, classifying them as a fake "security risk" to force them to lose major clients as punishment, I can see them, as one option, moving to Europe, which is far more focused on ethics than the US and currently lacks a major AI player, and then letting the US government munch on the slow realization that it lost the current best AI tech stack to the very same future major geopolitical adversary it has decided to construct, just because.
Re: (Score:2)
the only countries that do have a track record of standing up to the US are called the axis of evil.
That was the case until 2024, when Europe had no need to stand against the US, as the US was a committed ally of the EU. Since early 2025, alas, the US has stood more and more against the EU, so the EU now sees an urgent need to protect itself from US political, economic, and military aggression.
Re: (Score:2)
And Europe doesn't exactly have a track record of standing up to the US.
It is not about standing up to the US. It is about protecting corporations that want to retain at least some ethics. And punishing [europa.eu] those that don't.
Re: (Score:2)
Everyone else has fallen in line. Anthropic will, too.
Forcing people to be your friend at gunpoint may have the desired effect on the surface, but only if you don't think too deeply.
Re: (Score:3, Interesting)
They are probably thinking about the longer term consequences too. If Trump leaves office, there may be a reckoning. Legal consequences for people who engaged in illegal behaviour.
Re: Like the DoD really cares about legality... (Score:2, Insightful)
Legal consequences? Like the Jan 6 rioters who got their get-out-of-jail-free card? Heck, more than free, because someone wants to pay them for their service.
Re: (Score:2)
That happened because Trump was in office, this is about what happens if Trump is not in office.
Re: (Score:1)
That was Trump. And don't worry, because many of them found themselves right back in jail for committing other crimes (many of them believed, when they were arrested, that being pardoned was a get-out-of-jail-free card for life).
Most of the ICE crew, though, weren't from J6 at all - they are all those right-wing militias that always stroked the 2A because insurrection. Funny enough h
Wow, scary (Score:5, Insightful)
This is scary for two reasons: the government wants all guardrails taken off an unproven technology that can seriously screw things up, and the government seriously thinks "AI" is that good. It's the dumb leading the blind.
Re: Wow, scary (Score:5, Insightful)
It isn't like this is an accidental attitude; that very company has been spamming us with advertising telling us pretty much how infallible they are for some months now.
They should have considered that the decision makers, both in business and government, are morons.
But, just like with the sales to Chinese companies they've been complaining about, common sense can't be allowed to get in the way of making a sale.
Re: (Score:2)
It isn't like this is an accidental attitude; that very company has been spamming us with advertising telling us pretty much how infallible they are for some months now.
That is an accidental attitude. I don't even understand what you're trying to imply here.
The default assumption, by literally everyone, is that if it's in an ad, it's not a statement to be trusted. Ads are *by nature* untrustworthy; they are a biased view meant to get you interested in the product. It's up to the person with the wallet to then do actual research, and they are literally the only person to blame if they trust the ad. If the ads were telling you the limitations of the product, then the pe
You Americans make me laugh (Score:1)
Scary how? Facial recognition is already used here in the UK and has been for a few years:
https://www.gov.uk/government/... [www.gov.uk]
It's no different to what was there before, which was a load of people watching CCTV screens. And guess what - a load of criminals have been nicked because of it. But hey, Freedom to ... [something]! More important, right?
As for AI weapons... err, the military have been using image and other types of recognition for targeting systems for a loooong time. It's going to move to smaller footprint
Re: (Score:2)
Image classification for tracking and terminal guidance is at least 40 years old, and I can personally attest to that. Specifics are, however, covered by the Official Secrets Act. The field of AI covers more than just neural networks.
Re: (Score:1)
People are distracted and don't realize what you said is true. This is more about a corporation being able to dictate things to DOW, which reminds me of Tank Girl. Water and power. They have the water, and they have all the power. A corporation dictates everything, including military use.
The military already has automated targeting and trigger-pulling systems. This is just a bid for corporations to get more power, but people are distracted, thinking these things don't happen already and would only happen if the cor
doublespeak, we're not stupid sean (Score:5, Insightful)
Sean Parnell said Thursday on social media that the Pentagon "has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement."
If that were true, your contract proposal would reflect it, and this wouldn't be a story.
Re: doublespeak, we're not stupid sean (Score:2)
This.
Re: (Score:2)
Doublespeak also works for Anthropic. They cannot in good conscience accede to the Pentagon's demands, but maybe they can in bad conscience? Does a company have a conscience anyway? Does AI have a conscience? If AI does not have a conscience, maybe it can accede to the Pentagon's demands?
Re: (Score:3)
Whatever you think of their honesty, the phrase has a single meaning, which is "we're not going to do it" with the additional pretty obvious inference "because it is unethical".
Your post isn't merely pedantry; it's willful denial of what is ultimately a very clear and unambiguous statement.
Re: (Score:2)
Indeed. They are not even competent liars....
Re: doublespeak, we're not stupid sean (Score:1)
Probably they mean they want to use autonomous weapons which operate with some tenuous level of human supervision - otherwise why the self-conflicting qualification?
Re: (Score:2)
I do find insightful this quote from Geoffrey Gertz, a senior fellow at the Center for a New American Security, regarding the threat to deem Anthropic a supply chain risk:
Odd to root for one of these shops, but (Score:5, Interesting)
Either threat is pretty serious - nevermind that they're utterly contradictory.
Kegseth isn't going to invoke the DPA. They seem to get away with crazy shit that just hurts people, but they're trying not to spook the market, and going old-school socialist in peacetime is a bit much.
I can see this regime blacklisting them from DoD procurement, which would limit their "addressable market", as the tweedlers say. It would give them cred with certain segments, but that trade won't be revenue-positive.
So that's a tough decision to make, especially given the shamefully shitbaggy baseline of his aristo-wannabe peers at other tech firms.
So good on Anthropic.
Re:Odd to root for one of these shops, but (Score:5, Insightful)
They just need to hang in there.
And for all my AI skepticism, I'm growing to like Anthropic.
Hegseth is threatening to list them as a "supply chain risk," which is a really dangerous designation: it puts them in the same category as companies like Huawei and could lose them billions in private sector contracts (i.e., AWS couldn't host Claude for clients, etc.). However, I also suspect that the way the govt has been phrasing this as "Comply or we do this horrible thing to you" would make any such designation invalid if reviewed by a judge, since you're not allowed to blackmail people and creating bills of attainder is highly unconstitutional. Not to mention a violation of Anthropic's First Amendment rights. So it's a hollow threat, but one that could spook shareholders badly.
Re:Odd to root for one of these shops, but (Score:5, Insightful)
However, I also suspect that the way the govt has been phrasing this as "Comply or we do this horrible thing to you" would make any such designation invalid if reviewed by a judge, since you're not allowed to blackmail people and creating bills of attainder is highly unconstitutional.
We've seen the current administration do this sort of thing many times. They're not concerned about legality or even care whether they'll win the legal fight at all; they figure (correctly, as we've seen) most companies aren't willing to put up with the 12-24 months of pain with no profits plus large lawyer bills.
Not to mention the Trump-favored approach "we're putting massive tariffs on this company because it's a huge national security risk", followed later by "we're taking the tariffs off because they paid us some large amount of money / gave us partial ownership of their company / contributed to the Trump fortune in some way". It really is a grift, top to bottom.
Re:Odd to root for one of these shops, but (Score:4, Interesting)
The thing with the Trump regime, though, is that half of their stupid shit gets overturned and cancelled in the courts. They really aren't a particularly effective government in that respect.
If they do try to slap Anthropic with that designation, Anthropic is a multi-decabillion-dollar company with armies of lawyers who'll get that shit overturned within the week.
But amongst the spiral-eyed, TESCREAL-obsessed cult running Silicon Valley, that's still going to be a painful thing to go through.
Re:Odd to root for one of these shops, but (Score:4, Informative)
The cost of overturned stupid shit is borne by US taxpayers. So it's a win-win for Trump and The Government.
At this point he's openly stealing from US taxpayers, paying billions to his Bored of Peace that go directly into his accounts (probably offshore for tax evasion, which is de facto legal for rich people in the US; Leona Helmsley was right 35 years ago).
Re: (Score:2)
And for all my AI skepticism, I'm growing to like Anthropic.
Don't be too quick to judge. It isn't a fight of good versus evil; it is a fight for control of the product and the revenue stream.
Re: (Score:2)
Kegseth isn't going to invoke the DPA. They seem to get away with crazy shit that just hurts people, but they're trying not to spook the market, and going old-school socialist in peacetime is a bit much.
saw this too:
It was not immediately clear how the Pentagon intends to label Anthropic a supply chain risk — which typically requires the government and its contractors to cut ties with that company — while simultaneously invoking the Defense Production Act to compel the company to cooperate with the Pentagon.
https://www.politico.com/news/... [politico.com]
Re:Odd to root for one of these shops, but (Score:5, Insightful)
It was not immediately clear how the Pentagon intends to label Anthropic a supply chain risk — which typically requires the government and its contractors to cut ties with that company — while simultaneously invoking the Defense Production Act to compel the company to cooperate with the Pentagon.
Anthropic CEO Amodei's response to this was priceless. From TFA:
Amodei said Thursday that “those latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.”
Christ on a bun. And it gets worse:
Emil Michael, [DoD] undersecretary for research and engineering, later lashed out at the Anthropic CEO, alleging on X that Amodei “has a God-complex” and “wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk.”
Whoa Emil, stop clutching your pearls, you'll choke yourself. Amodei is trying to uphold the policies of his company regarding the use of their product, and you think that means he's " try[ing] to personally control the US Military and is ok putting our nation’s safety at risk"? Dude, this is a democracy. You have access to many military contractors. Perhaps Anthropic is not the one for you -- excuse me, not the one for what you think the military needs to do.
Re: (Score:3)
Emil Michael, [DoD] undersecretary for research and engineering, later lashed out at the Anthropic CEO, alleging on X that Amodei “has a God-complex” and “wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk.”
Emil, stop clutching your pearls, you'll choke yourself. Amodei is trying to uphold the policies of his company regarding the use of their product, and you think that means he's "try[ing] to personally control the US Military and is ok putting our nation's safety at risk"?
It's simple, their every accusation is a confession. "Not me, you."
Re: (Score:3)
So good on Anthropic.
there is this too:
Anthropic ditches its core safety promise in the middle of an AI red line fight with the Pentagon
https://www.cnn.com/2026/02/25... [cnn.com]
Re: (Score:2)
Interesting. The article headline doesn't really reflect the content. Paramount may be poised to buy Warner (and CNN along with it) but it hasn't happened yet, has it?
Re: (Score:3)
Kegseth isn't going to invoke the DPA. They seem to get away with crazy shit that just hurts people, but they're trying not to spook the market, and going old-school socialist in peacetime is a bit much.
It's not just hard-core socialists who want to control the means of production. Fascists want to do the same thing, but for a very different reason: to implement and support a permanent, heightened military posture for the nation. What view would fit in this case?
Re: (Score:2)
So good on Anthropic.
On the surface, yes. But that might just end up meaning Anthropic would not be collaborating ... publicly. After all, they have already collaborated in illegal operations, maybe not willingly, but even then with gross naivety: you don't just claim to be a safeguard and then partner with the Pentagon and Palantir assuming they're white knights.
Of course, after these outlandish threats have been made, it would now be difficult to maintain a good reputation if they don't materialize. Nobody could trust Anthropic either
Flip flop (Score:1, Troll)
Re:Flip flop (Score:5, Informative)
As long as they stay in the news they're happy with whatever, it seems. Although I back their latest decision, for however long it lasts.
They didn't flip flop. They changed their position on one aspect of AI security, while holding the line on a different aspect. It's like if you decided that you were willing to leave your car doors unlocked, but refused to leave your house unlocked. Different things, different risk calculations.
Re: (Score:2)
Yeah, I just reread that article and now realise I screwed up. I thought them dropping their safeguards was to allow them to join the program. My apologies, I now see it's unrelated.
I think you were far from alone in that. The comments on the article about Anthropic relaxing their safety commitment showed a lot of people thought it was DoD-related.
Can AI convince Hegseth? (Score:1)
If AI is so good with words and so smart, why can't it convince Hegseth not to spy on Americans?
Re: Can AI convince Hegseth? (Score:2)
Reason cannot be used to influence the unreasonable.
keep in mind... (Score:3)
Anthropic's policies prevent its models, such as its chatbot Claude, from being used for those purposes. It's the last of its peers -- the Pentagon also has contracts with Google, OpenAI and Elon Musk's xAI -- to not supply its technology to a new U.S. military internal network.
...Claude is the only one they have a contract to use in classified situations.
https://www.thebulwark.com/p/a... [thebulwark.com]
Holy crap! (Score:4, Interesting)
A Big Tech company - and an AI one at that - is taking a principled stance that both loses them income and makes an enemy of the government and the military. Colour me shocked.
I understand that they're a public benefit company - but they could flip on that in a heartbeat, and haven't. The jaded skeptic in me keeps looking for the angle here, but my inner child-like optimist is happy just to take Anthropic's actions at face value.
Re: (Score:2)
Frankly, this all looks a bit like grandstanding on the part of the US government. Or maybe just general cluelessness.
The US government is more than capable, and has plenty of funding, to just set up a team that grabs an open source model like... hmm... say, DeepSeek R2, and gets it to do what they want with a few agents. They have the money, the data centers (or they can rent those), and the technical skill to do all they want.
Not sure why they haven't.
I'm not a big believer in the "dangers of AI".. except of
Lol (Score:3)
Re: (Score:2)
I imagine a smart-trolley at the supermarket telling comrade fatty that he doesn't need that box of donuts.
Uh huh (Score:2)
but that new contract language received from the Defense Department "made virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons."
Also . . .
The Pentagon's top spokesman has reiterated that the military wants to use Anthropic's artificial intelligence technology in legal ways and will not let the company dictate any limits
If the DoD wants to use Anthropic AI in legal ways, there shouldn't be a reason for them not to agree to the guardrail preventing mass surveillance
Maslow's hierarchy of needs (Score:1)
https://en.wikipedia.org/wiki/... [wikipedia.org]
Let's translate into political philosophy, for fun and profit.
The base of the pyramid is basic physiological needs. In terms of statecraft and economics...don't mess with the food supply and refrain from messing with water and electricity.
Right above that is physical safety. That means a lot of things, but what it does *not* mean is defunding the police or intentionally hobbling your military.
Way up in the stratosphere where only ivory towers can reach is "self actualization
Yeah right (Score:2)
Pentagon flex and AI collapses (Score:2)
If the Pentagon flexes on Anthropic by blackballing them, it could put the entire economy into a tailspin. So many companies rely on them (yes, too big to fail - in terms of AI might). We need TACO to the rescue or we're all screwed.
let's play global thermonuclear war (Score:2)
what side do you want?
mass surveillance (which is illegal)... *wink* (Score:1)
They already did! (Score:2)
Gee. Something akin to hypocrisy. (Score:2)
It's almost as if, since corporations are people, then just as you can't demand they decorate a gay wedding cake, you shouldn't be able to demand they participate in something they find abhorrent.
They know. (Score:2)
They know just how hallucinatory the chatbots are, and how likely they are to kill their own people.
I mean, Kegsbreath thinks no US military ever retreats for any reason... (see "Waist Deep in the Big Muddy", https://www.youtube.com/watch?... [youtube.com])
tail (Score:2)
Tail wagging the dog. Some company thinks it can dictate US defense policy.