The Military

Anthropic CEO Says AI Company 'Cannot In Good Conscience Accede' To Pentagon (apnews.com) 84

An anonymous reader quotes a report from the Associated Press: Anthropic CEO Dario Amodei said Thursday the artificial intelligence company "cannot in good conscience accede" to the Pentagon's demands to allow wider use of its technology. The maker of the AI chatbot Claude said in a statement that it's not walking away from negotiations, but that new contract language received from the Defense Department "made virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons."

The Pentagon's top spokesman has reiterated that the military wants to use Anthropic's artificial intelligence technology in legal ways and will not let the company dictate any limits ahead of a Friday deadline to agree to its demands. Sean Parnell said Thursday on social media that the Pentagon "has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement."

Anthropic's policies prevent its models, such as its chatbot Claude, from being used for those purposes. It's the last of its peers -- the Pentagon also has contracts with Google, OpenAI and Elon Musk's xAI -- to not supply its technology to a new U.S. military internal network. Parnell said the Pentagon wants to "use Anthropic's model for all lawful purposes" but didn't offer details on what that entailed. He said opening up use of the technology would prevent the company from "jeopardizing critical military operations." "We will not let ANY company dictate the terms regarding how we make operational decisions," he said.
In a post on X, Parnell said Anthropic will "have until 5:01 PM ET on Friday to decide. Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk for DOW."
This discussion has been archived. No new comments can be posted.

  • by TronNerd82 ( 9588972 ) on Thursday February 26, 2026 @11:51PM (#66012972)

    "...no interest in using AI to conduct mass surveillance of Americans (which is illegal) "

    Gee, it's so nice that the government is so law-abiding. I can't think of any past occurrences of the US government conducting illegal, non-consensual mass surveillance of American citizens. Nope, not a single occurrence.

    In all seriousness, that's exactly what they'd do. It'd just be PRISM 2.0, but with significantly more misinformation, thanks to hallucinating clankers being at the forefront of Big Brother.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      Just imagine how quickly folks would lose faith in AI if they were held responsible for the kidnappings and murders an AI told them to do. Imagine the first time they accidentally wipe out someone related to a rich person, or the first time political winds shift in the middle of a process and they get kited. Imagine if they relied on third-hand rationalizations as justifications for crimes, as a substitute for actual legal advice... but the courts tell them that the AI isn't on trial...

      I almost feel sorry for

      • by fuzzyfuzzyfungus ( 1223518 ) on Friday February 27, 2026 @01:26AM (#66013050) Journal
        I'd like to be more optimistic; but I'm not sure that would be the outcome. It's not like we actually expect law enforcement to operate under actual legal guidance (indeed, we come up with insulting nonsense like 'qualified immunity' standards that basically let you off unless it's exceptionally obvious that you are operating against well-established practice, and we commonly just substitute things like 'acted according to policy and training' for the question of whether the policy and training reflect legal practice, even in the excessive-force cases that actually make it to trial).

        And, if there's anything LLMs seem to be good at, it's generating results that look pretty plausible; so if you combine high plausibility narrative generation, a veneer of technological objectivity, and the downright servile deference to the official narrative, it would probably be even easier to beat the rap than it is now; when you can at least sometimes put the spotlight on someone clearly and distastefully letting their motives show or acting irrationally.
    • ..In all seriousness, that's exactly what they'd do. It'd just be PRISM 2.0, but with significantly more misinformation, thanks to hallucinating clankers being at the forefront of Big Brother.

      That misinformation is, for once, a feature they are looking for.

      The Epstein files dribbling out slower than a 90-year-old man taking a piss. A Clinton suddenly willing to testify, albeit mostly about the 5th Amendment. A Government suddenly hinting at confirming UFOs and aliens after decades of Roswell denial and Area 51 secrecy. Watching a Speaker of the House publicly declare Congressional Insider Trading is nothing but a job perk. Electing a wholly and undeniably unqualified minority woman to become

      • by Anonymous Coward
        Pam Bondi needs to be impeached. All the people who aren't doing their jobs and releasing the Epstein files need to be fired. The people who redacted things illegally should be charged, and then fired.
    • Everyone else has fallen in line. Anthropic will, too.
      • by alexgieg ( 948359 ) <alexgieg@gmail.com> on Friday February 27, 2026 @10:36AM (#66013584)

        Maybe, but maybe not. Anthropic was founded by people who left OpenAI due to ethical concerns, and who were 100% willing to remain a non-profit, after the unethical camp overcame them. If there's anyone in the industry who might stick to "no, we won't compromise on our ethics no matter what", it's them.

        If they do remain ethical no matter what, what might result from that is unclear. If the US government goes nuclear on them, classifying them as a fake "security risk" to force them to lose major clients as punishment, I can see them, as one option, moving to Europe, which is way more focused on ethics than the US and currently lacks a major AI player, and then letting the US government munch on the slow realization that it lost the current best AI tech stack to the very same future major geopolitical adversary it decided to construct just because.

        • My point is not so much about the integrity of the Anthropic people, but about the habitual ability of the US govt to make offers that can't be refused. And Europe doesn't exactly have a track record of standing up to the US. In fact, the only countries that do have a track record of standing up to the US are called the axis of evil. So there's that.
          • the only countries that do have a track record of standing up to the US are called the axis of evil.

            That was the case until 2024, when Europe didn't have a need to stand against the US as the US was a committed ally of the EU. Since early 2025, alas, the US started progressively standing more and more against the EU, so now the EU sees the urgent need to protect themselves from US political, economic, and military aggression.

          • by Bumbul ( 7920730 )

            And Europe doesn't exactly have a track record of standing up to the US.

            It is not about standing up to the US. It is about protecting corporations that want to retain at least some ethics. And punishing [europa.eu] those that don't.

      • Everyone else has fallen in line. Anthropic will, too.

        Forcing people to be your friend at gunpoint may have the desired effect on the surface, but only if you don't think too deeply.

    • Re: (Score:3, Interesting)

      by AmiMoJo ( 196126 )

      They are probably thinking about the longer term consequences too. If Trump leaves office, there may be a reckoning. Legal consequences for people who engaged in illegal behaviour.

      • Legal consequences? Like the Jan 6 rioters who got their get out of jail free card? Heck more than free, because someone wants to pay them for their service.

        • That happened because Trump was in office, this is about what happens if Trump is not in office.

        • by tlhIngan ( 30335 )

          Like the Jan 6 rioters who got their get out of jail free card? Heck more than free, because someone wants to pay them for their service.

          That was Trump. And don't worry, because many of them found themselves right back in jail for committing other crimes (many believed being pardoned was a get-out-of-jail-free card for life, right up until they were arrested).

          Most of the ICE crew though weren't from J6 at all - they are all those right wing militias that always stroked the 2A because insurrection. Funny enough h

  • Wow, scary (Score:5, Insightful)

    by RitchCraft ( 6454710 ) on Thursday February 26, 2026 @11:55PM (#66012974)

    This is scary for two reasons. The government wants all guardrails off of unproven technology that can seriously screw things up and the government seriously thinks "AI" is that good. It's the dumb leading the blind.

    • Re: Wow, scary (Score:5, Insightful)

      by Mr. Dollar Ton ( 5495648 ) on Friday February 27, 2026 @02:08AM (#66013066)

      It isn't like this is an accidental attitude, that very company has been spamming us with advertising telling us pretty much how infallible they are for some months now.

      They should have considered that the decision makers both in business and government are morons.

      But, just like with the sales to Chinese companies they've been complaining about, common sense can't be allowed in the way of making a sale.

      • It isn't like this is an accidental attitude, that very company has been spamming us with advertising telling us pretty much how infallible they are for some months now.

        That is an accidental attitude. I don't even understand what you're trying to imply here.

        The default assumption, by literally everyone, is that if it's in an ad, it's not a statement to be trusted. Ads are *by nature* untrustworthy, they are a biased view meant to get you to be interested in the product. It's up to the person with the wallet to then do actual research, and they are literally the only person to blame if they trust the ad. If the ads were telling you the limitation of the product, then the pe

    • Scary how? Facial recognition is already used here in the UK and has been for a few years:

      https://www.gov.uk/government/... [www.gov.uk]

      It's no different to what was there before, which was a load of people watching CCTV screens. And guess what - a load of criminals have been nicked because of it. But hey, Freedom to ... [something]! More important, right?

      As for AI weapons ... err, the military have been using image and other types of recognition for targeting systems for a loooong time. It's going to move to smaller footprint

      • by jabuzz ( 182671 )

        Image classification for tracking and terminal guidance is at least 40 years old, and I can personally attest to that. Specifics are, however, covered by the Official Secrets Act. The field of AI covers more than just neural networks.

      • by Tyr07 ( 8900565 )

        People are distracted and don't realize what you said is true. This is more about a corporation being able to dictate things to DOW, which reminds me of Tank Girl. Water and power. They have the water, and they have all the power. A corporation dictates everything including military use.

        Military already has automated targeting and trigger pulling things. This is just a bid for corporations to get more power, but, people are distracted thinking these things don't happen already and would only happen if the cor

  • by Nicholas Grayhame ( 10502767 ) on Friday February 27, 2026 @12:00AM (#66012982)

    Sean Parnell said Thursday on social media that the Pentagon "has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement."

    if that was true your contract proposal would reflect it and this wouldn't be a story

    • by martin-boundary ( 547041 ) on Friday February 27, 2026 @02:48AM (#66013106)
      There's no need to operate illegally if the law can be changed to make formerly illegal things legal going forward.
    • Doublespeak also works for Anthropic. They cannot in good conscience accede to the Pentagon's demands, but maybe they can, in bad conscience? Does a company have a conscience anyway? Does AI have a conscience? If AI does not have a conscience, maybe it can accede to the Pentagon's demands?

      • Whatever you think of their honesty, the phrase has a single meaning, which is "we're not going to do it" with the additional pretty obvious inference "because it is unethical".

        Your post isn't merely pedantry, it's just willful denial of what ultimately is a very clear and unambiguous statement.

    • by gweihir ( 88907 )

      Indeed. They are not even competent liars....

    • Probably they mean they want to use autonomous weapons which operate with some tenuous level of human supervision - otherwise why the self-conflicting qualification?

    • by msauve ( 701917 )
      Actually, the quote began "The Department of War has no interest in using AI to conduct mass surveillance of Americans...". The FBI, CIA, INR, OICI, DHS on the other hand...

      I do find insightful this quote from Geoffrey Gertz, a senior fellow at the Center for a New American Security, regarding the threat to deem Anthropic a supply chain risk:

      It's this funny mix where they both are such a risk that they need to be kicked out of all systems, and so essential that they need to be compelled to be part of the

  • by abulafia ( 7826 ) on Friday February 27, 2026 @12:03AM (#66012986)
    I'm honestly surprised.

    Either threat is pretty serious - nevermind that they're utterly contradictory.

    Kegseth isn't going to invoke the DPA. They seem to get away with crazy shit that just hurts people, but they're trying not to spook the market, and going old-school socialist in peacetime is a bit much.

    I can see this regime blacklisting them from DoD procurement, which would limit their "addressable market", as the tweedlers say. It would give them cred with certain segments, but that trade won't be revenue-positive.

    So that's a tough decision to make, especially given the shamefully shitbaggy baseline of his aristo-wannabe peers at other tech firms.

    So good on Anthropic.

    • by sg_oneill ( 159032 ) on Friday February 27, 2026 @12:45AM (#66013014)

      They just need to hang in there.

      And for all my AI skepticism, I'm growing to like Anthropic.

      Hegseth is threatening to list them as a "procurement risk", which is a really dangerous thing: it puts them in the same category as companies like Huawei and could lose them billions in private sector contracts (i.e. AWS couldn't host Claude for clients, etc.). However, I also suspect that the way the govt has been phrasing this as "comply or we do this horrible thing to you" would make any such designation invalid if reviewed by a judge, since you're not allowed to blackmail people and creating bills of attainder is highly unconstitutional. Not to mention a violation of Anthropic's 1st Amendment rights. So it's a hollow threat, but one that could spook shareholders badly

      • by 93 Escort Wagon ( 326346 ) on Friday February 27, 2026 @01:30AM (#66013052)

        However I also suspect that the way the govt has been phrasing this as a "Comply or we do this horrible thing to you" would make any such designation invalid if reviewed by a judge since you're not allowed to blackmail people and creating bills of attainder is highly unconstitutional.

        We've seen the current administration do this sort of thing many times. They're not concerned about legality and don't even care whether they'll win the legal fight; they figure (correctly, as we've seen) most companies aren't willing to put up with the 12-24 months of pain with no profits plus large lawyer bills.

        Not to mention the Trump-favored approach "we're putting massive tariffs on this company because it's a huge national security risk", followed later by "we're taking the tariffs off because they paid us some large amount of money / gave us partial ownership of their company / contributed to the Trump fortune in some way". It really is a grift, top to bottom.

        • by sg_oneill ( 159032 ) on Friday February 27, 2026 @02:40AM (#66013098)

          The thing with the Trump regime, though, is that half of their stupid shit gets overturned and cancelled in the courts. They really aren't a particularly effective government in that respect.

          If they do try and slap Anthropic with that designation, Anthropic are a multi-decabillion-dollar company with armies of lawyers who'll get that shit overturned within the week.

          But amongst the spiral-eyed, TESCREAL-obsessed cult running Silicon Valley, that's still going to be a painful thing to go through

          • by martin-boundary ( 547041 ) on Friday February 27, 2026 @02:52AM (#66013108)
            The cost of overturned stupid shit is borne by US taxpayers. So it's a win-win for Trump and The Government.
            • by mjwx ( 966435 ) on Friday February 27, 2026 @08:19AM (#66013344)

              The cost of overturned stupid shit is borne by US taxpayers. So it's a win-win for Trump and The Government.

              At this point he's openly stealing from US taxpayers, by paying billions to his Bored of Peace which goes directly into his accounts (probably offshore for tax evasion, which is de facto legal for rich people in the US; Leona Helmsley was right 35 years ago).

          • Despite all of his bluster about "winning", Trump knows that many of his actions are losing propositions - they're just designed to make his opponents feel more pain than he does. Because making America great again has nothing to do with working together to outpace foreign economic and military threats - it's about a narcissist using tax-funded resources to settle personal beefs with domestic rivals at the cost of American advancement.
      • And for all my AI skepticism, I'm growing to like Anthropic.

        Don't be too quick to judge. It isn't a fight of good versus evil, it is a fight for control of the product and the revenue stream.

    • Kegseth isn't going to invoke the DPA. They seem to get away with crazy shit that just hurts people, but they're trying not to spook the market, and going old-school socialist in peacetime is a bit much.

      saw this too:

      It was not immediately clear how the Pentagon intends to label Anthropic a supply chain risk — which typically requires the government and its contractors to cut ties with that company — while simultaneously invoking the Defense Production Act to compel the company to cooperate with the Pentagon.

      https://www.politico.com/news/... [politico.com]

      • by ClickOnThis ( 137803 ) on Friday February 27, 2026 @01:18AM (#66013042) Journal

        It was not immediately clear how the Pentagon intends to label Anthropic a supply chain risk — which typically requires the government and its contractors to cut ties with that company — while simultaneously invoking the Defense Production Act to compel the company to cooperate with the Pentagon.

        Anthropic CEO Amodei's response to this was priceless. From TFA:

        Amodei said Thursday that “those latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.”

        Christ on a bun. And it gets worse:

        Emil Michael, [DoD] undersecretary for research and engineering, later lashed out at the Anthropic CEO, alleging on X that Amodei “has a God-complex” and “wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk.”

        Whoa Emil, stop clutching your pearls, you'll choke yourself. Amodei is trying to uphold the policies of his company regarding the use of their product, and you think that means he's " try[ing] to personally control the US Military and is ok putting our nation’s safety at risk"? Dude, this is a democracy. You have access to many military contractors. Perhaps Anthropic is not the one for you -- excuse me, not the one for what you think the military needs to do.

        • Emil Michael, [DoD] undersecretary for research and engineering, later lashed out at the Anthropic CEO, alleging on X that Amodei “has a God-complex” and “wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk.”

          Emil, stop clutching your pearls, you'll choke yourself. Amodei is trying to uphold the policies of his company regarding the use of their product, and you think that means he's " try[ing] to personally control the US Military and is ok putting our nation’s safety at risk"?

          It's simple, their every accusation is a confession. "Not me, you."

    • So good on Anthropic.

      there is this too:

      Anthropic ditches its core safety promise in the middle of an AI red line fight with the Pentagon

      https://www.cnn.com/2026/02/25... [cnn.com]

      • Interesting. The article headline doesn't really reflect the content. Paramount may be poised to buy Warner (and CNN along with it) but it hasn't happened yet, has it?

    • Kegseth isn't going to invoke the DPA. They seem to get away with crazy shit that just hurts people, but they're trying not to spook the market, and going old-school socialist in peacetime is a bit much.

      It's not just hard-core socialists who want to control the means of production. Fascists want to do the same thing, but for a very different reason: to implement and support a permanent, heightened military posture for the nation. What view would fit in this case?

    • by znrt ( 2424692 )

      So good on Anthropic.

      on the surface, yes. but that might just end up meaning anthropic would not be collaborating ... publicly. after all, they have already collaborated in illegal operations, maybe not willingly but even then with gross naivety: you don't just claim to be a safeguard and then partner with the pentagon and palantir assuming they're white knights.

      ofc after these outlandish threats have been made it would be now difficult to maintain a good reputation if these don't materialize. nobody could trust anthropic eithe

  • Flip flop (Score:1, Troll)

    by liqu1d ( 4349325 )
    As long as they stay in the news they're happy with whatever it seems. Although I back their latest decision for however long it lasts.
    • Re:Flip flop (Score:5, Informative)

      by swillden ( 191260 ) <shawn-ds@willden.org> on Friday February 27, 2026 @02:26AM (#66013088) Journal

      As long as they stay in the news they're happy with whatever it seems. Although I back their latest decision for however long it lasts.

      They didn't flip flop. They changed their position on one aspect of AI security, while holding the line on a different aspect. It's like if you decided that you were willing to leave your car doors unlocked, but refused to leave your house unlocked. Different things, different risk calculations.

      • Yeah I just reread that article and now realise I screwed up. I thought them dropping their safeguards was to allow them to join the program. My apologies I now see it's unrelated.
        • Yeah I just reread that article and now realise I screwed up. I thought them dropping their safeguards was to allow them to join the program. My apologies I now see it's unrelated.

          I think you were far from alone in that. The comments on the article about Anthropic relaxing their safety commitment showed a lot of people thought it was DoD-related.

  • by Anonymous Coward

    If AI is so good with words and so smart, why can't it convince Hegseth not to spy on Americans?

  • by Nicholas Grayhame ( 10502767 ) on Friday February 27, 2026 @01:22AM (#66013044)

    Anthropic's policies prevent its models, such as its chatbot Claude, from being used for those purposes. It's the last of its peers -- the Pentagon also has contracts with Google, OpenAI and Elon Musk's xAI -- to not supply its technology to a new U.S. military internal network.

    ...Claude is the only one they have a contract to use in classified situations.

    https://www.thebulwark.com/p/a... [thebulwark.com]

  • Holy crap! (Score:4, Interesting)

    by jenningsthecat ( 1525947 ) on Friday February 27, 2026 @02:18AM (#66013082)

    A Big Tech company - and an AI one at that - is taking a principled stance that both loses them income and makes an enemy of the government and the military. Colour me shocked.

    I understand that they're a public benefit company - but they could flip on that in a heartbeat, and haven't. The jaded skeptic in me keeps looking for the angle here, but my inner child-like optimist is happy just to take Anthropic's actions at face value.

    • by cowdung ( 702933 )

      Frankly, this all looks a bit like grandstanding on the part of the US government. Or maybe just general cluelessness.

      The US government is more than capable, and has plenty of funding, to just set up a team that grabs an open source model like .. hmm.. say DeepSeek R2. And gets it to do what they want with a few agents. They have the money, the data centers (or they can rent those), and the technical skill to do all they want.

      Not sure why they haven't.

      I'm not a big believer in the "dangers of AI".. except of

  • by PoopMelon ( 10494390 ) on Friday February 27, 2026 @03:13AM (#66013132)
    If they "have no interest" in using the AI for autonomous war and mass surveillance, then why are they refusing to put it in the contract?
  • but that new contract language received from the Defense Department "made virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons."

    Also . . .

    The Pentagon's top spokesman has reiterated that the military wants to use Anthropic's artificial intelligence technology in legal ways and will not let the company dictate any limits

    If the DoD wants to use Anthropic AI in legal ways there shouldn't be a reason for them to agree to the guardrail preventing ma

  • https://en.wikipedia.org/wiki/... [wikipedia.org]

    Let's translate into political philosophy, for fun and profit.

    The base of the pyramid is basic physiological needs. In terms of statecraft and economics...don't mess with the food supply and refrain from messing with water and electricity.

    Right above that is physical safety. That means a lot of things, but what it does *not* mean is defunding the police or intentionally hobbling your military.

    Way up in the stratosphere where only ivory towers can reach is "self actualization

  • Wake me up in two weeks when they've done an about-face because the investors like money and demanded it.
  • If the Pentagon flexes on Anthropic by blackballing them it could put the entire economy into a tailspin. So many companies rely on them (yes, too big to fail - in terms of AI might). We need TACO to the rescue or we're all screwed.

  • Anybody ever reverse-shelled with X Windows? With ssh? Ever opened multiple sessions on a subnet like that? A web service is literally a platform to mass-surveil EVERYONE who uses it. Corporations make sure features that accomplish this stuff are in everything; people know this, yet when a CEO says "we don't do that" people just say "ok"? Then go as far as to fully support that as experts and tell the average person "it's ok"...? My opinion.
  • They already did! Exclusive: Anthropic Drops Flagship Safety Pledge https://time.com/7380854/exclu... [time.com]
  • It's almost that since corporations are people, then much like demanding that they decorate a gay wedding cake, you shouldn't be able to demand they participate in something they find abhorrent.

  • They know just how hallucinatory the chatbots are, and likely to kill their own people.

    I mean, Kegsbreath thinks no US military ever retreats for any reason (see "Waist Deep in the Big Muddy": https://www.youtube.com/watch [youtube.com]?... [youtube.com])

  • Tail wagging the dog. Some company thinks it can dictate US defense policy.
