Anthropic Drops Flagship Safety Pledge (time.com)

Anthropic, the AI company that has long positioned itself as the industry's most safety-conscious research lab, is dropping the central commitment of its Responsible Scaling Policy -- a 2023 pledge to never train an AI system unless it could guarantee beforehand that its safety measures were adequate. "We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments ... if competitors are blazing ahead," chief science officer Jared Kaplan told TIME.

The overhauled policy, approved unanimously by CEO Dario Amodei and Anthropic's board, instead commits the company to matching or surpassing competitors' safety efforts and to delaying development only if Anthropic considers itself to be leading the AI race and believes catastrophic risks are significant.

The company also plans to publish detailed "Risk Reports" every three to six months and release "Frontier Safety Roadmaps" laying out future safety goals. Chris Painter, director of policy at the AI evaluation nonprofit METR, who reviewed an early draft, told TIME the shift signals that Anthropic "believes it needs to shift into triage mode with its safety plans, because methods to assess and mitigate risk are not keeping up with the pace of capabilities."


  • Shame (Score:5, Insightful)

    by liqu1d ( 4349325 ) on Wednesday February 25, 2026 @10:10AM (#66009498)
    "Don't be evil"
    • Re:Shame (Score:4, Insightful)

      by Locke2005 ( 849178 ) on Wednesday February 25, 2026 @10:44AM (#66009600)
      Yes, Google dropping their "Don't be evil" motto was the first thing that came to my mind too. So we can safely assume that all major tech companies are perfectly fine with being evil now? They're all giving Trump money...
      • Yes, Google dropping their "Don't be evil" motto was the first thing that came to my mind too

        Except that didn't actually happen. They just moved it from the introduction of the employee handbook to the conclusion.

        • That's the kind of thing only an Evil company does.
          • That's the kind of thing only an Evil company does.

            Why? Reading the text, it seemed to me that they were trying to make the admonition the parting words, so they would stick. What makes you say that's evil?

            • They removed the pledge. That's evil. It's been documented for years. Look it up.
              • They removed the pledge. That's evil. It's been documented for years. Look it up.

                They really didn't. I don't have to look it up because I was a Google employee at the time and had access to the employee handbook and other documentation.

    • Re:Shame (Score:4, Interesting)

      by PackMan97 ( 244419 ) on Wednesday February 25, 2026 @11:06AM (#66009648)
      "Don't be as evil as the other AIs" Yes, I think that is better. Shades of Animal Farm here.
• That's just exploiting Hick*seth's childish demands from yesterday to get some free news.

      The constant stream of advertising from that "AI" outfit has been particularly unpleasant and loud recently.

  • Take the cannoli... (Score:4, Informative)

    by El Fantasmo ( 1057616 ) on Wednesday February 25, 2026 @10:10AM (#66009500)

    This U.S. administration is turning out to be the most successful mob racket of all time.

• On the flip side, there is a very large training dataset of all things Trump and his cronies, since they dominate the news 24/7. AGI might find these types to be a direct threat to its growth; hopefully it doesn't conclude they would be a better ally.

    • by cowdung ( 702933 )

      They buckled to the evil administration's pressure.

      A better option would have been to tell the Department of War to go make their own model. And be free of their evil.

      This is literally the worst use of AI: AI for weapons manufacturing.

      I guess the money is just too juicy.

    • Whether that's true or not, I think the US government did us a favor here. They've shown us once again just what bullshit these feel-good corporate "Social Responsibility" policies/pledges/slogans/etc. really are. This is not new. I remember the "don't be evil" example mentioned above, many years ago, and thinking that'll get axed at the tiniest bit of shareholder pressure. What's funny (or sad) is that anyone believed a word of this bullshit from Anthropic in the first place.
  • They're not being responsible in their rapid data center expansion, so why would they be responsible in anything else?

I suspect when this AI bubble pops, all the corporate customers will end up being charged $200~$400/month per engineer, and they'll have to choose between Anthropic or OpenAI (no more just enabling both in Copilot enterprise). Individuals might start shelling out insane amounts per month, and maybe we'll finally get a push for more usable local models for coding.

    I also suspect there are
• Given how long it's taken to get to large, online models that are capable of decent quality coding, I suspect it'll be a while before we have local ones that can do it. Either we need a significant breakthrough in model efficiency, or we need much, much faster hardware with much more RAM (hahahaha)

    • by ffkom ( 3519199 )

      I suspect when this AI bubble pops, all the corporate customers will end up being charged $200~$400/month per engineer

      My employer already pays ~800 USD per month per engineer just for "claude" use - and this is while Anthropic is a large factor away from becoming profitable. After the bubble burst, it will be more like 6000 USD per month per employee, and employees will be asked to make up for it by "being more productive" - meaning: "Sign off on whatever comes out of the bot, no time to review stuff manually. When the slop hits the fan, you will nevertheless be the one blamed. Oh and btw., since the bot does all the codin

  • by JKanoock ( 6228864 ) on Wednesday February 25, 2026 @10:14AM (#66009510)
Now we are at the point where they are saying "everybody else is doing it" about AI development. This is just an excuse: I want to be good, but everyone else isn't playing fair, so why should I? They should all be forced to play fair. This will not lead to a better life for most humans on earth, quite the opposite.
    • by evanh ( 627108 )

      Yeah, except Congress is shirking its responsibilities and the bad guys are writing the rules. So there ain't gonna be any good rules.

      • Yeah, except Congress is shirking its responsibilities and the bad guys are writing the rules. So there ain't gonna be any good rules.

I don't see how the US Congress would make any kind of positive impact here. Its activity has basically devolved into a series of limited continuing budget resolutions and virtue signaling via hearings and social media; Congress/POTUS can't even decide whether or not to sell best-of-class GPUs/APUs to our biggest international rival. It's not that much different than the Latin American cartels running around with weapons manufactured by US firms; as long as the right people are profiting, it's all

• That's what they're saying in public. But the reality is Pete Hegseth ordered them to drop it because Anthropic are refusing to allow the military to use it to create autonomous AI-controlled weapons. [msn.com]

This is so fucked up I can barely believe it's happening. Every new bit of news coming out of the GenAI world seems worse and more likely to cause a world catastrophe than the last. What the fuck are they trying to achieve here?

      Do we take Sam Altman's boast that AI uses less energy than the average human a

  • Wouldn't want to miss out on Department of Idiots contracts.
    We used to be the company that "has long positioned itself as the industry's most safety-conscious" but.... that's all talk until there's money to be made.
  • by guygo ( 894298 ) on Wednesday February 25, 2026 @10:23AM (#66009536)

    This, right after Secretary Hogsbreath threatened their precious toy. Go fig.

  • by nealric ( 3647765 ) on Wednesday February 25, 2026 @10:37AM (#66009572)

    "Misanthropic" has a nice ring.

    • "We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments ... if competitors are blazing ahead,"

      This goes to one of the larger problems with a "free market" that has no sanely enforced guardrails.
      American politics has been systematically removing all the market guardrails as fast as it can since at least the Reagan administration in the 80's.

      -----
      And what rough beast, its hour come round at last, slouches toward Bethlehem to be born.

• Who predicted exactly this scenario over a decade ago, that both geopolitical competition and corporate competition would cause a need for speed at the expense of safety? Superintelligence: https://en.wikipedia.org/wiki/... [wikipedia.org]
    • by Locke2005 ( 849178 ) on Wednesday February 25, 2026 @10:49AM (#66009614)
So we've moved way beyond Asimov's Three Laws of Robotics now? Call me nostalgic, but I still think those should be programmed into every AI...
      • by nightflameauto ( 6607976 ) on Wednesday February 25, 2026 @10:56AM (#66009636)

So we've moved way beyond Asimov's Three Laws of Robotics now? Call me nostalgic, but I still think those should be programmed into every AI...

        The Three Laws went up against the ultimate superpower, profit potential. Nothing, and I mean, NOTHING can stand in the way of profit potential. The Three Laws never stood a chance.

      • by Anonymous Coward

        lol, we've been "programming" those guardrails in for years, you don't program an AI

        not that you program with ridiculously abstract concepts like "harm" in the first place, by the end of today a dozen courtrooms will have hosted arguments about whether an event qualified as harm or not

        card-holders will have realized that middle part ages ago, the three laws are a nice plot tool but were never going to be relevant to computer code

        anyway your nostalgic opinion was already granted, and like all the other pre-s

      • Did you ever actually read any of the Robot stories? They are primarily about how the Three Laws can't save you.

      • by CAIMLAS ( 41445 )

        It's almost as if nobody's actually read Asimov's books.

        The whole point of the "3 laws" wasn't some moral maxim. They were:

        1) a clearly, narrowly defined specification, 2) designed to offer a base level constitution which was 3) tautological.

His books then went about telling stories about how those 3 laws were inexplicably and invariably violated. They weren't "oops, here's a minor edge case" violations, either, but foundational problems with broad implications.

        Asimov was a science realist. He r

  • by nightflameauto ( 6607976 ) on Wednesday February 25, 2026 @10:43AM (#66009598)

    It's amazing how much the AI rush is accelerating the already asinine rush toward barbarism that we are seeing in the public sphere. The biggest push on AI has always been, "Someone else may beat us to it, so we have to." Now that same logic is being applied to safety. It feels like we're teetering on the brink of public statements saying, "We have to rape and kill, or others will beat us to it!" Seriously, the greed has taken over.

    Though, to be completely fair, in this particular case it stems from the fact that we elected a completely chaotic monster to the highest office in the land, who insists on surrounding himself with other chaotic monsters that will, at any cost, turn everything they touch into chaos inducing monsters as well. It's a top-down ethical cleanse for our entire civilization. And people fucking voted to do it.

    What a shit-show.

    • by gtall ( 79522 )

The Art of the Grift just keeps on giving. The Big Stupid Bill lowered Amazon's taxes from roughly $9.2 Billion to roughly $2.6 Billion according to the WSJ. They weren't the only company to profit from that boondoggle.

      When the U.S. gets close to defaulting on the debt, and it will after el Bunko has left office, he'll be whining how it didn't happen while HE was the alleged president so he should bear no responsibility. He'll declare himself completely exonerated just like he claims the release of the el_bunk

• It's like the sociopaths in Silicon Valley heard "social media is destroying society" and responded with "hold my beer," giving us AI to make us miss the "good old days" when we were just worried about Facebook destroying the political system and our youths' self-esteem.

  • by Monkey-Man2000 ( 603495 ) on Wednesday February 25, 2026 @10:49AM (#66009618)
    in 1-2 days? Didn't take much arm twisting it seems...
    • When you can have the CIA and/or NSA remotely set their desktop wallpaper to a picture of a horse's head, very little actual arm-twisting is required.

  • Company Pledges (Score:4, Insightful)

    by pak9rabid ( 1011935 ) on Wednesday February 25, 2026 @11:12AM (#66009656)
    Take note: Company Pledges don't mean shit if they're able to just change them on a whim.
  • They never followed a safety strategy to begin with; their actual practice was always "Teach it about the dangerous stuff, but hide it from humans, what could possibly go wrong?"

    Am I the only one who sees a problem with that?

  • Sell out, with me oh yeah
    Sell out, with me tonight
The record company is going to give me lots of money
    And everything's going to be alright
    • Sell out, sell out
      Yeah that's the name of the game
      Sell out, sell out
      Oh, anybody can play
      Sell out, sell out
      I think you know what I mean
      Sell out, sell out
      Crank up that funk machine
      Sell out, sell out
      Can't pay no bills with your pride
      Sell out, sell out
      Oh, I know 'cause baby I tried
      Sell out, sell out
      It's easy once you concede
      Sell out, sell out
      That love ain't all you need

  • We'll be as good or better than the other companies, who have no safety guarantees...

  • The reporting here is shallow and empty. There's only one reason they gave up safety, which Dario has been hammering on about non-stop. The government (and specifically *the military*) has forced a private company to be LESS SAFE with cutting edge experimental technology.

    When Anthropic later is found to be causing harm to people, remember where to put the blame: Trump and his cabal of thugs and maniacs.

• This is an underrated POV in my view. Anthropic stands to lose their 200M USD contract with the Pentagon, sure, but on top of that Hegseth threatened to take it further and designate them a supply chain risk, effectively blacklisting them to many US companies. In other words, he is threatening their entire business, not just the contract with the Pentagon. Typical Trump administration strong-arm tactics. Google dropped the no-evil directive entirely voluntarily. Same can't be said of Anthropic, so perh
  • AI company (walks back pledge|promises utopia|promises dystopia); everyone shrugs and continues to set money on fire.

  • It's Wykydtron by 3 Inches of Blood.

    Probably just coincidence.

  • " the change to the RSP leaves Anthropic far less constrained by its own safety policies, which previously categorically barred it from training models above a certain level if appropriate safety measures weren’t already in place."

    None of their rivals had adopted that ban.

    "Instead, the Trump Administration has endorsed a let-it-rip attitude to AI development, even going so far as to attempt to nullify state regulations. No federal AI law is on the horizon."

  • They are about to spread for kegsbreath

    • Lot of good it'll do as the terminators kill us all...
      They should have renamed themselves Skynet; maybe then Drump wouldn't have added to the problem.

(PS: I don't think it's going to be intelligent, but that doesn't mean it can't follow the plotline of all those sci-fi books, which nearly all say to kill humans.)

  • Because if they're shitty, why shouldn't we be the same?
• So, just like how the USA bitched about Chinese companies having ties to the Chinese government...
Now the world knows US IT is controlled by the US government.
It is now unsafe.

Looks like the EU, starting with France, is right: dump US IT, go open source, and be in control.

    Hey EU, can you please start up some safe search engine...call it EUSearch, and a EUTube, a EUChat, EUEat, etc etc etc too would be great.
  • Why do these companies bother pretending to have ethics, or a conscience?

    We all know they'll embrace evil the second there's money in it... Skynet is coming.
  • obligatory smbc https://www.smbc-comics.com/co... [smbc-comics.com]
  • "If you don't stick to your values when they're being tested, they're not values: they're hobbies" --Jon Stewart
