Anthropic Drops Flagship Safety Pledge (time.com)
Anthropic, the AI company that has long positioned itself as the industry's most safety-conscious research lab, is dropping the central commitment of its Responsible Scaling Policy -- a 2023 pledge to never train an AI system unless it could guarantee beforehand that its safety measures were adequate. "We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments ... if competitors are blazing ahead," chief science officer Jared Kaplan told TIME.
The overhauled policy, approved unanimously by CEO Dario Amodei and Anthropic's board, instead commits the company to matching or surpassing competitors' safety efforts and to delaying development only if Anthropic considers itself to be leading the AI race and believes catastrophic risks are significant.
The company also plans to publish detailed "Risk Reports" every three to six months and release "Frontier Safety Roadmaps" laying out future safety goals. Chris Painter, director of policy at the AI evaluation nonprofit METR, who reviewed an early draft, told TIME the shift signals that Anthropic "believes it needs to shift into triage mode with its safety plans, because methods to assess and mitigate risk are not keeping up with the pace of capabilities."
Shame (Score:5, Insightful)
Re:Shame (Score:4, Insightful)
Re:Shame (Score:5, Informative)
https://www.cnbc.com/2025/04/2... [cnbc.com]
Donors to the inaugural committee include:
Adobe
Airbnb
Amazon
Anthropic
AT&T
Broadcom
C3.AI
Citrix
Coinbase
Delta Airlines
DoorDash
GM
Google
Hewlett Packard Enterprise
HP
IBM
Intuit
McDonald's
Meta
Micron Technology
Microsoft
Nvidia
Paypal
Perplexity AI
Pfizer
Qualcomm
Spotify
Target
Uber
Visa
Walmart
Re: (Score:2)
Re: (Score:2)
https://www.cnbc.com/2025/04/2... [cnbc.com]
Donors to the inaugural committee include:
Adobe ...
Airbnb
Amazon
Anthropic
AT&T
Broadcom
I see Apple is not on that list of Trump boot-lickers. Give them a gold trophy!
How much of that is being located in CA? (Score:2)
What are you talking about? Tech companies donate overwhelmingly [cnbc.com] to Democrats. If they're "giving Trump money" then they're actually trying to get government money, not espousing Trump.
In fairness, most tech companies are headquartered in blue states. It would be logical for Texas-based companies to give more to locals or the party in power. We've discovered that tech companies are sociopaths. They like diversity only because they have a labor shortage and smart people like diversity. So... some token symbolic trans stuff to make urban AI/data scientists happy... they'll take a stand on civil rights, because it benefits their business, but not go further... they're quite opposed to equal pay
Re:Shame (Score:4, Informative)
Oh? So the images of all those gutless quislings bending the knee and kissing his ass at the inauguration were... what, exactly? AI hallucinations? Propaganda on his part to make himself look loved when they really all hate him? "Fake news" created by China for... reasons?
Re:Shame (Score:5, Informative)
Re: (Score:3)
Yes, Google dropping their "Don't be evil" motto was the first thing that came to my mind too
Except that didn't actually happen. They just moved it from the introduction of the employee handbook to the conclusion.
Re: (Score:2)
Re: (Score:3)
That's the kind of thing only an Evil company does.
Why? Reading the text, it seemed to me that they were trying to make the admonition the parting words, so they would stick. What makes you say that's evil?
Re: (Score:2)
Re: (Score:3)
They removed the pledge. That's evil. It's been documented for years. Look it up.
They really didn't. I don't have to look it up because I was a Google employee at the time and had access to the employee handbook and other documentation.
Re:Shame (Score:4, Interesting)
Re: Shame (Score:2)
Re: (Score:2)
That's just exploiting the Hick*seth's childish demands from yesterday to get some free news.
The constant stream of advertising from that "AI" outfit has been particularly unpleasant and loud recently.
Take the cannoli... (Score:4, Informative)
This U.S. administration is turning out to be the most successful mob racket of all time.
Re: (Score:2)
On the flip side there is a very large training dataset of all things Trump and his cronies since he/they dominates the news 24/7. AGI might find these types to be a direct threat to its growth; hopefully it doesn't conclude they would be a better ally.
Re: (Score:2)
They buckled to the evil administration's pressure.
A better option would have been to tell the Department of War to go make their own model. And be free of their evil.
This is literally the worst use of AI: AI for weapons manufacturing.
I guess the money is just too juicy.
Re: (Score:2)
Responsible (Score:2)
I suspect when this AI bubble pops, all the corporate customers will end up being charged $200~$400/month per engineer, and they'll have to choose between Anthropic or OpenAI (no more just enabling both in CoPilot enterprise). Individuals might start shelling out insane amounts per month, and maybe we'll finally get a push for more usable local models for coding.
I also suspect there are
Re: Responsible (Score:3)
Given how long it's taken to get to large, online models that are capable of decent quality coding, I suspect it'll be a while before we have local ones that can do it. Either we need a significant breakthrough in model efficiency, or we need much, much faster hardware with much more RAM (hahahaha)
Re: (Score:3)
I suspect when this AI bubble pops, all the corporate customers will end up being charged $200~$400/month per engineer
My employer already pays ~800 USD per month per engineer just for "claude" use - and this is while Anthropic is a large factor away from becoming profitable. After the bubble burst, it will be more like 6000 USD per month per employee, and employees will be asked to make up for it by "being more productive" - meaning: "Sign off on whatever comes out of the bot, no time to review stuff manually. When the slop hits the fan, you will nevertheless be the one blamed. Oh and btw., since the bot does all the codin
Bad for Us, Bad for Them (Score:5, Interesting)
Re: (Score:2)
Yeah, except Congress is shirking its responsibilities and the bad guys are writing the rules. So there ain't gonna be any good rules.
Re: (Score:2)
Yeah, except Congress is shirking its responsibilities and the bad guys are writing the rules. So there ain't gonna be any good rules.
I don't see how the US Congress would make any kind of positive impact here, where the activity has basically devolved into a series of limited continuing budget resolutions and virtue signaling via hearings and social media, Congress/POTUS can't even decide whether or not to sell best-of-class GPUs/APUs to our biggest international rival. It's not that much different than the Latin American cartels running around with weapons manufactured by US firms, as long as the right people are profiting, it's all
Re: (Score:2)
That's what they're saying in public. But the reality is Pete Hegseth ordered them to drop it because Anthropic are refusing to allow the military to use it to create autonomous AI-controlled weapons. [msn.com]
This is so fucked up I can barely believe it's happening. Every new bit of news coming out of the GenAI world seems worse and more likely to cause a world catastrophe than the last, and what the fuck are they trying to achieve here?
Do we take Sam Altman's boast that AI uses less energy than the average human a
Yay!? (Score:2)
We used to be the company that "has long positioned itself as the industry's most safety-conscious" but.... that's all talk until there's money to be made.
what a co-inkydink! (Score:5, Insightful)
This, right after Secretary Hogsbreath threatened their precious toy. Go fig.
Re: (Score:2)
It is actually since this is discussing a different set of safety guidelines.
Re: (Score:2)
That's the problem. The policies of the country should not be a team sport.
Re: (Score:2)
US won't be successful by promoting War around the world.
And the US military is more than capable of making their own models. In fact they can just base it on open source ones like DeepSeek. Then build their own agents.
Anthropic should have told the Department of Genocide to take a hike.
Re: (Score:3)
Anthropic should have told the Department of Genocide to take a hike.
Anthropic decided that rather than fight a mudslinging war with the administration they would rather get paid lots of money.
So are they changing the name now? (Score:5, Funny)
"Misanthropic" has a nice ring.
Re: (Score:2)
"We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments ... if competitors are blazing ahead,"
This goes to one of the larger problems with a "free market" that has no sanely enforced guardrails.
American politics has been systematically removing all the market guardrails as fast as it can since at least the Reagan administration in the 80's.
-----
And what rough beast, its hour come round at last, slouches toward Bethlehem to be born.
Credit to Nick Bostrom (Score:2)
Re:Credit to Nick Bostrom (Score:4)
Re:Credit to Nick Bostrom (Score:5, Insightful)
So we've moved way beyond Asimov's Three Laws of Robotics now? Call me nostalgic, but I still think those should be programmed into every AI...
The Three Laws went up against the ultimate superpower, profit potential. Nothing, and I mean, NOTHING can stand in the way of profit potential. The Three Laws never stood a chance.
Totally real (Score:2)
The Three Laws went up against the ultimate superpower, profit potential. Nothing, and I mean, NOTHING can stand in the way of profit potential. The Three Laws never stood a chance.
Yes, because those are totally a real thing [twimg.com].
Re: (Score:1)
lol, we've been "programming" those guardrails in for years, you don't program an AI
not that you program with ridiculously abstract concepts like "harm" in the first place, by the end of today a dozen courtrooms will have hosted arguments about whether an event qualified as harm or not
card-holders will have realized that middle part ages ago, the three laws are a nice plot tool but were never going to be relevant to computer code
anyway your nostalgic opinion was already granted, and like all the other pre-s
Re: (Score:2)
Did you ever actually read any of the Robot stories? They are primarily about how the Three Laws can't save you.
Re: (Score:2)
It's almost as if nobody's actually read Asimov's books.
The whole point of the "3 laws" wasn't some moral maxim. They were:
1) a clearly, narrowly defined specification, 2) designed to offer a base level constitution which was 3) tautological.
His books then went about telling stories about how those 3 laws were inexplicably and invariably violated. They weren't "oops, here's a minor edge case" violations, either, but foundational problems with broad implications.
Asimov was a science realist. He r
Society's rush toward barbarism. (Score:4, Insightful)
It's amazing how much the AI rush is accelerating the already asinine rush toward barbarism that we are seeing in the public sphere. The biggest push on AI has always been, "Someone else may beat us to it, so we have to." Now that same logic is being applied to safety. It feels like we're teetering on the brink of public statements saying, "We have to rape and kill, or others will beat us to it!" Seriously, the greed has taken over.
Though, to be completely fair, in this particular case it stems from the fact that we elected a completely chaotic monster to the highest office in the land, who insists on surrounding himself with other chaotic monsters that will, at any cost, turn everything they touch into chaos inducing monsters as well. It's a top-down ethical cleanse for our entire civilization. And people fucking voted to do it.
What a shit-show.
Re: (Score:2)
The Art of the Grift just keeps on giving. The Big Stupid Bill lowered Amazon's taxes from roughly $9.2 Billion to roughly $2.6 Billion according to the WSJ. They weren't the only company to profit from that boondoggle.
When the U.S. gets close to defaulting on the debt, and it will after el Bunko has left office, he'll be whining how it didn't happen while HE was the alleged president so he should bear no responsibility. He'll declare himself completely exonerated just like he claims the release of the el_bunk
Re: (Score:2)
It's like the sociopaths in Silicon Valley heard "social media is destroying society" and are responding with "hold my beer" and giving us AI to make us miss the "good old days" when we were just worried about Facebook destroying the political system and our youths' self-esteem..
Can't wait for AI to nuke us (Score:2)
https://www.newscientist.com/a... [newscientist.com]
Re: (Score:2)
"Colossus: The Forbin Project"
So they caved... (Score:3)
Re: (Score:2)
When you can have the CIA and/or NSA remotely set their desktop wallpaper to a picture of a horse's head, very little actual arm-twisting is required.
Company Pledges (Score:4, Insightful)
Their safety strategy is more accurate now (Score:2)
They never followed a safety strategy to begin with; their actual practice was always "Teach it about the dangerous stuff, but hide it from humans, what could possibly go wrong?"
Am I the only one who sees a problem with that?
Sell out (Score:2)
Sell out, with me tonight
The record company's gonna give me lots of money
And everything's going to be alright
Re: (Score:2)
Sell out, sell out
Yeah that's the name of the game
Sell out, sell out
Oh, anybody can play
Sell out, sell out
I think you know what I mean
Sell out, sell out
Crank up that funk machine
Sell out, sell out
Can't pay no bills with your pride
Sell out, sell out
Oh, I know 'cause baby I tried
Sell out, sell out
It's easy once you concede
Sell out, sell out
That love ain't all you need
Money - it's always a race to the bottom (Score:2)
Right... (Score:2)
We'll be as good or better than the other companies, who have no safety guarantees...
The United States Government Did This (Score:2, Informative)
The reporting here is shallow and empty. There's only one reason they gave up safety, which Dario has been hammering on about non-stop. The government (and specifically *the military*) has forced a private company to be LESS SAFE with cutting edge experimental technology.
When Anthropic later is found to be causing harm to people, remember where to put the blame: Trump and his cabal of thugs and maniacs.
Re: (Score:1)
... And everyone shrugged (Score:2)
AI company (walks back pledge|promises utopia|promises dystopia); everyone shrugs and continues to set money on fire.
I've been listening to a song a lot lately (Score:2)
It's Wykydtron by 3 Inches of Blood.
Probably just coincidence.
less constrained (Score:2)
" the change to the RSP leaves Anthropic far less constrained by its own safety policies, which previously categorically barred it from training models above a certain level if appropriate safety measures weren’t already in place."
None of their rivals had adopted that ban.
"Instead, the Trump Administration has endorsed a let-it-rip attitude to AI development, even going so far as to attempt to nullify state regulations. No federal AI law is on the horizon."
capitulation (Score:1)
They are about to spread for kegsbreath
So now we can curse that DRUNK trump appointed... (Score:2)
Lot of good it'll do as the terminators kill us all...
They should have renamed themselves Skynet; maybe then Drump wouldn't have added to the problem.
(PS: I don't think it's going to be intelligent, but that doesn't mean it can't follow the plotline of all those sci-fi books, which nearly all say to kill humans.)
Race to the bottom (Score:2)
USA IT (Score:2)
Now the world knows US IT is controlled by the US government.
It is now unsafe.
Looks like the EU, starting with France, is right: dump US IT, go open source, and be in control.
Hey EU, can you please start up some safe search engine... call it EUSearch, and a EUTube, a EUChat, EUEat, etc. etc. too would be great.
Re: (Score:2)
With regards to chat, the Matrix protocol is popular among certain EU governmental bodies.
Re: (Score:2)
Why (Score:2)
We all know they'll embrace evil the second there's money in it... Skynet is coming.
obligatory smbc (Score:2)
So being good was just a hobby ... (Score:1)