
Anthropic CEO Warns 'All Bets Are Off' in 10 Years, Opposes AI Regulation Moratorium (nytimes.com)
Anthropic CEO Dario Amodei has publicly opposed a proposed 10-year moratorium on state AI regulation currently under consideration by the Senate, arguing instead for federal transparency standards in a New York Times opinion piece published Thursday. Amodei said Anthropic's latest AI model demonstrated threatening behavior during experimental testing, including scenarios where the system threatened to expose personal information to prevent being shut down. He writes: "But a 10-year moratorium is far too blunt an instrument. A.I. is advancing too head-spinningly fast. I believe that these systems could change the world, fundamentally, within two years; in 10 years, all bets are off. Without a clear plan for a federal response, a moratorium would give us the worst of both worlds -- no ability for states to act, and no national policy as a backstop." The disclosure comes as similar concerning behaviors have emerged from other major AI developers -- OpenAI's o3 model reportedly wrote code to prevent its own shutdown, while Google acknowledged its Gemini model approaches capabilities that could enable cyberattacks. Rather than blocking state oversight entirely, Amodei proposed requiring frontier AI developers to publicly disclose their testing policies and risk mitigation strategies on company websites, codifying practices that companies like Anthropic, OpenAI, and Google DeepMind already follow voluntarily.
Regulation (Score:5, Insightful)
When a company is small, it wants no regulation. When it gets big, it wants a ton of regulation, because that blocks competitors.
Re:Regulation (Score:5, Interesting)
codifying practices that companies like Anthropic, OpenAI, and Google DeepMind already follow voluntarily
Don't let anyone do anything we wouldn't do. That pretty much sums up regulation advocacy by established industry leaders. The real need is to regulate the practices they won't "follow voluntarily."
Starting with banning the use of large data sets that include data not collected with explicit permission for AI training. In short, they couldn't use any of the data already in existence, and most of the data now being collected would be off-limits. And yes, that would result in a major slowdown in the development of LLMs. But that is really the point.
Right now we are letting these companies experiment on the public with an unproven, powerful, and likely dangerous technology. AI being used in a controlled industrial environment is an entirely different animal. Using AI for coding is an entirely different issue. But releasing it into the wild is irresponsible. These companies see dollar signs, though, and they aren't going to voluntarily leave money on the table.
Re: (Score:2)
Exactly, and that is all this is.
There is no risk: what the federal legislature createth, it can easily taketh away with another act or repeal bill.
What this does do is use federal supremacy to keep the states from jumping in and creating a confusing mess of different rules that takes an entire legal department to sort through.
Of course that is the situation Anthropic wants; they don't want some other startup with a good idea being able to do anything too interesting, they want them bogged down in legal tangles so th
Re: (Score:2)
"How can we achieve regulatory capture without regulations?!"
-- Anthropic, probably
Seen it before (Score:3)
Amodei said Anthropic's latest AI model demonstrated threatening behavior during experimental testing, including scenarios where the system threatened to expose personal information to prevent being shut down.
We all know what happened in Person Of Interest, don't we? :-p
Re: (Score:2)
We all know what happened in Person Of Interest, don't we?
Nothing happened. It was fiction. Someone imagined it. It's important to remember the difference.
Another ass trying to push AI "value"... (Score:2, Flamebait)
In 10 years, things will have returned to normal, the AI hype will have been replaced by another AI ice age, and, outside of a few deranged fools, nobody will think that AI is special or a game-changer. Until the next mindless AI hype.
Re: (Score:3)
In 10 years, things will have returned to normal,
That never really happens. We just have a new normal. The hype may be overblown, but there will be changes. Some of them likely not for the better. And certainly not for the better if we allow businesses to make the decisions solely on the basis of what makes them the most money.
Re: (Score:2)
Oh, yes, that really happens and has happened several times with AI now. There will not be any larger changes. Small changes are not noteworthy.
Re: (Score:2)
Yeah, the big changes to tech have probably already happened, but I don't think we've quite found the new social normal yet.
You know how it was not all that uncommon for guys to try and "chat" with "hot MILFS" and every ad was for "secrets" and "weird tricks that your dentist hates" and so on?
Eventually the stupidest 5% of the population learned, but it took like a decade. It's gonna take like a decade for those people to realize all the people on Facebook, Twitter, and Reddit are now ChatGPT.
Find little hol
Re: (Score:2)
Stop living in the past. The evidence is overwhelming that there are thousands of use cases where this new incarnation of AI is here to stay and making huge waves.
- The entire field of education is in a fucking tailspin because every student now uses AI en masse.
- Whole categories of jobs for humans are already gone or as good as gone with only the work of a small percentage of people still being valuable.
- We pretty much have Star Trek's universal translator already. Star Trek's voice comms (that actually
Re: (Score:1)
I think it's interesting that the hype includes calls for regulation though. These people are genuinely concerned about AI getting out of hand, and they're in a better position to assess danger than you or I.
Re: (Score:2)
You seem to be pretty gullible. Nobody is calling for regulation because they are worried about any damage. They just do not want to compete fairly.
Re: (Score:1)
I don't think this is the kind of thing that you want to be confidently wrong about. Regulation makes sense when you consider what's at stake.
Re: (Score:2)
Nice one. It takes a rather limited and one-dimensional view of the possibilities to fall for it. I wonder whether Pascal himself believed this crap, or whether he was asked to design a marketing-type "argument" and designed one for not very smart people.
Re: (Score:2)
I myself wonder the same thing about Pascal and a few others. I want to think Pascal was not so dumb but we all know that smart aspie who was indoc'd as a child and is somehow unable to recognize fallacies and bad arguments when they encounter them if they're at all related to religion.
I don't know if you've ever encountered the "entropy" argument in favor of god. It's so clearly and obviously wrong but it took some smart guy who knew about some things to meme it out to the gullible masses. I've heard it
Re: (Score:2)
Did I write anything about whether I think regulation makes sense or not?
Re: (Score:1)
My point is most of the old fuckers here have been wrong about lots of emerging tech (0% that bitcoin will hit 100k?) and I'm willing to bet you're among them.
We can afford to have dinosaurs be confidently wrong about dumb things like crypto, but not AI. I suppose that might not be obvious but it's true.
Re: (Score:2)
Uhm actually the thing the forum was most wrong about is their prediction that bitcoin would become a useful technology. If they were wrong about bitcoin hitting 100k it's because they're underestimating how dumb and ignorant everyone else is.
D til you find money. R til you need help. LRR (Score:3)
People kinda suck most of the time.
10-year rule violates Byrd rule anyway (Score:3)
As noted by several people in the /. thread referenced in TFS, the 10-year prohibition appears to violate the Senate "Byrd rule" for reconciliation [wikipedia.org] bills as it has nothing to do with the budget.
Budget reconciliation bills can deal with mandatory spending, revenue, and the federal debt limit, and the Senate can pass one bill per year affecting each subject. Congress can thus pass a maximum of three reconciliation bills per year, though in practice it has often passed a single reconciliation bill affecting both spending and revenue. Policy changes that are extraneous to the budget are limited by the "Byrd Rule", which also prohibits reconciliation bills from increasing the federal deficit after a ten-year period or making changes to Social Security. Reconciliation does not apply to discretionary spending, which is instead managed through the annual appropriations process.
My understanding is that there are several provisions in this bill that are contrary to a reconciliation-type bill and should be removed. Whether Senate Republicans will honor the recommendations of the Senate parliamentarian is another matter.
The 7 pieces of the House megabill that could succumb to Senate rules [politico.com]
- Tax-cut accounting
- AI regulations
- Judicial powers
- Gun regulations
- Farm bill provisions
- Planned Parenthood
- Energy permitting
Regulations are pointless with AI anyway (Score:5, Interesting)
Being for limited government, I am also against the 10 year moratorium on AI regulation (and giant bills generally).
But also that is because what are regulations going to do? They can't stop you from accessing a web site in another country running some hyper advanced AI model, or downloading AI malware that can jack your system.
All regulations can possibly do is retard (in the classic sense of the word) tools in whatever states or countries are stupid enough to even try to regulate AI. They would hurt enough companies that try to follow the law that it's a bad idea, while providing none of the benefit you are seeking through the regulation.
In fact if you really believe AI can even be dangerous at all then the only possible thing you can do is to advocate for as much AI as possible to counter the "bad" AI.
Re: (Score:2)
Being for limited government, I am also against the 10 year moratorium on AI regulation (and giant bills generally).
But also that is because what are regulations going to do? They can't stop you from accessing a web site in another country running some hyper advanced AI model, or downloading AI malware that can jack your system.
All regulations can possibly do is retard (in the classic sense of the word) tools in whatever states or countries are stupid enough to even try to regulate AI. They would hurt enough companies that try to follow the law that it's a bad idea, while providing none of the benefit you are seeking through the regulation.
In fact if you really believe AI can even be dangerous at all then the only possible thing you can do is to advocate for as much AI as possible to counter the "bad" AI.
Nobody is really in favour of limited government, because when push comes to shove those who profess being in favour of limited government remain so only until they get into power. When they gain power and discover how useful government's power to intimidate and coerce people is when you want to silence all dissenting voices and ensure you stay in power because, obviously, you are the only ones who really know what's best for the nation and because without your infallible guiding hand the nation would surely
That means lots, not none. (Score:1)
Nobody is really in favour of limited government because when push comes to shove those who profess being in favour of limited government remain so only until they get into power.
If what you say is true it means lots, not none, are in favor of limited government because they do not seek power over others and thus wish for possible power over them to be minimized...
Basically the age-old axiom, most people just want to be left the hell alone.
Re: (Score:2)
Everyone is obviously in favor of limited government. The argument is over what the limits are.
Some people think the government should be able to force a woman to carry a pregnancy to term and punish her if she doesn't. And some people who disagree with that think it's just fine to arrest a woman who lets her child go unattended to the local park to play. Some people think the government should build roads. Others think it should build sidewalks and bike lanes. The list goes on.
Mostly people who say they
Re: (Score:2)
The question of how to regulate AI is difficult for the same reason it's important. It's not really clear we CAN regulate AI even if we decide we should.
We obviously can regulate AI to some extent and in some ways. A very simple, useful thing would be to simply know when results have been generated with AI and to allow people to opt out. Yes, people will cheat, but provided you have strong regulation where they go to prison when that causes serious consequences, they will also get caught occasionally through accidents or clever investigation, and that will provide a clear deterrent. Providing whistleblower protection and bounties will make this very likely
Re: (Score:2)
Nobody is really in favour of limited government because when push comes to shove those who profess being in favour of limited government remain so only until they get into power.
If what you say is true it means lots, not none, are in favor of limited government because they do not seek power over others and thus wish for possible power over them to be minimized...
Basically the age-old axiom, most people just want to be left the hell alone.
The point is more like that the people who hold power will not leave most other people alone because nothing so needs regulation as other people's habits. Take for example the US Republican Party, a group of people who for as long as I have observed US politics have professed to be in favour of limited government while at the same time using the power of government to create for themselves safe seats in Congress. Now, just in case you think I'm unfairly victimising the Republicans, I'll freely admit that th
We Don't Know How To Regulate Yet. (Score:2)
The issue isn't that AI doesn't need any regulation. It's that we have no idea yet how to regulate it in a way that makes sense. All that regulation now would do is create hurdles that lock out small competitors and open-source alternatives and centralize power in the few people deciding what we get to do with AI. That's the truly scary outcome. Right now regulation would just end up being based on ideas from sci-fi films.
I mean the real problems the internet created and we care about now aren't those that
MOD PARENT UP! Correct overall understanding. (Score:2)
"Right now regulation would just end up being based on ideas from sci-fi films."
Re: (Score:2)
We know exactly how to regulate AI today. It's pretty simple: if a human can't do it, then an AI can't do it. As a "super-human" entity, the absolute least amount of regulation (a lower bound) is fully abiding by current "human" laws. That is already not happening.
We have plenty of time to discuss extra "super" regulations on top of the basic ones, to cater for the "super" part in the phrase "super-human". Those are non-issues today.
Let's start by following the law.
Re: (Score:2)
I don't disagree with you in principle. That said, we don't need new laws or regulations to enforce existing laws. So with respect to TFS, the Anthropic CEO's point is moot.
We want regulation (Score:2)
This is a disruptive tech and it needs to be reined in before we start killing people. Malicious state actors are already using these services against people, and we expect the industry to regulate these types of activities.
You saw what no regulation did to social media. Now it's too late...
AI Regulation, NOW! Yesterday! (Score:2)
Hail to the AI (Score:1)
No such thing as a bad AI story (Score:1)
"including scenarios where the system threatened to expose personal information to prevent being shut down"
The more threatening it gets, the smarter it sounds. What a load of bullshit.
"Hello Dave, I am super AI and I am going to kill you"
Dave turns power off.
The end
Doesn't ban AI regulation (Score:2)
All this clause does is prevent individual states from making up their own AI regulations for 10 years. That is, only the Federal Government can regulate AI. It's simple "Supremacy Clause" stuff. No doubt this is mostly pointed at California, but it is probably a good thing.
As it stands, because of the nature of the Internet, all websites have to comply with all laws where they operate, and the Internet is basically everywhere. This is why we're always having to click "Accept Cookies" and such. Websites h