Anthropic CEO Says He's 'Deeply Uncomfortable' With Unelected Tech Elites Shaping AI (businessinsider.com)
Anthropic CEO Dario Amodei says he's uneasy about how much power a handful of tech leaders -- including himself -- have over the future of artificial intelligence. From a report: "I think I'm deeply uncomfortable with these decisions being made by a few companies, by a few people," Amodei told Anderson Cooper in a "60 Minutes" episode that aired Sunday. "Like who elected you and Sam Altman?" asked Anderson. "No one. Honestly, no one," Amodei replied.
It's called Capitalism (Score:2, Flamebait)
"I think I'm deeply uncomfortable with these decisions being made by a few companies, by a few people," Amodei told Anderson Cooper in a "60 Minutes" episode that aired Sunday. "Like who elected you and Sam Altman?" asked Anderson. "No one. Honestly, no one," Amodei replied.
When you get control of the money, you get control of the means of production. That's literally what capitalism is for.
Re: (Score:3, Informative)
What you are describing is called Plutocracy, not capitalism.
Plutocracy is rule by the rich. Nobody wants to admit that, so they often lie and claim to be capitalists.
Capitalism is about the Free Market (Free as in choice) not ruling.
Re: (Score:3, Informative)
There is no ruling going on here.
There is no rule that makes you use it. You are free to choose competitors or to use no AI tool at all. They offer a service, and it is entirely up to you whether you want to use it; you are not being ruled.
Re: It's called Capitalism (Score:2)
Yes and you are free to live in a world where AI companies accelerate global warming and generalised surveillance. If you don't like it, just make your own world without these issues. Is this really how stupid the libertarian discourse is in 2025? Or is there more depth to it? Right now I only see the stupid part.
Re: It's called Capitalism (Score:5, Insightful)
Yes but the free market naturally trends towards consolidation, and thus plutocracy.
Re: (Score:3)
Free markets do not need to allow corporations. After the problems surfaced by the Dutch East India Company, they were banned in the US. It wasn't until Lincoln allowed them to exist, to help build up for the war, that they were permitted under limited charters. Then, through legal wrangling and the 14th Amendment, corporations extended their lifelines. Capitalism isn't the issue; corporations being seen as people is the issue.
Re: (Score:1)
AI is a technology; whoever develops it better supposedly wins something. Governments have nothing to do with this; technology doesn't belong to governments.
Re: It's called Capitalism (Score:2)
Most nations operate a mixed economy, not a free market.
So some regulation of markets exists almost everywhere. In cases where the political organs answer to the largest and most influential donors, you get a plutocracy. In cases where they answer to the people, you have a representative democracy.
Re: (Score:2)
I would say that every nation exists as a mixed economy, unless the government has so collapsed that it's no longer worthy of the term.
Re: (Score:2)
Plutocracy is rule by the rich.
The largest (wealthiest) block of shareholders in this country are the pension funds. Don't like the way business is being run? Complain to your union rep.
Re: (Score:2)
No, they aren't. The largest block of corporate shares is owned by investment funds.
Where do you think pension funds put their money?
Re: (Score:2)
The largest single investor in US markets is Warren Buffett, at $147 billion at the end of 2024. CalPERS has $500 billion in assets (pension and health funds), and that's just one pension fund of many. Individuals hold 38% of the market equity. People like Bezos and Musk might appear to own billions individually, but much of that is in their own companies and is relatively illiquid as a result.
If your financial planner doesn't know how the equities markets look, I'd take my money elsewhere if I were you.
Re: (Score:2)
That's nice. How much domestic corporate stock CalPERS own?
Outright? Not very much. Most of their holdings are through mutual funds and other types of equity holding structures. Pension funds are really big in that much-hated segment of the market: private equity. Good luck even trying to track those investments down.
By contrast, mutual funds have $24 trillion in assets.
Question: When your mutual fund sells 100 shares of Apple, who is responsible for paying tax on the capital gains?
Answer: You are. Because tho
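The pass-through mechanics in that Q&A can be sketched with a few lines of arithmetic. This is only an illustration of the pro-rata idea (a fund's realized gain is distributed across fund shares, and each holder owes tax on their slice); all numbers and the function name are hypothetical, and this is not tax advice.

```python
# Hypothetical sketch of mutual-fund capital-gains pass-through.
# US funds distribute realized gains to shareholders, who owe the
# tax on their pro-rata share even if they reinvest the payout.

def gains_distribution(shares_sold, cost_basis, sale_price,
                       fund_shares_outstanding, my_fund_shares):
    """Return my taxable share of the fund's realized gain."""
    realized_gain = shares_sold * (sale_price - cost_basis)
    gain_per_fund_share = realized_gain / fund_shares_outstanding
    return gain_per_fund_share * my_fund_shares

# Fund sells 100 shares of a stock bought at $150, now at $250,
# in a fund with 1M shares outstanding, of which I hold 5,000:
my_taxable_gain = gains_distribution(
    shares_sold=100, cost_basis=150.0, sale_price=250.0,
    fund_shares_outstanding=1_000_000, my_fund_shares=5_000)
print(f"My taxable distribution: ${my_taxable_gain:.2f}")
```

Even a small holder of a large fund receives a taxable distribution whenever the fund realizes gains, which is the point the comment is making.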
Re: (Score:3)
Capitalism and plutocracy are not mutually exclusive. The US is already a plutocracy, and perhaps always has been, on the national level. Nevertheless, new businesses are formed every day by capitalism, including mom and pops and outfits like Anthropic.
Re: (Score:3)
Capitalism is about the Free Market (Free as in choice) not ruling.
False. Free Market is only one kind of Capitalism. Further, there has never actually been a free market of any significant size. It's an ideal which can only be approached, and ironically, it requires regulation to do so.
Re: (Score:2)
What you are describing is called Plutocracy, not capitalism.
Plutocracy is rule by the rich. Nobody wants to admit that, so they often lie and claim to be capitalists.
Plutocracy is a form of government and capitalism is an economic system. They describe different things and can exist together just fine.
Capitalism is about the Free Market (Free as in choice) not ruling.
Capitalism and a Free Market also describe different things. All you need to have capitalism is private ownership of the means of production in the economy. A free market is arguably necessary to ensure capitalism doesn't devolve into a plutocracy, but it isn't a necessary component of capitalism.
Flip side (Score:5, Interesting)
Would he actually be more comfortable with our Elected non-tech elites making the big decisions?
I just don't see our legislative process or administrative state as terribly well equipped to deal with shaping AI technology.
I think their job is to:
1) Ensure society's existing guardrails are uniformly and fairly applied to all, independent of whether AI has anything to do with the activity or not.
2) Respond reactively. If we identify a specific activity that, when coupled with AI, is in some way corrosive to the society we generally want to have, then enact legislation to curb it in that area. While anticipating problems and trying to avoid them is generally good practice, with something evolving this rapidly I believe you usually create more issues if you go trying to solve problems you don't yet know you have.
A good example is workforce reduction. A lot of people are convinced there is going to be a huge wave of job losses directly attributable to AI, but we don't really have any evidence of that yet. There are plenty of equally plausible explanations for the current unemployment rate increases. So if you go legislating a bunch of things companies are not allowed to use ML/AI tech for, and it turns out the unemployment uptick isn't AI-related, all you have done is limit productivity gains and create more economic drag.
It is important to keep in mind this is mostly just computers filling out paperwork, taking down orders, and churning out questionable-quality music and video clips. Hardly things we can't 'shut off' if need be. It isn't nearly as destructive and irreversible as all kinds of development projects we often give the private sector a long leash to run with.
Elected (Score:5, Insightful)
Re: (Score:2)
I mean, the elected elites don't have time to make legislation; they are too busy going on talk shows and podcasts and being wined and dined by unelected elites.
The evidence is that, often enough, their patrons write the legislation and regulations and send them in for a rubber stamp.
Re: (Score:2)
Actually, the delegation of powers to various government entities and the people is arguably just as important. That's why we have democratically elected representatives instead of a pure democracy: our founding fathers feared the resulting mob rule.
Re: (Score:1)
Nobody said direct democracy; nobody is talking about direct democracy. Why do Republicans insist on bringing up a point nobody is arguing for?
Oh, it's because they have become anti-democratic and illiberal, so they need a strawman to argue against to make those views seem palatable.
The "it's a republic, not a democracy" line has always been a bad-faith notion, and anyone who says it, you can now take all their views with measured skepticism; it's a signal they don't like the US system.
Re: (Score:2)
You seem to think most of the founders agreed with Alexander Hamilton that the American people were incapable of self-government.
One of the closest governments to a "pure democracy" is Switzerland, where the cantons hold citizen initiative votes about half a dozen times a year, but only on major issues. Do you think the people show up for weekly town meetings to decide the priorities for fixing potholes? Doubtful. But that's where most of the work of governing occurs. The people just don't have the time for that level of detail.
Democracies work when everyone can sit around the campfire and come to a consensus about what work nee
Re: (Score:2)
It's the exact opposite of a belief in self-government, where the challenge is to make sure everyone is listened to. Not because it's their "right", but because otherwise you get poor decisions based on narrow interests with limited scope, and you get decisions that serve the interests of the authority regardless of how well they serve the interests of those excluded from the discussion.
...
At one point 85% of the people dying of COVID in Minnesota had come out of a nursing home or other institution where they caught the disease. But the governor was consulting with hospital administrators from Mayo Clinic, so protective gear was reserved for hospitals and emergency responders. Do you suppose that if the process had been public, the folks operating nursing homes, the residents, and their families might have pointed out that nursing home residents were more vulnerable than hospital staff, and in situations where they had almost no ability to control their own exposure?
More vulnerable, yes. But if you put it up to a popular vote, the nursing home residents get the PPE and the hospital staff, exposed to higher viral loads don't.
Sorry to break it to you, but grandpa is probably going to die if he catches Covid, masks or not. And the hospitals have to stay open for a lot more people than just the Covid patients: heart attacks, traffic accidents, industrial injuries. And yes, here in Seattle, the public hospitals have more than their share of drug overdose
Re: (Score:2)
You weren't in the room when the decision was made.
How do you know that?
Re: (Score:2)
Would he actually be more comfortable with our Elected non-tech elites making the big decisions?
Right. I'm far, far less comfortable with our current politicians and regulators shaping AI.
I think their job is to:
1) Ensure society's existing guardrails are uniformly and fairly applied to all, independent of whether AI has anything to do with the activity or not.
2) Respond reactively.
No doubt that's what we think their job ought to be. How they actually act is: (1) get elected, (2) get re-elected, (3) provide favors to whoever helped with (1) and can ensure (2), and increasingly (4) enact policies or legislation to support their personal worldview, facts and other people's opinions be damned.
I am totally comfortable with Corps making AI. (Score:3)
Because AI is not very important. Large Language Models are morons, not artificially intelligent.
No intelligent human lets an LLM do anything important beyond suggesting stuff.
LLMs do a lot of minor tasks.
Yes, corps could use LLMs to feed people propaganda. Guess what: they did that BEFORE LLMs, and if LLMs vanished, they would still be doing it.
You've missed the elephant (Score:5, Insightful)
LLMs make a lot of mistakes, but the tech bros don't care; they're using them for all sorts of things, including supposed self-driving cars. If the AI fucks up and causes issues, well, in appendix section 16, subsection A, paragraph 21 there'll be a clause explicitly exempting the AI company from any responsibility, and in jurisdictions where that disclaimer is void, then what the hell, they've made billions anyway and they'll just settle out of court.
Re:You've missed the elephant (Score:4)
they're using them for all sorts of things, including supposed self-driving cars. If the AI fucks up and causes issues, well, in appendix section 16, subsection A, paragraph 21 there'll be a clause explicitly exempting the AI company from any responsibility
Waymo, at least, has explicitly taken responsibility for whatever their self-driving cars do. And, honestly, it doesn't seem possible for a self-driving system's maker to avoid liability, because there's absolutely no other entity to assign it to. Tesla avoids liability (so far) by explicitly requiring human supervision. But if they ever want to claim level 4 or 5 they're going to have to take responsibility.
in jurisdictions where that disclaimer is void, then what the hell, they've made billions anyway and they'll just settle out of court
I think such a disclaimer would be invalid in all jurisdictions, if they even tried to make it, which I don't think they'll do because it would be ridiculous. As for settling... yeah, that's what basically always happens with automobile accidents. The at-fault party (or their insurer) pays the costs of the injured party. No one even bothers going to court unless there's a dispute about which party was at fault, and one thing about self-driving systems is that they have incredibly-detailed sensor data, all logged and available for review, so there really won't ever be any dispute about fault.
Re: You've missed the elephant (Score:4, Insightful)
Sorry, but "death by GPS" is a label, not a reality. Someone decided to follow the instructions of the GPS. So this is not analogous to an actually self-driving car.
Re: (Score:2)
Your view is a bit naive. Google/Alphabet, with its Maps app, never had to take responsibility for "death by GPS", which is a thing.
Completely different situation. A human is making the decisions in that case. Google Maps even warns drivers not to blindly follow it. This is entirely different from a fully autonomous vehicle which is moving without any human direction or control.
But who is taking OpenAI to court for making users commit suicide? Sure, if you take my comment literally, there will be someone suing. But they get out of it 99% of the time.
Umm, none of the suits against OpenAI over suicides have been closed out; they're all still pending. It also isn't remotely the same thing. A self-driving car operating without any human control that kills someone is clearly at fault, and there is no one to shift
Re: (Score:2)
No tool is perfect. The fool is the person who thinks it would be perfect. And yes, there is some misleading marketing in the AI sector. Who has never had a spam mail in the inbox or a real mail in the spam folder? All tools that are "data driven" will have their weaknesses (some of which may be alleviated with more training, while others are hard to fix). The point is, for what LLMs *can* do there is no classical algorithm. Of course some people try to let LLMs do things where classical algorithms already exist
Re: (Score:3)
No intelligent human lets an LLM do anything important beyond suggesting stuff.
And if intelligent humans were the only ones holding political power, managing infrastructure, litigating court cases, writing computer programs, etc, then we'd be fine. So obviously, we're not fine.
Yes corps could use LLMs to feed people propaganda. Guess what, they did that BEFORE LLMs and if LLMs vanish, they would still be doing it.
LLMs can do it faster and more effectively. Even now, in many cases they can do it more convincingly. Saying that LLMs don't increase the scope and effectiveness of propaganda is like saying that nuclear warheads don't increase the scope and effectiveness of military actions. The latter of which, by the way, are
business insider with the scoop! (Score:2)
Slashdot posting an article about an interview on 60 minutes.
It's like the human centipede.
Translation (Score:2)
Oh no (Score:2)
"I'm so uncomfortable with myself".
Elites posturing about their victimhood taken to yet another level.
Re: (Score:2)
The thing is, it wouldn't help things for one player to quit.
OTOH, as someone else pointed out, the government isn't exactly trustworthy either. (I consider accepting funds from lobbyist groups to be accepting bribes, just like accepting funds from individuals.)
On the third hand, open source approaches can't limit the use to which something is put.
Perhaps the "corporate powers" are the least bad choice...but that sure isn't encouraging.
Who elected Toru Iwatani to make Pac-Man? (Score:2, Informative)
People do stuff. WTF, are we supposed to have a world-wide committee meeting every time some hacker starts a random project?
Sam Altman can have his own "AI," with blackjack and hookers. If you don't want yours to have that, then write it differently. If his project is affecting yours, it's because he's on the sharp end, running into scaling issues and regulators first. Let him bear the brunt of that, so you don't have to.
The only thing that can really go wrong, is if he uses his financial influence to get
Re: (Score:3)
Equating AI with Pac-Man isn't really the intellectual flex you probably think it is.
Re: (Score:2)
The only thing that can really go wrong
That is very, very far from the only thing that can go wrong. Human extinction is within the range of possibilities.
His Whole Pitch is Safety (Score:5, Interesting)
Anthropic's entire pitch has always been safety. Innovation like this tends to favor a very few companies, and it leaves behind a whole pile of losers that also had to spend ridiculous amounts of capital in the hopes of catching the next wave. If you bet on the winning company you make a pile of money, if you pick one of the losers then the capital you invested evaporates. Anthropic has positioned itself as OpenAI, except with safeguards, and that could very well be the formula that wins the jackpot. Historically, litigation and government sponsorship have been instrumental in picking winners.
However, as things currently stand, Anthropic is unlikely to win on technical merits over its competition. So Dario's entire job as a CEO is basically to get the government involved. If he can create enough doubt about the people that are currently making decisions in AI circles that the government gets involved, either directly through government investment, or indirectly through legislation, then his firm has a chance at grabbing the brass ring. That's not to say that he is wrong, he might even be sincere. It is just that it isn't surprising that his pitch is that AI has the potential to be wildly dangerous and we need to think about safety. That's essentially the only path that makes his firm a viable long term player.
Re: (Score:2)
Apparently, "safeguards" means "don't let the AI say something that hurts feelings" rather than "don't let the AI act in a manner that is dangerous and unlawful." I say this because, apparently, Anthropic's systems have been leveraged by nation-state actors for hacking campaigns (though the details are minimal and read like marketing spiel about how awesome their tools are rather than information on what actually happened).
Re: (Score:2)
It is just that it isn't surprising that his pitch is that AI has the potential to be wildly dangerous and we need to think about safety. That's essentially the only path that makes his firm a viable long term player.
If you believe that AI has the potential to be wildly dangerous, that may be the only path that makes the human race a viable long term player.
And I've yet to see any well thought-out argument showing that AI doesn't have the potential to be wildly dangerous. If anyone has one, please post it!
The closest I've seen are:
1. Humans are incapable of creating AGI, so the AI companies are simply going to fail.
2. There is a hard upper limit on intelligence, and it's not far above human-level, so even when
Re: (Score:2)
Good take. Amodei and others left OpenAI because (I imagine) Altman is intolerable, and because they know they're smarter and don't want to share revenue with a sales-pitch guy who suddenly thinks he's the AI messiah, like Jared Leto's Wallace character in Blade Runner 2049.
Anthropic CEO Dario Amodei (Score:3)
His surname is one transposition away from "AI Mode".
The answer is easy (Score:1)
Release your models open source and open weight
This tech should not be controlled by monopolists or governments
It should be available to all
It's called a "conscience", Dario... (Score:2)
Anthropic CEO Dario Amodei says he's uneasy about how much power a handful of tech leaders -- including himself -- have over the future of artificial intelligence.
It's a bit comforting that a rich tech CEO has such thoughts, but it's also very unsettling that he seems to find those thoughts rather novel and devoid of any emotional connection.
"Gee, there's something wrong with this, and something very dangerous - let's set it loose on the world and deploy it widely while we're still experimenting with it!" said no sensible and caring human being, ever.
What problem are we solving, really? (Score:2)
My concern is that they are rushing ahead with implementation without ensuring the technology actually works reliably. The world is their beta site, maybe even late stage alpha. The true miracle continues to be that anything works at all.
It doesn't matter (Score:2)
These questions only matter to people who believe we will find some superintelligence. Who builds your LLM does not matter that much.
Re: (Score:2)
It's not going to be an LLM. The LLM is just what it's going to use to talk to you. But "world models" are being built, and that is going to be the basis of real intelligence.
Re: (Score:2)
I don't really believe in things like "superintelligence". But I do believe in things like LLMs (possibly through world models) becoming clever enough to "fake" all the intelligence they need to do the job. I want tools, not slaves.
Re: (Score:2)
There will be tools. But there will also be the more general intelligence. One can argue about the time-line, and that's quite reasonable, but denying it requires accepting spiritualism or some such.
For that matter, people are often used as tools. It's not an "either/or" choice.
Re: (Score:2)
I don't claim it won't happen because of a missing soul or similar, but I think the compute/memory/sensory input required is huge, and for *super*intelligence you need more or other inputs than humans have.
Given enough compute, one could now combine human input (leading to human intelligence) and LLMs (leading to systems that can, in a limited way, deal with a lot of knowledge), but you would just obtain the same as the combination human/LLM can give you now. To be better than that, one would need for example to be able to get the spe
Re: (Score:2)
I think your model is only one of several alternatives. I don't foresee a unitary intelligence as likely, but an executive function delegating different tasks to different experts depending on context. And it can't be limited to language, it needs to interact more directly with the physical world. But we're already taking steps in that direction.
Yes, it's difficult. Perhaps it will take a while. But there's absolutely no reason to expect human intelligence to remain the top measure. (Even now there are
Re: (Score:2)
Yes, and for every "gotcha!" where an LLM is worse than an amateur in a field, you find ten fields in which the amateur is an absolute beginner while the LLM is at least at amateur level. The reason people use these things is not that they are absolute experts in one field, but that they are at an intermediate level in many fields.
If you have a look at the questions in some of the standard benchmarks, you quickly find some you can't answer. So if an LLM gets 60% on a benchmark where I get 90%
Not laywers.... (Score:2)
AI f!s up lawyers' jobs. It won't be fully replacing them anytime soon. Hell, REAL lawyers f! up with AI.
https://www.fox10tv.com/2025/1... [fox10tv.com]
https://calmatters.org/economy... [calmatters.org]
https://www.msba.o [msba.org]
Nothing new (Score:2)
All the noise about AI alignment and disruption is a red herring. Alignment is easy: you have lots of independent goals and do your best to trade off among them, and have some sort of constitution or test suite to keep you from veering off course. Not being bad is more important than being good. What isn't clear is that alignment will be used, even though it is easy. Right now things are roughly set up in the USA to benefit the people, but if something or someone gets enough power they can stop caring what