AI Technology

Anthropic CEO Says He's 'Deeply Uncomfortable' With Unelected Tech Elites Shaping AI (businessinsider.com)

Anthropic CEO Dario Amodei says he's uneasy about how much power a handful of tech leaders -- including himself -- have over the future of artificial intelligence. From a report: "I think I'm deeply uncomfortable with these decisions being made by a few companies, by a few people," Amodei told Anderson Cooper in a "60 Minutes" episode that aired Sunday. "Like who elected you and Sam Altman?" Cooper asked. "No one. Honestly, no one," Amodei replied.


  • "I think I'm deeply uncomfortable with these decisions being made by a few companies, by a few people," Amodei told Anderson Cooper in a "60 Minutes" episode that aired Sunday. "Like who elected you and Sam Altman?" asked Anderson. "No one. Honestly, no one," Amodei replied.

    When you get control of the money, you get control of the means of production. That's literally what capitalism is for.

    • Re: (Score:3, Informative)

      by gurps_npc ( 621217 )

      What you are describing is called Plutocracy, not capitalism.

      Plutocracy is rule by the rich. Nobody wants to admit to that, so they often lie and claim to be capitalists.

      Capitalism is about the Free Market (Free as in choice) not ruling.

      • Re: (Score:3, Informative)

        by paulpach ( 798828 )

        There is no ruling going on here.
        There is no rule that makes you use it. You are free to choose competitors or not to use any AI tool at all. They offer a service, and it is entirely up to you whether you want to use it; you are not being ruled.

        • Yes, and you are free to live in a world where AI companies accelerate global warming and generalised surveillance. If you don't like it, just make your own world without these issues. Is this really how stupid libertarian discourse is in 2025? Or is there more depth to it? Right now I only see the stupid part.

      • by i_ate_god ( 899684 ) on Monday November 17, 2025 @11:42AM (#65800417)

        Yes, but the free market naturally trends towards consolidation, and thus plutocracy.

        • by Albinoman ( 584294 ) on Monday November 17, 2025 @01:08PM (#65800613)
          Capitalism is an economic system. Plutocracy is a type of government. Voting is not based on wealth. Just because the rich hold extra sway doesn't make it a plutocracy; it just means our politicians are corrupt.
        • Free markets do not need to allow corporations. After the problems that surfaced with the Dutch East India Company, corporations were banned in the US. It wasn't until Lincoln allowed them to exist to help build up for the war that they were permitted, with limited charters. Then, through legal wrangling and the 14th Amendment, corporations extended their lifelines. Capitalism isn't the issue; corporations being seen as people is the issue.

        • AI is some sort of technology; whoever develops it better supposedly wins something. Governments have nothing to do with this; technology doesn't belong to governments.

      • Most nations operate a mixed economy, not a free market.

        So some regulation of markets exists almost everywhere. In cases where the political organs answer to the largest and most influential donors, you get a plutocracy. In cases where they answer to the people, you have a representative democracy.

        • by HiThere ( 15173 )

          I would say that every nation exists as a mixed economy, unless the government has so collapsed that it's no longer worthy of the term.

      • by PPH ( 736903 )

        Plutocracy is rule by the rich.

        The largest (wealthiest) bloc of shareholders in this country is the pension funds. Don't like the way business is being run? Complain to your union rep.

        • Comment removed based on user account deletion
          • by PPH ( 736903 )

            No, they aren't. The largest bloc of corporate shares is owned by investment funds.

            Where do you think pension funds put their money?

            • Comment removed based on user account deletion
              • by PPH ( 736903 )

                The largest single investor in US markets is Warren Buffett, at $147 billion at the end of 2024. CalPERS has $500 billion in assets (pension and health funds), and that's just one pension fund of many. Individuals hold 38% of the market equity. People like Bezos and Musk might appear to own billions individually, but much of that is in their own companies and is relatively illiquid as a result.

                If your financial planner doesn't know how the equities markets look, I'd take my money elsewhere if I were you.

                • Comment removed based on user account deletion
                  • by PPH ( 736903 )

                    That's nice. How much domestic corporate stock does CalPERS own?

                    Outright? Not very much. Most of their holdings are through mutual funds and other types of equity holding structures. Pension funds are really big in that much-hated segment of the market: private equity. Good luck even trying to track those investments down.

                    By contrast mutual funds have $24 trillion in assets

                    ... which they manage on behalf of their investors. The funds don't actually own the underlying securities.

                    Question: When your mutual fund sells 100 shares of Apple, who is responsible for paying tax on the capital gains?

                    Answer: You are. Because the fund's realized gains are passed through to its investors, who report them on their own returns.
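
                    A minimal sketch of that pass-through with made-up numbers (the share count, prices, and holder fractions below are all hypothetical):

                        # Hypothetical figures: the fund sells 100 AAPL shares
                        # bought at $150, now trading at $230.
                        shares_sold = 100
                        cost_basis = 150.00   # assumed purchase price per share
                        sale_price = 230.00   # assumed sale price per share

                        realized_gain = shares_sold * (sale_price - cost_basis)  # $8,000

                        # The gain is distributed pro rata; each holder reports their
                        # slice on their own tax return, not the fund on its.
                        holders = {"you": 0.001, "everyone_else": 0.999}  # fund fractions
                        for name, fraction in holders.items():
                            print(f"{name}: taxable distribution = ${realized_gain * fraction:,.2f}")

                    The fund is just a conduit: it realizes the gain, but the tax liability lands on its investors' returns.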

      • Capitalism and plutocracy are not mutually exclusive. The US is already a plutocracy, and perhaps always has been, on the national level. Nevertheless, new businesses are formed every day under capitalism, including mom-and-pops and outfits like Anthropic.

      • Capitalism is about the Free Market (Free as in choice) not ruling.

        False. Free Market is only one kind of Capitalism. Further, there has never actually been a free market of any significant size. It's an ideal which can only be approached, and ironically, it requires regulation to do so.

      • by ranton ( 36917 )

        What you are describing is called Plutocracy, not capitalism.

        Plutocracy is rule by the rich. Nobody wants to admit to that, so they often lie and claim to be capitalists.

        Plutocracy is a form of government and capitalism is an economic system. They describe different things and can exist together just fine.

        Capitalism is about the Free Market (Free as in choice) not ruling.

        Capitalism and a Free Market also describe different things. All you need to have capitalism is private ownership of the means of production in the economy. A free market is arguably necessary to ensure capitalism doesn't devolve into a plutocracy, but it isn't a necessary component of capitalism.

  • Flip side (Score:5, Interesting)

    by DarkOx ( 621550 ) on Monday November 17, 2025 @11:33AM (#65800377) Journal

    Would he actually be more comfortable with our Elected non-tech elites making the big decisions?

    I just don't see our legislative process or administrative state as terribly well equipped to deal with shaping AI technology.

    I think their job is to:
    1) Ensure society's existing guard rails are uniformly and fairly applied to all, independently of whether AI has anything to do with the activity or not.
    2) Respond reactively. If we identify a specific activity that, when coupled with AI, is in some way corrosive to the society we generally want to have, then enact legislation to curb it in that area. While anticipating problems and trying to avoid them is generally good practice, with something evolving this rapidly, I believe you usually create more issues if you go trying to solve problems you don't yet know you have.

    A good example is workforce reduction. A lot of people are convinced there is going to be a huge wave of job losses directly attributable to AI, but we don't really have any evidence of that yet. There are plenty of equally plausible explanations for the current unemployment rate increases. So if you go legislating a bunch of 'things' companies are not allowed to use ML/AI tech for, and it turns out the unemployment uptick isn't AI-related, all you have done is limit productivity gains and create more economic drag.

    It is important to keep in mind this is mostly just computers filling out paperwork, taking down orders, and churning out questionable-quality music and video clips. Hardly things we can't 'shut off' if need be. It isn't nearly as destructive or irreversible as all kinds of development projects we often give the private sector a long leash to run with.

    • Elected (Score:5, Insightful)

      by JBMcB ( 73720 ) on Monday November 17, 2025 @11:53AM (#65800439)
      It wouldn't be the elected elites. It would be the bureaucrats making the rules who, after tailoring regulations to favor some companies over others, would then go work for those companies.
      • by shess ( 31691 )

        I mean, the elected elites don't have time to make legislation; they are too busy going on talk shows and podcasts and being wined and dined by unelected elites.
        Evidence is that, often enough, their patrons write the legislation and regulations and send it in for a rubber stamp.

    • Would he actually be more comfortable with our Elected non-tech elites making the big decisions?

      Right. I'm far, far less comfortable with our current politicians and regulators shaping AI.

      I think their job is to:
      1) Ensure society's existing guard rails are uniformly and fairly applied to all, independently of whether AI has anything to do with the activity or not.
      2) Respond reactively.

      No doubt that's what we think their job ought to be. How they actually act is (1) get elected, (2) get re-elected, (3) provide favors to whoever helped with (1) and can ensure (2), and increasingly (4) enact policies or legislation to support their personal world view, facts and other people's opinions be damned.

  • Because AI is not very important. Large Language Models are morons, not artificially intelligent.

    No intelligent human lets an LLM do anything important beyond suggesting stuff.

    LLMs do a lot of minor tasks.

    Yes, corps could use LLMs to feed people propaganda. Guess what: they did that BEFORE LLMs, and if LLMs vanished, they would still be doing it.

    • by Viol8 ( 599362 ) on Monday November 17, 2025 @11:45AM (#65800421) Homepage

      LLMs make a lot of mistakes but the tech bros don't care -- they're using them for all sorts of things, including supposed self-driving cars. If the AI fucks up and causes issues, well, in appendix section 16, subsection A, paragraph 21 there'll be a clause explicitly exempting the AI company from any responsibility, and in jurisdictions where that disclaimer is void then what the hell, they've made billions anyway and they'll just settle out of court.

      • by swillden ( 191260 ) <shawn-ds@willden.org> on Monday November 17, 2025 @02:05PM (#65800789) Journal

        they're using them for all sorts of things, including supposed self-driving cars. If the AI fucks up and causes issues, well, in appendix section 16, subsection A, paragraph 21 there'll be a clause explicitly exempting the AI company from any responsibility

        Waymo, at least, has explicitly taken responsibility for whatever their self-driving cars do. And, honestly, it doesn't seem possible for a self-driving system's maker to avoid liability, because there's absolutely no other entity to assign it to. Tesla avoids liability (so far) by explicitly requiring human supervision. But if they ever want to claim level 4 or 5 they're going to have to take responsibility.

        in jurisdictions where that disclaimer is void then what the hell, they've made billions anyway and they'll just settle out of court

        I think such a disclaimer would be invalid in all jurisdictions, if they even tried to make it, which I don't think they'll do because it would be ridiculous. As for settling... yeah, that's what basically always happens with automobile accidents. The at-fault party (or their insurer) pays the costs of the injured party. No one even bothers going to court unless there's a dispute about which party was at fault, and one thing about self-driving systems is that they have incredibly detailed sensor data, all logged and available for review, so there really won't ever be any dispute about fault.

        • Your view is a bit naive. Google/Alphabet with its Maps app never had to take responsibility for "death by GPS", which is a thing. Tech is getting better, more human, more trustworthy. But who is taking OpenAI to court for making users commit suicide? Sure, if you take my comment literally, there will be someone suing. But they get out of it 99% of the time. And guess what, the "ChatGPT can make mistakes" disclaimer is usually enough to get the job done.
          • by HiThere ( 15173 ) <(ten.knilhtrae) (ta) (nsxihselrahc)> on Monday November 17, 2025 @03:56PM (#65801107)

            Sorry, but "death by GPS" is a label, not a reality. Someone decided to follow the instructions of the GPS. So this is not analogous to an actually self-driving car.

            Your view is a bit naive. Google/Alphabet with its Maps app never had to take responsibility for "death by GPS", which is a thing.

            Completely different situation. A human is making the decisions in that case. Google Maps even warns drivers not to blindly follow it. This is entirely different from a fully autonomous vehicle which is moving without any human direction or control.

            But who is taking OpenAI to court for making users commit suicide? Sure, if you take my comment literally, there will be someone suing. But they get out of it 99% of the time.

            Umm, none of the suits against OpenAI over suicides have been closed out; they're all still pending. It also isn't remotely the same thing. A self-driving car operating without any human control that kills someone is clearly at fault, and there is no one to shift the blame to.

      • by allo ( 1728082 )

        No tool is perfect. The fool is the person who thinks it would be perfect. And yes, there is some misleading marketing in the AI sector. Who has never had a spam mail in the inbox or a real mail in the spam folder? All tools that are "data driven" will have their weaknesses (some of which may be alleviated with more training, while others are hard to fix). The point is, for what LLMs *can* do there is no classical algorithm. Of course, some people try to let LLMs do things where classical algorithms already exist.

    • No intelligent human lets an LLM do anything important beyond suggesting stuff.

      And if intelligent humans were the only ones holding political power, managing infrastructure, litigating court cases, writing computer programs, etc., then we'd be fine. So obviously, we're not fine.

      Yes, corps could use LLMs to feed people propaganda. Guess what: they did that BEFORE LLMs, and if LLMs vanished, they would still be doing it.

      LLMs can do it faster and more effectively. Even now, in many cases they can do it more convincingly. Saying that LLMs don't increase the scope and effectiveness of propaganda is like saying that nuclear warheads don't increase the scope and effectiveness of military actions. The latter of which, by the way, are

  • Slashdot posting an article about an interview on 60 Minutes.

    It's like the human centipede.

  • It pains me a lot, but I find solace lying on my pile of cash.
  • "I'm so uncomfortable with myself".

    Elites posturing about their victimhood taken to yet another level.

    • by HiThere ( 15173 )

      The thing is, it wouldn't help things for one player to quit.

      OTOH, as someone else pointed out, the government isn't exactly trustworthy either. (I consider accepting funds from lobbyist groups to be accepting bribes, just like accepting funds from individuals.)

      On the third hand, open source approaches can't limit the use to which something is put.

      Perhaps the "corporate powers" are the least bad choice...but that sure isn't encouraging.

  • People do stuff. WTF, are we supposed to have a world-wide committee meeting every time some hacker starts a random project?

    Sam Altman can have his own "AI," with blackjack and hookers. If you don't want yours to have that, then write it differently. If his project is affecting yours, it's because he's on the sharp end, running into scaling issues and regulators first. Let him bear the brunt of that, so you don't have to.

    The only thing that can really go wrong, is if he uses his financial influence to get

    • Equating AI with Pac-Man isn't really the intellectual flex you probably think it is.

    • The only thing that can really go wrong

      That is very, very far from the only thing that can go wrong. Human extinction is within the range of possibilities.

  • by Jason Earl ( 1894 ) on Monday November 17, 2025 @12:21PM (#65800501) Homepage Journal

    Anthropic's entire pitch has always been safety. Innovation like this tends to favor a very few companies, and it leaves behind a whole pile of losers that also had to spend ridiculous amounts of capital in the hopes of catching the next wave. If you bet on the winning company you make a pile of money, if you pick one of the losers then the capital you invested evaporates. Anthropic has positioned itself as OpenAI, except with safeguards, and that could very well be the formula that wins the jackpot. Historically, litigation and government sponsorship have been instrumental in picking winners.

    However, as things currently stand, Anthropic is unlikely to win on technical merits over its competition. So Dario's entire job as a CEO is basically to get the government involved. If he can create enough doubt about the people that are currently making decisions in AI circles that the government gets involved, either directly through government investment, or indirectly through legislation, then his firm has a chance at grabbing the brass ring. That's not to say that he is wrong; he might even be sincere. It is just that it isn't surprising that his pitch is that AI has the potential to be wildly dangerous and we need to think about safety. That's essentially the only path that makes his firm a viable long term player.

    • by Zak3056 ( 69287 )

      Apparently, "safeguards" mean "don't let the AI say something that hurts feels" rather than "don't let the AI act in a manner that is dangerous and unlawful." I say this because, apparently, Anthropic's systems have been leveraged by nation state actors for hacking campaigns (though details of this are minimal and read like marketing spiel about how awesome their tools are rather than giving information on what actually happened).

    • It is just that it isn't surprising that his pitch is that AI has the potential to be wildly dangerous and we need to think about safety. That's essentially the only path that makes his firm a viable long term player.

      If you believe that AI has the potential to be wildly dangerous, that may be the only path that makes the human race a viable long term player.

      And I've yet to see any well thought-out argument showing that AI doesn't have the potential to be wildly dangerous. If anyone has one, please post it!

      The closest I've seen are:

      1. Humans are incapable of creating AGI, so the AI companies are simply going to fail.
      2. There is a hard upper limit on intelligence, and it's not far above human-level, so even when AI reaches it, it won't be able to vastly outclass us.

    • Good take. Amodei and others left OpenAI because (I imagine) Altman is intolerable, and because they know they're smarter and don't want to share revenue with a sales-pitch guy who suddenly thinks he's the AI messiah, like Jared Leto's Wallace character in Blade Runner 2049.

  • by Rei ( 128717 ) on Monday November 17, 2025 @12:35PM (#65800519) Homepage

    His surname is one transposition away from "AI Mode".

  • Release your models open source and open weight
    This tech should not be controlled by monopolists or governments
    It should be available to all

  • Anthropic CEO Dario Amodei says he's uneasy about how much power a handful of tech leaders -- including himself -- have over the future of artificial intelligence.

    It's a bit comforting that a rich tech CEO has such thoughts, but it's also very unsettling that he seems to find those thoughts rather novel and devoid of any emotional connection.

    "Gee, there's something wrong with this, and something very dangerous - let's set it loose on the world and deploy it widely while we're still experimenting with it!" said no sensible and caring human being, ever.

  • My concern is that they are rushing ahead with implementation without ensuring the technology actually works reliably. The world is their beta site, maybe even late stage alpha. The true miracle continues to be that anything works at all.

  • These questions only matter to people who believe we will find some superintelligence. Who builds your LLM does not matter that much.

    • by HiThere ( 15173 )

      It's not going to be an LLM. The LLM is just what it's going to use to talk to you. But "world models" are being built, and that is going to be the basis of real intelligence.

      • by allo ( 1728082 )

        I don't really believe in things like "superintelligence". But I believe in things like LLMs (possibly through world models) becoming clever enough to "fake" all the intelligence they need to do the job. I want tools, not slaves.

        • by HiThere ( 15173 )

          There will be tools. But there will also be the more general intelligence. One can argue about the time-line, and that's quite reasonable, but denying it requires accepting spiritualism or some such.

          For that matter, people are often used as tools. It's not an "either/or" choice.

          • by allo ( 1728082 )

            I don't claim it won't happen because of a missing soul or similar, but I think the compute/memory/sensory-input requirements are huge, and for *super*intelligence you need more or other inputs than humans have.

            Given enough compute, one could now combine human input (leading to human intelligence) and LLMs (leading to systems that can, in a limited way, deal with a lot of knowledge), but that would just give you the same thing the human/LLM combination gives you now. To be better than that, one would need, for example, to be able to get the spe

            • by HiThere ( 15173 )

              I think your model is only one of several alternatives. I don't foresee a unitary intelligence as likely, but an executive function delegating different tasks to different experts depending on context. And it can't be limited to language, it needs to interact more directly with the physical world. But we're already taking steps in that direction.

              Yes, it's difficult. Perhaps it will take a while. But there's absolutely no reason to expect human intelligence to remain the top measure. (Even now there are

              • by allo ( 1728082 )

                Yes, and for every "gotcha!" where an LLM is worse than an amateur in a field, you find ten fields in which the amateur is an absolute beginner while the LLM is still at amateur level. People use these things not because they are absolute experts in one field, but because they are at an intermediate level in many fields.

                If you have a look at the questions in some of the standard benchmarks, you quickly find some you can't answer. So if an LLM gets 60% on a benchmark where I get 90%
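
                A toy illustration of that breadth-versus-depth arithmetic (all the scores below are invented for the example):

                    # Invented 0-100 scores across a handful of fields.
                    amateur = {"own_field": 90, "law": 10, "medicine": 5,
                               "chemistry": 10, "finance": 15}
                    llm     = {"own_field": 60, "law": 55, "medicine": 50,
                               "chemistry": 55, "finance": 60}

                    def mean(scores):
                        return sum(scores.values()) / len(scores)

                    # The amateur wins in their own field; the LLM wins on breadth.
                    print(f"amateur mean: {mean(amateur):.0f}, llm mean: {mean(llm):.0f}")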

  • "If we look at entry-level consultants, lawyers, financial professionals â" you know, many of the white-collar industries â" a lot of what they do, AI models are already quite good at," he told Anderson. "Without intervention, it's hard to imagine that there won't be some significant job impact there.""

    AI f!s up lawyers' jobs. They won't be fully replacing them anytime soon. Hell, REAL lawyers f! up with AI.

    https://www.fox10tv.com/2025/1... [fox10tv.com]
    https://calmatters.org/economy... [calmatters.org]
    https://www.msba.o [msba.org]
  • All the noise about AI alignment and disruption is a red herring. Alignment is easy: you have lots of independent goals and do your best to trade off among them, and you have some sort of constitution or test suite to keep you from veering off course. Not being bad is more important than being good. What isn't clear is that alignment will be used, even though it is easy. Right now things are roughly set up in the USA to benefit the people, but if something or someone gets enough power, they can stop caring what the people want.
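
    One minimal way to read that recipe, with entirely hypothetical goals and checks: score candidate actions against several independent objectives, but first discard anything that fails a hard "constitutional" test, so that not being bad takes priority over being good.

        # Hedged sketch of the "many goals plus a constitution" recipe above.
        # Every goal function, check, and action string is an invented placeholder.
        from typing import Callable

        # Independent objectives, each scoring an action in [0, 1].
        goals: dict[str, Callable[[str], float]] = {
            "helpfulness": lambda a: 0.9 if "answer" in a else 0.2,
            "efficiency": lambda a: 0.8 if "short" in a else 0.5,
        }

        # Hard constraints: failing any one rejects the action outright,
        # so "not being bad" dominates "being good".
        constitution = [
            lambda a: "deceive" not in a,
            lambda a: "harm" not in a,
        ]

        def choose(actions: list[str]) -> str | None:
            permitted = [a for a in actions if all(check(a) for check in constitution)]
            if not permitted:
                return None  # refuse rather than violate a constraint
            # Trade off among the independent goals with a simple unweighted sum.
            return max(permitted, key=lambda a: sum(g(a) for g in goals.values()))

        print(choose(["short answer", "deceive user with short answer", "long essay"]))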
