Programming

Anthropic's CEO Says AI and Software Engineers Are in 'Centaur Phase' - But It Won't Last Long (businessinsider.com) 147

Human software engineers and AI are currently in a "centaur phase" -- a reference to the mythical half-human, half-horse creature, where the combination outperforms either working alone -- but the window may be "very brief," Anthropic CEO Dario Amodei said on a podcast. He drew on chess as precedent: 15 to 20 years ago, a human checking AI's moves could beat a standalone AI or human, but machines have since surpassed that arrangement entirely.

Amodei said the same transition would play out in software engineering, and warned that entry-level white-collar disruption is "happening over low single-digit numbers of years."
  • I wish that... (Score:5, Insightful)

    by MpVpRb ( 1423381 ) on Monday February 16, 2026 @05:10PM (#65992884)

    ...these guys would stop trying to predict an increasingly unpredictable future and concentrate on accurately reporting real progress

    • ...these guys would stop trying to predict an increasingly unpredictable future and concentrate on accurately reporting real progress

      The CEOs? Their jobs are mostly to behave as hype men for the new hotness, which may or may not ever come, but so long as it keeps sucking up dollars and mindshare, they're fulfilling their purpose.

      • Or to put it another way - Selling their products/services is a key part of their job. Another is selling their company to investors.
    • Re:I wish that... (Score:4, Insightful)

      by 93 Escort Wagon ( 326346 ) on Monday February 16, 2026 @05:52PM (#65992970)

      Like other "AI" company CEOs, Anthropic's CEO sees the purpose of his role to do everything he can to sell the company for many billions, as quickly as possible, so he can join the "absurdly rich tech bro" club. That's it, there is no secondary purpose in his mind.

      That's not what he says if people ask him about it, of course.

      • Re: (Score:3, Insightful)

        Actually, he's quite different from other AI CEOs in some important ways. For one, he's not mixing advertising and AI, and for another he refuses to let the military use Claude for unethical (in his view) purposes.

        That certainly does not make him a saint or anything, but let's not pretend he's the exact same as (say) the OpenAI CEO.

        • by twms2h ( 473383 )

          This!

        • by gtall ( 79522 )

          More accurately, unless things have changed this morning, he is in negotiations with DoD on limits for Claude. From memory, he wants no use on U.S. citizens and no use for kinetic weapons. Naturally our Sec. of Movie War is throwing a tantrum. It remains to be seen whether Anthropic will choose their ideals over DoD money and lack of restrictions.

      • ...Anthropic's CEO sees the purpose of his role to do everything he can to sell the company for many billions, as quickly as possible...

        I really doubt this. Anthropic is not some startup. They had $14 billion in revenue in 2025, with worse models than they have now. They're projected to have $70 billion in revenue by 2028 and that seems entirely reasonable for the path they're on. The kind of growth they are having would make them one of the largest tech companies in the world in 5 years. Why would they sel

    • by gweihir ( 88907 )

      on accurately reporting real progress

      Well, they do not want to trigger the crash, do they?

    • by allo ( 1728082 )

      The progress is reported. You should just read the papers of scientists instead of the press releases of companies.
      Summary of progress: Currently things are still improving rapidly. Outlook: Nobody knows, but some new breakthrough would be nice.

  • by houstonbofh ( 602064 ) on Monday February 16, 2026 @05:14PM (#65992890)
    It's not like he has a vested interest or anything. And so many other technologies that were touted to replace humans have always completely done so in the past. I mean, no one works at car washes or fast food anymore. They are completely human free.
  • by karmawarrior ( 311177 ) on Monday February 16, 2026 @05:15PM (#65992892) Journal

    Anthropic's CEO then went home to his newborn and his one-year-old son, and announced that he will have 1024 children in the next ten years.

  • by greytree ( 7124971 ) on Monday February 16, 2026 @05:17PM (#65992896)
    When an AI can do everything a human software engineer can do, it will be able to do anything a human can do.

    Doing what a human software engineer can do requires knowledge of real life, the ability to learn new things like a human does, the ability to see the big picture based on living a real life.

    Someday, an AI will do that. But LLMs won't.
    • When an AI can do everything a human software engineer can do, it will be able to do anything a human can do.

      Doing what a human software engineer can do requires knowledge of real life, the ability to learn new things like a human does, the ability to see the big picture based on living a real life.

      Wrong. The best code and progging AI will never dance tango as well as I do. It doesn't have a body. It also won't have my priorities and motivations.

      _But_ it is very close to being better at progging than

      • by gtall ( 79522 )

        This may be a comment ancillary to what you wrote, but I wonder if similar issues raised by https://www.theregister.com/20... [theregister.com] occur for programming. They use the term "semantic ablation" to help explain why AI prose is so vapid. I think of semantic ablation as the evil twin of hallucination; the latter introduces new "information" while the former subtracts "information". The major points from that article (verbatim) are

        Stage 1: Metaphoric cleansing. The AI identifies unconventional metaphors or visceral ima

  • The current need for humans in the loop is much less about performance and much more about accuracy. Amodei must believe (in public anyway) that the hallucination problem is nearly solved.
    • I'm not sure why people keep insisting on believing what these guys say has any basis in fact - or assuming that the tech bro isn't lying through his teeth.

      I recall a story on this site, probably two years ago, where an AI company CEO stated categorically that the "hallucination problem" was solved.

      • by 93 Escort Wagon ( 326346 ) on Monday February 16, 2026 @07:14PM (#65993156)

        Just to follow up... I was referring to this Slashdot story from October 2024 [slashdot.org].

        "The most significant change we're seeing over the past 18 to 20 months is the accuracy of those answers from the large language models," gushed [Baidu CEO Robin Li] at last week's Harvard Business Review Future of Business Conference. "I think over the past 18 months, that problem has pretty much been solved -- meaning when you talk to a chatbot, a frontier model-based chatbot, you can basically trust the answer," he added.

    • I wish it were solved. I wouldn't need to constantly hold its hand.

    • LLMs literally hallucinate *everything*--that's how they do what they do. It's just that the percentage of hallucinations that are right is increasing. But they won't ever stop, because it's their nature to hallucinate. Stopping hallucinating would be like a CPU stopping processing.
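
      A toy sketch of that point (the token scores below are made-up numbers, not any real model's): every token, right or wrong, comes out of the same score-softmax-sample step, so there is no separate "hallucination mode" to switch off.

      import math, random

      def sample_next(scores):
          # softmax over the (made-up) token scores, then sample one token
          exps = {tok: math.exp(s) for tok, s in scores.items()}
          total = sum(exps.values())
          r, acc = random.random(), 0.0
          for tok, e in exps.items():
              acc += e / total
              if r <= acc:
                  return tok
          return tok  # floating-point fallback

      # hypothetical next-token scores after "import "
      scores = {"numpy": 2.3, "nunpy": 0.4, "pandas": 1.1}
      print(sample_next(scores))  # usually "numpy", occasionally the typo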

    • by gweihir ( 88907 )

      LLMs will always hallucinate. That comes from the math they are using and this problem cannot be fixed.

  • by ebunga ( 95613 ) on Monday February 16, 2026 @05:19PM (#65992900)

    They said AI was replacing developers last year. Extrapolating, by this time next year AI will be completely gone and it will take two humans to do the work of one human.

    • AI did replace them to a frightening degree in 2025, have you not been paying attention to the hiring & layoff numbers?
      • by ebunga ( 95613 )

        Most of the "because AI" headcount reductions were complete bullshit, or a "realignment" of the companies towards the AI future. Assuming it ever actually happens. I mean, they spent billions and promised hundreds of billions more.

        Now, I certainly wouldn't want to work at a company that's making a shitty SaaS replacement for a dBase III application or excel spreadsheets or something. That sort of thing is doomed no matter how you look at it. If AI does even 5% of what they're replacing, you just ask Claude

    • by gweihir ( 88907 )

      They also said that 2 years ago. Instead, what is actually happening is that the claims of extreme productivity gains have mostly vanished and reports of real failures are increasing.

  • ... remember the contingent of naysayers who said the internet was a passing fad?

    For those of us who were lucky enough to work through that period and not have our industry destroyed, it probably seemed like a dream come true. Slashdot is full of such people.

    But the outcome for most others was terrible. Now it's coming for us.
    • ... remember the contingent of naysayers who said the internet was a passing fad?

      I don't remember that. Was that a thing?

      • by ebunga ( 95613 )

        Yes, and in some ways they were right. What they got wrong was that its replacement was also the internet rather than everyone going back to... BBSs I guess.

      • I don't remember that. Was that a thing?

        Yes. Here's a sample. [newstatesman.com] Lots more where that came from.

        • Fantastic article! Clifford Stoll (first person quoted) has a hilarious blurb in his Wikipedia article:

          When the article resurfaced on Boing Boing in 2010, Stoll left a self-deprecating comment: "Of my many mistakes, flubs, and howlers, few have been as public as my 1995 howler ... Now, whenever I think I know what's happening, I temper my thoughts: Might be wrong, Cliff ..."[13]

          I was a teenager in the 1990s. My memories of that era of the Internet, and of the adults I associated with, are of pretty much unbridled enthusiasm! I was running DOS and OS/2 in the early 90s, and I got on several BBSes that had mail relays and a dialup connection to a local university that got me directly on the Internet through a modem terminal (Usenet, lynx, etc.). Great time to be alive.

          This article r

        • by EvilSS ( 557649 )

          ... And the freedom of digital networks will make government more democratic.

          Baloney. Do our computer pundits lack all common sense?

          Well he was right about that one, though probably not for the reasons he thought.

      • by allo ( 1728082 )

        Even Microsoft thought so.

    • by StormReaver ( 59959 ) on Monday February 16, 2026 @07:40PM (#65993198)

      ...remember the contingent of naysayers who said the internet was a passing fad?

      I remember both of them, and I remember thinking they were full of shit. Aside from Bill Gates and some other forgettable nobody, no one thought the Internet was a passing fad. However, just about everyone on Slashdot thought that most of the companies in the dot com boom were doomed to failure, even (especially?) as paid pundits talked up the brilliance of the companies' CEOs.

      I see history repeating. The underlying technology has some cool uses (though not worth the catastrophic cost of building the models), but it's pets.com all over again.

    • by gweihir ( 88907 )

      What a completely insightless comparison.

  • by Lavandera ( 7308312 ) on Monday February 16, 2026 @05:41PM (#65992942)

    It did not... ?

    I wonder why... they should know best how to do it and yet they have not...

    seriously... For me so far AI is just a better interface to web/stack overflow... and still generating lots of crap...

    Sure it saves time on some simple things like unit tests generation but even there it has some issues...

    • by CAIMLAS ( 41445 )

      If that's all it is to you, you're either using inferior tooling or don't know what you're doing.

      Most developers can't get out of their own way, with regard to using LLMs. They think they know best and try to steer the AI like a child trying to guide ants with a stick, instead of a trail of honey. They're usually too stuck in the "what" instead of the big picture "how" and end up with inferior results as a consequence.

      • I am not stuck at "what". Figuring out "what" is actually the majority of my work. Coding is simple once I know "what" should be done.

        However, from what I've tried, AI even has problems with "how", producing long, overly complex, hard-to-understand code, and unit tests asserting that 1+2=4.

        AI tools work great when doing things for which lots of good examples are on the internet.

        For me it really feels like working with a Junior Dev.

    • "It did not... ?

      I wonder why... they should know best how to do it and yet they have not..."

      Because time proceeds at a finite pace.

    • I was at a Q&A with an Anthropic engineer and he said they have all ditched their IDEs and just use Claude Code now. That alone seems like a radical step (and possibly naive if true) but there is a direction this stuff is going in. The leading sign won't be that they ditch software engineers, it'll be that they stop hiring low-level engineers.
  • who are Reverse Centaurs. Definition: A reverse centaur is a machine head on a human body, a person who is serving as a squishy meat appendage for an uncaring machine.

    From: https://doctorow.medium.com/https-pluralistic-net-2025-12-05-pop-that-bubble-u-washington-8b6b75abc28e

    Now, because the AI is a statistical inference engine, because all it can do is predict what word will come next based on all the words that have been typed in the past, it will “hallucinate” a library called lib.pdf.text.par

    • by Moridineas ( 213502 ) on Monday February 16, 2026 @06:25PM (#65993028) Journal

      I read about 2/3 of that before I got intensely bored. I have yet to find Cory Doctorow ever say something interesting--it's like his superpower is talking about technology to people who don't know anything, and making them feel good about reading it. I don't know if he's ever written for Wired, but I get the same feeling when I read his tripe that I get when I read Wired. Way back in high school one of my best friends was obsessed with Wired and later became a columnist for them for a couple of years. He was by far the least technical of my friend circle. He was the only one of us who couldn't code, etc. He is a nice writer, but his columns were just geekporn pop culture tripe.

      Now, because the AI is a statistical inference engine, because all it can do is predict what word will come next based on all the words that have been typed in the past

      This is just meaningless. Yes, LLMs (he also fails to distinguish between LLMs and anything else) are built on statistical modeling, but the important part is that the models produce unexpected emergent behaviors or results.

      I don't even disagree with some of the issues that he identifies (though I've personally never seen an LLM hallucinate a library), but when he writes with geekcoded language like "AI bros" and repeatedly makes even stupider statements like "I just don't think AI is that important of a technology," I honestly can't tell if he knows anything or not.

      This is shortly before I stopped reading:

      Let me explain: on average, illustrators don't make any money. They are already one of the most immiserated, precarized groups of workers out there. They suffer from a pathology called "vocational awe." That's a term coined by the librarian Fobazi Ettarh, and it refers to workers who are vulnerable to workplace exploitation because they actually care about their jobs – nurses, librarians, teachers, and artists.

      This kind of fetishization of certain careers (which also explicitly gives jobs that today tend to be disproportionately female greater moral value than that of a bricklayer or lineman or whatnot) perplexes me. I know more than a few nurses and a lot of teachers (and librarians). They're absolutely like everyone else. Some of them care and some of them don't give a fuck. I've seen first grade teachers who just don't give a fig about their students and first grade teachers who got burned out because they cared so much. The nurses part is particularly funny to me. Artists--well, most of the artists and musicians who are professionals ended up there because they love their art... not their careers. People are people.

      • Your data on teachers is a bit off, at least in the US. Look up median engineering vs K-12 teacher salaries. Not mean. Median. You have to account for the fact that the teachers get around 4 solid months of vacation, good benefits, high job security and ironclad retirement. Accounting for that stuff, their salaries lag behind engineers but only by a little bit.

        Everyone on slashdot knows the stories of fresh cs grads being offered insta-millionaire amounts of money. Those are extreme outliers.

        Teachers
        • I didn't say anything about teacher salaries (though I do agree with you that the "underpaid teacher" thing is a BIT overblown). Where I live, public school teachers make between ~$50k (brand new, right out of college) and $85k (masters/doctorate, 25+ years experience). Not a bad salary, but not good either. If you are making ~70k and you want to have a house and a kid (or kids), you will really need a second salary.

          Teaching IS hard work, but so are many jobs.

  • Because I've been using LLMs for over a year now in my dev work. There is just no way at all that any of this can replace what I do. Like, every time I switch context I have to essentially re-contextualize the LLM for that new project in order to get any use out of it. That takes a lot of time and it can be frustrating. If you don't do that then the LLM is going to be giving you very bad code advice.

    • Start using instructions files everywhere. The AI is pretty good at generating them. But it's an evolving problem. Every day I find a new thing I need to add to my instructions file because the AI made a bad assumption.
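
      For what it's worth, a hypothetical instructions file (the file name and every rule below are invented for illustration, not a standard; adjust to your project):

      # CLAUDE.md -- conventions for the coding assistant (example only)
      - Python 3.11; format with black, lint with ruff.
      - Do not invent external packages; if unsure a library exists, say so.
      - Every new module gets a unit test under tests/ mirroring its path.
      - All database access goes through the repository layer, never raw SQL in views.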

    • by CAIMLAS ( 41445 )

      I keep seeing devs complain about how bad LLMs are, then consistently describe some stupid process they undertake that is counterproductive and time consuming as evidence of why the models and tools are bad.

      Maybe it's a user problem?

      High schoolers are shipping fully featured projects. People who've never coded on a large code base are building enterprise projects with good architecture.

      • High schoolers are shipping fully featured projects. People who've never coded on a large code base are building enterprise projects with good architecture.

        No they aren't

      • by gweihir ( 88907 )

        Maybe it's a user problem?

        It is. As in the users are so bad at coding that an LLM can appear to be better. Competent coders mostly do without and do better that way.

        High schoolers are shipping fully featured projects. People who've never coded on a large code base are building enterprise projects with good architecture.

        No. That is not happening. It is a pure fabrication, a direct lie.

        • by CAIMLAS ( 41445 )

          Meanwhile, there are quite a few verifiably good coders - e.g. Eric S. Raymond, I'm sure you know of him - who are doing quite well with 'vibe coding' and producing quality code.

          As someone who's spent most of his career reviewing others' code related to performance and race conditions in C and Java, the biggest issue I've found with code produced by frontier models is that you still need to know what you're doing to tell the model what to do. The code itself is as good as any I've reviewed, and often better.

    • by gweihir ( 88907 )

      Well, and then there is this: https://mikelovesrobots.substa... [substack.com]
      It seems use of LLMs significantly increases what a developer must be able to do and the efficiency gains may not even be there, after all of that.

  • ...story at 11.

    Seriously though, we seem to be flatlining with what agentic AI can handle. The new models are...fine. They are marginally better than the last, but replacement level? No. Not in software dev, not in legal, not in medical, not in anything. Useful yes, replacement no.

  • They mean they have a horse's ass?

  • Seldom does Claude ask clarifying questions of an ambiguous request. Sure, you should try to be specific, but sometimes statements can be interpreted differently than intended. In those cases, Claude just merrily chugs along making vast code changes on false assumptions.

  • https://www.theguardian.com/us... [theguardian.com] In automation theory, a “centaur” is a person who is assisted by a machine. Driving a car makes you a centaur, and so does using autocomplete. A reverse centaur is a machine head on a human body, a person who is serving as a squishy meat appendage for an uncaring machine.
  • > Human software engineers and AI are currently in a "centaur phase" -- a reference to the mythical half-human, half-horse creature, where the combination outperforms either working alone -- but the window may be "very brief,"

    A more apt metaphor is the Sirens from Greek mythology. Once we get all our information through AI, we'll be forever spellbound:

    "Come hither, O weary toiler, and lay thy burdens upon us!
    We shall plan thy week with unerring wisdom,
    draft thy words in flawless array,
    summa
  • He drew on chess as precedent: 15 to 20 years ago, a human checking AI's moves could beat a standalone AI or human, but machines have since surpassed that arrangement entirely.

    Chess programs got better because the machines running chess programs got Moore's Law better and Dennard-scaling faster, so a machine costing the same could search deeper (rough arithmetic on that is sketched below).

    Machines running LLMs are not getting better at the same double-every-18-months rate that we had until about 2012. Moore's Law for storage and gates is not quite dead
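
    A back-of-the-envelope sketch of the search-depth point, assuming an effective branching factor of roughly 6 under alpha-beta pruning (that figure is an assumption for illustration, not a measured constant): extra compute translates almost directly into deeper search.

    import math

    def extra_plies(speedup, effective_branching=6.0):
        # nodes searched grow roughly as branching**depth, so a speedup of S
        # buys about log(S) / log(branching) additional plies at fixed time
        return math.log(speedup) / math.log(effective_branching)

    for years in (5, 10, 15):
        speedup = 2 ** (years / 1.5)  # doubling every 18 months
        print(f"{years} years of scaling -> ~{extra_plies(speedup):.1f} extra plies")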

  • Horses can run faster than humans, but a human and a horse together are better at going where human wants to go than either of them are by themselves.

  • I really don't see how prompting can produce a decent and original game without you spending hours prompting it. I mean just to get levels and gameplay a certain way would take a week or two of prompting nightmares. It's a lot easier to learn to code and use a game engine.

  • At this time many (but very much not all) coders are in delusion and denial. Yes, LLMs may occasionally help with really simple stuff, but, seriously, that is all stuff code generators, libraries, etc. could also have done and done without the risks. Any advanced stuff is out of reach for LLMs and will remain so. And LLMs will quite likely go away or be reduced massively anyways, because there is a very strong mismatch between usefulness and cost to operate and maintain them.

    The bottom line is that the only

    • > But amateur level will cut it less and less.

      That's an optimistic outlook. Here's another one: the software industry will continue its rapid decline in output quality with ever greater speed, reaching new lows never before thought possible. Nobody wants to pay for quality or reliability anymore because everybody else is getting away with selling garbage as well.
  • He is just wrong about who will be surpassed. Considering that AI is based on human output (until some time ago), and that most open source repos now consist of low quality AI shit [github.com], AI will be fed crappy code generated by AI. So, basically, AI coding is at its peak right now.
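
    A crude toy illustration of that feedback loop, with a Gaussian repeatedly refit to its own samples standing in for a model trained on its own output (a sketch of the intuition only, not a claim about real training dynamics):

    import random, statistics

    mean, std = 0.0, 1.0  # the "original human distribution"
    for gen in range(20):
        # each generation is trained only on the previous generation's output
        samples = [random.gauss(mean, std) for _ in range(10)]
        mean, std = statistics.mean(samples), statistics.stdev(samples)
        print(f"generation {gen:2d}: spread = {std:.3f}")
    # over many generations the spread tends to drift and shrink,
    # losing the variety of the original distribution
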
  • In other news, bald men claim to be more virile and men with big feet claim to be hung like a horse.

  • Unbelievable how the news sucks up every little drop these hyper-greedy-for-VC pied pipers spit out.
    Just imagine that: the guy working on his doomsday device keeps telling us how fearsome it is and his doomsday device is huge, in fact people have been telling him his doomsday device is the biggest and most fearsome and will replace us all!! In fact, it will replace everyone twice and the much hated coders they depend on even three times over!!! p.s.: send more monies plxplx

  • by TGK ( 262438 ) on Tuesday February 17, 2026 @11:29AM (#65994200) Homepage Journal

    Everyone is so excited about not having to pay software engineers to write code that they've forgotten what engineers actually do. It's less common in the software world but go find a civil engineer or an electrical engineer or an aerospace engineer and follow them around for a week.

    At some point, there's going to be a document in front of them laying out how something is going to be built and they're going to be asked to approve it. And when they do that they're taking responsibility for the design. If it falls down, if it catches on fire, or if it crashes into the mountains and kills people, they're the name on the form saying that won't happen. They're responsible.

    Claude 4.5 Opus is very impressive, but if it writes a software application that kills people it can't take responsibility. It can't be punished. It can't even really be sued.

    I just don't see how we, as a society, can trust fundamentally unaccountable entities to build systems that can do real harm if they go wrong. I suppose the alternative is that Anthropic accepts full legal liability for everything its models do. Their unwillingness to make that move tells you all you probably need to know about their own internal confidence in those models.
