Programming

Claude Code is the Inflection Point (semianalysis.com)

About 4% of all public commits on GitHub are now being authored by Anthropic's Claude Code, a terminal-native AI coding agent that has quickly become the centerpiece of a broader argument that software engineering is being fundamentally reshaped by AI.

SemiAnalysis, a semiconductor and AI research firm, published a report on Friday projecting that figure will climb past 20% by the end of 2026. Claude Code is a command-line tool that reads codebases, plans multi-step tasks and executes them autonomously. Anthropic's quarterly revenue additions have overtaken OpenAI's, according to SemiAnalysis's internal economic model, and the firm believes Anthropic's growth is now constrained primarily by available compute.

Accenture has signed on to train 30,000 professionals on Claude, the largest enterprise deployment so far, targeting financial services, life sciences, healthcare and the public sector. On January 12, Anthropic launched Cowork, a desktop-oriented extension of the same agent architecture -- four engineers built it in 10 days, and most of the code was written by Claude Code itself.
This discussion has been archived. No new comments can be posted.
  • by Mr. Dollar Ton ( 5495648 ) on Saturday February 07, 2026 @08:22AM (#65974366)

    in "AI" land if someone's paying for these ridiculous advertisements.

    • by burtosis ( 1124179 ) on Saturday February 07, 2026 @09:51AM (#65974462)

      in "AI" land if someone's paying for these ridiculous advertisements.

      Yep, it’s like saying all products have been designed by AI for 50 years because humans have been using CAD models, CNC, and database-like inventories. Without humans in the loop none of them would be useful at all, just like coding with any “AI” agent so far through a single prompt. The bubble's surface tension is holding in a critical amount of hot air, and the upward pressure is eroding and evaporating the surface even faster, which is why news outlets are reporting on the AI stock sell-off as fears over AI replacing all coding tasks just because someone has a need they can vaguely describe. It’s a multiplier of human productivity, not the dream of end-stage capitalists who want to replace and dispose of the working class.

      • by BitCortex ( 8543853 ) on Saturday February 07, 2026 @12:51PM (#65974694)

        Yep, it’s like saying all products have been designed by AI for 50 years because humans have been using CAD models, CNC, and database-like inventories. Without humans in the loop none of them would be useful at all, just like coding with any “AI” agent so far through a single prompt.

        Exactly, and it goes back much farther. The first assemblers were seen as the end of programming, and then compilers, structured languages, 4GLs, expert systems, CASE tools, model-driven approaches, low-code platforms, visual app builders, etc. People have been predicting the end of programming since literally the beginning of programming.

        It’s a multiplier of human productivity, not the dream of end-stage capitalists who want to replace and dispose of the working class.

        Dammit, you get it!

      • by gweihir ( 88907 ) on Saturday February 07, 2026 @05:25PM (#65975032)

        It’s a multiplier of human productivity, not the dream of end-stage capitalists who want to replace and dispose of the working class.

        From current evidence, it is not even that. It seems to waste time while producing worse results: https://mikelovesrobots.substa... [substack.com]

        I predict that all AI code will really do is create a mountain of technical debt that will strangle quite a bit of the industry that fell for the scam.

    • by caseih ( 160668 )

      Have you actually used Claude Code?

      • Re: (Score:3, Insightful)

        Yes, it is not really much different from any other garbage generator out there.

    • by gweihir ( 88907 )

      Indeed. It is funny to see how the promises keep getting smaller and smaller and are still only for the future. If 10% of the empty promises (typically called "predictions") made about LLMs in the past had come true when they were promised to, we would all be out of jobs by now and all software would be written by AI. Instead, the evidence that AI code is a serious problem and a severe liability is slowly mounting.

  • by simlox ( 6576120 ) on Saturday February 07, 2026 @09:05AM (#65974404)
    And make inexperienced ones produce more crap. I tried it a little and it could really speed things up, but it is like outsourcing to a junior developer, except it is much faster and cheaper. In general it is good at scraping examples, producing template code, and fixing some bugs, but it makes quite a few mistakes.
    • but it is like outsourcing to a junior developer, except it is much faster and cheaper

      When people say, "I outsourced to a junior developer" it's a strong sign they are talking BS.

      No one ever talked about outsourcing to a junior developer before AI code was a thing.

    • by Carcass666 ( 539381 ) on Saturday February 07, 2026 @10:25AM (#65974492)

      And make inexperienced ones produce more crap. I tried it a little and it could really speed things up, but it is like outsourcing to a junior developer, except it is much faster and cheaper. In general it is good at scraping examples, producing template code, and fixing some bugs, but it makes quite a few mistakes.

      Agreed. There are some things I find pretty annoying about it that you have to tell it explicitly not to do. For example, pulling in really old versions of npm or Python packages. It has a propensity not to know when it is correct or incorrect, or perhaps it is unable to share its level of uncertainty. Most of these things can be mitigated by updating and refining Claude's prompting.
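      One way to encode that "tell it explicitly not to" guidance is project instructions: Claude Code reads a CLAUDE.md file at the repository root. The wording below is just an illustrative sketch of such guardrails, not official syntax:

```markdown
# CLAUDE.md -- project guardrails (illustrative phrasing)

## Dependencies
- Never add or pin a package version from memory; check what is actually
  installed first (`npm ls <pkg>` or `pip show <pkg>`).
- Adding a new dependency requires explicit approval; prefer the standard
  library where practical.

## Uncertainty
- If unsure whether an API exists in the pinned version, say so and propose
  a verification step instead of guessing.
```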

      I think there are some things those of us clutching our pearls at the thought of all of the AI slop ought to keep in mind:

      • We are not all writing air traffic control systems or medical device firmware. There is an awful lot of "enterprise software" that is not much more than database CRUD and visualization. People are not going to die if that software is not using the most efficient data types, etc.
      • Will somebody five years from now pick up your AI-generated software and not know what to do with it? Possibly, but mostly only if you are not taking the time to generate unit and integration tests as well as documentation, and there is now basically no excuse to skip either.
      • Senior developers should be spending more of their time thinking about the architecture and use cases of their applications, and less time thinking about whether HashMap or FxHashMap is more efficient in a given scenario.
      • If we are being honest, there is already a bunch of slop in our industry. Junior (or just bad) devs copying/pasting from StackOverflow... Shortcuts taken during unreasonable deadlines... Stuff we just aren't told or don't know... Software development has been in a race to the bottom for a while now.

      Most of the furniture in my home is not hand-made by Amish carpenters; it is machine-milled and partially assembled by cheap labor. It is not as good as hand-crafted furniture and will not last generations, and that is okay. For better and for worse, a lot of SMB executives look at software the same way (at least until it doesn't work): they want "good enough" software that works until the next merger or acquisition, so they can cash out and go on to the next thing. They are not interested in paying for software built in hand-written assembler (SpinRite -- we miss ya').

      The good news is that there are things we can do, and if we get good at them, AI can do the mundane bits and software engineering will still be a thing. We get better at specification- and test-driven development. We review the hell out of the code AI generates and make sure our linting and benchmarking tools are up to snuff. We get really good at authoring prompts that keep AI tools within the guardrails. And yes, we keep AI away from the really critical stuff, at least for now...

      • by Somervillain ( 4719341 ) on Saturday February 07, 2026 @02:25PM (#65974788)

        Most of the furniture in my home is not hand-made by Amish carpenters; it is machine-milled and partially assembled by cheap labor. It is not as good as hand-crafted furniture and will not last generations, and that is okay.

        Carpenters build houses. Woodworkers make furniture. I make heirloom furniture as a hobby. The machine-milled stuff is superior; you can make far superior furniture with technology and tools. Your furniture is lower quality for three reasons:

        1. Materials: your furniture is made with particle board; my pieces are made with hardwood, carefully chosen for grain and lack of defects.
        2. Assembly: your furniture has to be shipped long distances and has to be moron-proof; my pieces have only been moved from my basement to the room, so I can use joinery and glue and expert technique.
        3. Finish: I lovingly applied finish to every part after assembly and corrected mistakes, and I add an additional coat or two for durability because my children are monsters. Your furniture had a minimal finish spray applied before packaging. Commercial furniture has no incentive to give you a good finish; it just has to look good for 30 days. If it looks like shit after a year, that just increases the chance you'll buy another one soon.

        We are not all writing air traffic control systems or medical device firmware. There is an awful lot of "enterprise software" that is not much more than database CRUD and visualization. People are not going to die if that software is not using the most efficient data types, etc.

        NO! "Close" is noticeably inferior to good.

        1. You like data breaches? No person may die, but your business may if it has enough lawsuits from data theft from your vibe-coded garbage.

        2. You like high cloud spend and global warming? Keep writing inefficient vibe-coded slop in Python with 40,000 frameworks that add no value. Just keep spinning up new instances so you don't have to learn Java or Rust. It won't kill anyone as fast as a medical device or air traffic controller mistake...but you're harming the planet with your laziness.

        3. You like inefficiency? If you take 100,000 lines to do something you could have done in 500, it is very hard to maintain, increases the likelihood of bugs, and is a perpetual cost carried by your employer. It's expensive. That money can be wasted bogging down 2 or 3 engineers, or you could hire 6 delivery people... or upgrade tooling... or invest the money in a useful manner that employs more people and benefits society.

        Look, if you don't take pride in your work, go sell real estate or something like that. You have no business being a programmer. I use Claude daily at work. This week, it actually helped me and only catastrophically fucked up 2x... so maybe it saved me a tiny amount of time after I fixed its mistakes? AI is overhyped, but generally neutral... neither bad nor good. However, shitty work? That's a fucking cancer and we need some serious chemotherapy to flush it out of the industry. Just because you're doing enterprise development doesn't mean it's OK to suck at your job and suck at life.

        • by Ksevio ( 865461 )

          Good point! Developers never created insecure or inefficient code before AI so this could be a major problem

          • Well, just to tack on: I've seen bad developers create mountains of bad code with these tools. Without these tools, they would be stuck writing it by hand and it would at least be shorter bad code.
        • My pieces are made with hardwood, carefully chosen for grain and lack of defects.

          I'm going to say that hardwood is not always better, since it tends to be quite annoyingly heavy.

          • ....but it's a back-killer to move. My kids and I literally jump on it all the time... together... it doesn't budge. I use it as a step stool to change lights, but the solid oak fucker is HEAVY. Definitely not for everyone, but I can confidently say I'll never need to buy a new one again. I built it before kids. My 1 yo banged a tambourine on it and put deep gashes in the top; once they were 6 and less abusive, I painfully carried it back to the basement, sanded it down, put a new coat of finish on,
          • FYI: forgot to mention... the Ikea/Target/Walmart wood actually weighs more than nearly all hardwood. Microscopically, wood looks like a bunch of straws; particle board is wood chips with resin, and resin always weighs more than wood.
            • In any case, I agree with your point that craftsmanship matters, and that it has been lost because people are attracted to programming for the money, not because they enjoy it. (Of course, someone who wants money can be a skilled programmer too, but often they aren't.)
        • I intended no offense at mixing up carpentry and woodworking; you have my deepest apologies.

          And yeah, there are plenty of Python and NodeJS apps that pull in umpteen unnecessary packages to do basic things and suck up power/CPU/etc. And vibe coding is going to make this much worse. A plague on it all. But if you are saying that the same effort should go into something that directly affects the General Ledger versus something that somebody cobbles together to keep track of used toner cartridges, I'm not t

    • by gweihir ( 88907 )

      Does not look like it will make the experienced ones more effective. But there seems to be a strong delusion going round to that effect: https://mikelovesrobots.substa... [substack.com]

      I guess the shiny new toy is really just a toy and the shine is mostly flawed perception.

  • Four Percent? (Score:5, Insightful)

    by SlashbotAgent ( 6477336 ) on Saturday February 07, 2026 @09:11AM (#65974414)

    Hyping 4% sounds like a scammer hyping shit coins and NFTs.

    I fully expect an increase, but this article is pure spam/hype.

    • by znrt ( 2424692 )

      Hyping 4% sounds like a scammer hyping shit coins and NFTs.

      hey, it's not an ordinary 4%. it's an inflection point 4%!

      this article is pure spam/hype.

      like almost every other article here. but it pays msmash's salary so we can keep enjoying this formerly venerable news site and crack jokes at the asinine nonsense that msmash gets paid for posting these days. just relax, take it all in and feel the time passing, worlds colliding in the far distance. what's not to like?

    • 4% seems like a substantial number given how many git hosted projects there are.

    • by gweihir ( 88907 )

      Indeed. Also notice how the predictions have gotten smaller? Apparently, things like "500% coding speed increase!" are not believable anymore. The predictions are still 100% lies though.

  • In fact, once you understand how to use it most effectively, it's quite impressive... BUT it still occasionally rewrites your entire code base to fix a button, or introduces environment-breaking changes for a new feature, etc. If you don't know how to code, it's a godsend.
    • by twdorris ( 29395 ) on Saturday February 07, 2026 @09:35AM (#65974444)

      if you don't know how to code it's a god send.

      I contend just the opposite. I'd be terrified to watch someone who doesn't know how to code use Claude Code, for the very reasons you mentioned. It's going to do amazing stuff most of the time and really stupid stuff periodically. To pick up on that stupid stuff, knowing how to code and conscientiously reviewing changes with that background knowledge is the only way to get those impressive results.

      • I use Claude as a tool to get things done. I’m more concerned about solving the problem than understanding the intricacies of 8051 assembly code.

        • I’m more concerned about solving the problem than understanding the intricacies of 8051 assembly code.

          That's not a common use case.

      • by gweihir ( 88907 )

        Indeed. This reminds me of why IBM Watson failed to make treatment plans in the medical field: it usually was somewhat better than what MDs did, but occasionally it did something really stupid that no competent MD would ever have done and killed the patient. AI "coding" is similar. It looks good, it may even work, but in the real world you need reliability and security and probably also maintainability. The errors LLM-type AI makes and the fact that finding subtle errors in code is very, very hard give "AI cod

        • In the real world, you rarely need small little one-off programs.

          The vast majority of all programming is small incremental refactors to an existing codebase. If you don't understand the codebase (or at least can't find and understand the relevant parts), you can't modify it. That is what most programmers do most of the time.
    • Assuming, is bad. (Score:3, Insightful)

      by Anonymous Coward

      ..if you don't know how to code it's a god send.

      Imagine society continuing to abuse this excuse in every other aspect of your life.

      If the mechanic doesn't really know how to change the brakes on your car, it's a god send.

      If your doctor doesn't really know how to do that surgery on you, it's a god send.

      Oh, you'll be sending to a God alright. Perhaps quite prematurely.

      BUT it still occasionally rewrites your entire code base to fix a button or introduces environment-breaking changes for a new feature etc.

      Being completely dismissive of this is what enables CxOs to assume a 12-year-old who can text the AI godz faster than every seasoned programmer is good enough to replace them. Every t

      • The difference being I’m not a comp-sci major or programmer by trade, so I’m happy with what Claude does for me.

    • by bjoast ( 1310293 )

      BUT it still occasionally rewrites your entire code base to fix a button

      I think that's how it should work in many situations. Iterating on code is not strictly necessary when you have agents that can reason about and output thousands of lines of code in minutes. Any new requirement has the potential to result in architecture-breaking changes that may be best solved through a complete rewrite. What happens with the code in the implementation may not, in a completely spec-driven scenario, be relevant at all, assuming no human developers are expected to work on it.

      • Only if you have really, really good integration tests, which are expensive to build, maintain, and run. As AIs make mistakes just as humans do, testing is equally needed. We are not used to random errors from, say, compilers, but LLMs and other ML models make a lot of mistakes. They need to be verified independently, just as humans do. Large changes are really hard to verify.
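      A minimal sketch of the kind of behavior-pinning test being described, in Python. The function and data are hypothetical stand-ins for "the code the agent is allowed to rewrite":

```python
# Sketch: pin the observable contract of a function before letting an agent
# rewrite it wholesale. All names and data here are hypothetical.

def summarize_orders(orders):
    """Aggregate order count and total per customer (the rewrite target)."""
    summary = {}
    for order in orders:
        entry = summary.setdefault(order["customer"], {"count": 0, "total": 0.0})
        entry["count"] += 1
        entry["total"] += order["amount"]
    return summary

def check_contract():
    """Golden-behavior checks: any rewrite, human or AI, must keep these passing."""
    orders = [
        {"customer": "alice", "amount": 10.0},
        {"customer": "bob", "amount": 5.0},
        {"customer": "alice", "amount": 2.5},
    ]
    assert summarize_orders(orders) == {
        "alice": {"count": 2, "total": 12.5},
        "bob": {"count": 1, "total": 5.0},
    }
    assert summarize_orders([]) == {}  # edge case: no orders

check_contract()
```

      The point is not the toy function but the discipline: the assertions depend only on observable behavior, so a wholesale rewrite can be checked mechanically instead of by re-reading thousands of generated lines.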
    • by dj.delorie ( 3368 ) on Saturday February 07, 2026 @11:13AM (#65974576) Homepage

      In my experience... if I ask Claude to help with something that's my strong point (like my core coding), it's like training a junior programmer, and I can solve my problems faster myself. But... when I need something outside my core expertise, like helper programs, wrappers, or interfaces to other technologies I'm not familiar with, it's a very fast way to get 95% of the way there without wasting time climbing the learning curve myself for a one-time need.
      Like any tool, you have to know when to use it and when not to, and what its strengths and weak points are.

  • by Alain Williams ( 2972 ) <addw@phcomp.co.uk> on Saturday February 07, 2026 @10:25AM (#65974494) Homepage

    A rich $EvilEntity that was able to compromise Claude and get it to insert remotely exploitable bugs could do a great deal of damage. Especially if Claude is used by programmers who are inexperienced or too lazy to check its code.

    • Possibly. My thinking, however, is that there may be a non-negligible number of experienced coders who would notice such shenanigans and get the word out, which would junk Claude's reputation quite quickly. Implicit (unverified) trust is one of the most important foundations of social engineering exploits, but at the same time a very difficult commodity to acquire.

  • The worst metric (Score:4, Insightful)

    by thecombatwombat ( 571826 ) on Saturday February 07, 2026 @10:25AM (#65974496)

    I mean for decades every developer has known number of commits or lines of code is a horrible metric.

    I have to wonder how much this is like last year, when there was a similar story on /. about how GitHub said something like 90% of users had tried Copilot. But then you read the article and realize that means how many had clicked the link they had repeatedly thrown on the front page for every single GitHub user; the metric was flat-out a lie.

    Claude *is* a game changer. It and tools like it are here to stay, but this can still be a hype piece, and I'm pretty sure it is.

  • “It is difficult to get a man to understand something, when his salary depends upon his not understanding it!”
  • Aside from the problems that others mention, I have found that, while it can solve a problem and the code works, many times it doesn't choose the best solution.

    I wanted some code to edit a database table, and there was in the framework a grid control that did practically all of it, with basically one hundred lines of code. But the AI didn't choose it; it chose some other control that required lots more code, with the associated difficulties when modifying or expanding it. The grid control was relatively new, so p

    • Yours was the closest I could find to an actually insightful comment. Given the state of the Slashdot moderation system, I'm not surprised to find it unmoderated. In particular, I noticed a number of "insightful" comments that were shallow insights (at my best (but typically weak) level of politeness).

      My approach was via the angle of fluency, but all those searches failed until I got down to the vague level of "understanding", where your comment popped out (on the partial match). So first my areas of agreem

      • by gweihir ( 88907 )

        The thing is, reviewing code with hidden defects does not really work. This is known from finding backdoors placed by developers: it takes much longer and requires better skills than a reimplementation by trusted people. While LLM-type AI is not intelligent, it is conditioned to make code look as good as possible. In many cases that will result in well-hidden defects, and then you run into a similar problem.

        The thing is, a competent (!) coder will think through everything when writing code for a problem. T

        • by shanen ( 462549 )

          I don't think I made my point clearly. I'm suggesting that reading well may be a distinctive and different skill that may not be tightly coupled to writing well. However, I may be projecting, because I think I am a much better reader than writer when it comes to natural language, both for English and Japanese. When I'm writing code, the situation was quite different... Perhaps it's fundamentally unfair to compare them even though we use the same verbs (in English) to describe the activities.

          • by shanen ( 462549 )

            c/was quite different/is quite different/

            Or am I still writing any code these years? Very little original code "by hand", but a moderate amount of "maintenance" work on old code.

            (And the censor mods are becoming quite redundant.)

    • by gweihir ( 88907 )

      Much more code = technical debt.

      Software is like building a house of cards: at some complexity, it will all crash down. You have to keep things as simple as possible. "KISS" is not a nice add-on; it is survival-critical.

  • by ZipNada ( 10152669 ) on Saturday February 07, 2026 @11:57AM (#65974622)

    I have to think that most people posting here don't use Claude for coding, or at least not recently. It is amazingly effective, and has steadily and noticeably improved month over month. From the article (which many apparently didn't bother to read):

    "Provide Claude (the CLI) an input such as a spreadsheet, a codebase, a link to a webpage and then ask it to achieve an objective. It then makes a plan, verifies details, and then executes it."

    And that's what it does. It can describe in detail what it would accomplish and write it out in a document for you to review. You can make revisions to the document directly or just tell it what to change. You can tell it to implement things in phases so you can check for correctness step by step. The code quality is usually quite good, and if there are bugs you can just paste in the error messages and it will fix them.

    This is not a bad thing; it is empowering. An average programmer turns into a 10x. An excellent programmer becomes a 100x. What's not to like?

    • This is not a bad thing; it is empowering. An average programmer turns into a 10x. An excellent programmer becomes a 100x. What's not to like?

      Sure, when it works... As someone using Claude 4.5 daily, it fails only slightly less than it succeeds... and I only give it simple tasks. I had to do a report with some complex math to analyze user behavior... and I don't do math often, so I was happy to send it to Claude... and it spit out code that looked right; it was using outdated syntax, but that's fine. Then I showed the output to the data scientist I was working with and she started asking questions... we walked through a few instances and... yeah..

      • >> Look, you've drank the Kool Aid

        You told Claude to "do a report with some complex math to analyze user behavior"; let's see your prompts. And then you had to show "the output to the data scientist" to determine if it was correct; you couldn't figure it out on your own. I'm thinking you don't have the skill or the patience to use these tools.

        AI isn't a magic wand. You can't just wave your hands in the general direction of what you think you might want. You have to be specific and break the task down i

      • by gweihir ( 88907 )

        Look, you've drank the Kool Aid. AI is your religion and no amount of facts will dissuade you from your faith in AI. I'm an atheist. When I can see what you're talking about, I'll believe it... just like every religious belief I've been told in my life.

        And that is really what is going on here: A lot of believers, a much smaller number of people that see actual reality. Incidentally, about 80% of the human race is religious. Explains a few things nicely.

        You describe one of the situations that will get very expensive for the believers in the not too distant future: A competent coder does a review and sees no problems. But things are still broken. Without that data scientist being suspicious, that whole thing might have turned out badly, probably much later.

  • And yet, most AI code developers I know, professionally and personally, are switching to OpenAI's Codex because the quality of tooling and results is on par, but the costs are FAR less.
    • I've switched back and forth using Github Copilot. GPT seems kinda retarded compared to Claude 4.5. Is Codex using a different LLM?
  • LLM-type AI cannot code for shit and that will not change, because it cannot change. Sure, it can fake it to a degree, and simplistic boilerplate code is sometimes within reach (but it can mess that up as well), but it really cannot replace actually competent developers.

  • I've been a game dev for 25 years, and I'm not so full of hubris as to say that no LLM will ever be as good at coding as me.

    But most game code isn't public domain, so there's not going to be a lot of training on it. There's a lot of weird stuff that we do at the behest of designers that nobody would ever think to do (whether that's because it's a terrible idea or a brilliant idea is really only knowable after people play the game).

    The devil has always been in the details, and so much of good game development i
