
Software Developers Say AI Is Rotting Their Brains (404media.co) 56

An anonymous reader quotes a report from 404 Media: On Reddit, Hacker News and other places where people in software development talk to each other, more and more people are becoming disillusioned with the promise of code generated by large language models. Developers talk not just about how the AI output is often flawed, but also about how using AI to get the job done is often a more time-consuming, harder, and more frustrating experience, because they have to go through the output and fix its mistakes. More concerning, developers who use AI at work report that they feel like they are de-skilling themselves and losing their ability to do their jobs as well as they used to.

"We're being told to use [AI] agents for broad changes across our codebase. There's no way to evaluate whether that much code is well-written or secure -- especially when hundreds of other programmers in the company are doing the same," a UX designer at a midsized tech company told me. 404 Media granted all the developers we talked to for this story anonymity because they signed non-disclosure agreements or because they fear retribution from their employers. "We're building a rat's nest of tech debt that will be impossible to untangle when these models become prohibitively expensive (any minute now...)."
"I had some issues where I forgot how to implement a Laravel API and it scared the shit out of me. I went to university for this, I've been a software engineer for many years now, and it feels like I am back before I ever wrote a single line of code," a software developer at a small web design firm told 404 Media. "It's making me dumber for sure," a fintech software developer added.

"It's like when we got cellphones and stopped remembering phone numbers, but it's grown into mentally outsourcing 'thinking' in general. I feel my critical thinking and ability to sit and reason about a problem or a design has degraded, because the all-knowing-dalai-llama is just a question away from giving me his take. And supposedly I tell myself I'll just use it for inspiration, but it ends up being my only thought. It gives you the illusion of productivity and expertise, but at the end of the day you are more divorced from the output you submit than before."

A software engineer at a FAANG company said: "When I was using it for code generation, I found myself having a lot of trouble building and maintaining a mental model of the code I was working with. Another aspect is that I joined late last year and [the company's] codebase is massive. As a new hire, part of my job is to learn how to navigate the codebase and use the established conventions, but I think the AI push really hampered my ability to do that."

Comments Filter:
  • by fahrbot-bot ( 874524 ) on Wednesday May 13, 2026 @05:12PM (#66142317)

    "It's like when we got cellphones and stopped remembering phone numbers, ...

    Or home phones with speed-dial.

    You let someone, or something, do tasks for you, and you eventually forget how to do them yourself.

  • Use it or lose it (Score:4, Insightful)

    by Himmy32 ( 650060 ) on Wednesday May 13, 2026 @05:21PM (#66142323)

    Knowledge in general is use it or lose it. I remember my grandpa showing me how to use a slide rule and lookup tables in books, and waxing about how his coworkers were worried that calculators were going to rot brains. Even in math, tools have kept shifting where knowledge is needed, as with stats packages, Maple, or Wolfram Alpha.

    What's scary here is that the need for knowledge isn't being shifted; the practice is just being outsourced.

    • Sadly, I have forgotten how to use a slide rule, though my old slipstick is still sitting at the back of the bookshelf near my computer. Probably covered with dust, though.

      I do still use my abacus occasionally, but not "as designed". It's handy as all get-out for binary arithmetic and tracking bit flipping. Which isn't what an abacus is for, of course, but that's what I use it for.

      • by dskoll ( 99328 )

        I still remember how to use a slide rule (for multiplication, anyway...) even though I haven't actually used one in decades.

        It's all just logarithms. Logarithms turn multiplication into addition.
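        A minimal sketch of that trick in Python: the log scales on a slide rule add lengths, and adding logarithms multiplies the numbers (the function name is illustrative, not a real library call).

```python
import math

def slide_rule_multiply(a, b):
    # A slide rule lines up lengths proportional to log10(a) and log10(b);
    # adding those lengths computes log10(a) + log10(b) = log10(a * b),
    # and the product is read back off the log scale.
    return 10 ** (math.log10(a) + math.log10(b))

print(round(slide_rule_multiply(3, 7)))  # 21
```

        A physical slide rule only gives two or three significant figures, which is why the order-of-magnitude sense mentioned elsewhere in this thread mattered so much.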

    • by ffkom ( 3519199 )
      The problem with LLMs is not that people are losing the ability to do one or another specific thing, the problem is that they are unlearning to think as a whole. If you practiced juggling or playing piano at some point in your life, and later, due to a lack of opportunity to practice, unlearned it, it may be a pity, but no big deal, and you can still move your body to do other things. But if you stop moving your limbs entirely, even if only for months, the muscles atrophy, and you will have a hard time doin
    • by gweihir ( 88907 )

      Throwing out slide rules was a pretty expensive mistake. As a competent slide-rule user, you do not make order-of-magnitude mistakes. As a calculator user, that is a main risk. That does not mean you always have to use a slide rule, but if it were still taught, you would have a fast way to check calculations with a different tool and stay in practice with very little effort.

    • by dskoll ( 99328 )

      I think there's a big difference between calculators and AI. Calculators made doing arithmetic much easier. But arithmetic is just rote; there's no creativity involved. If you are asked for the product of 59 * 74, you're going to get 4366 if you do it correctly, whether you do it in your head, on paper, or with a calculator. And if you do it without a calculator, you're still going to follow a rote algorithm.

      Software development is different. Writing a piece of software requires creativity, IMO, for all
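      The rote algorithm in question, schoolbook long multiplication, can be sketched digit by digit (a toy illustration, not production code):

```python
def long_multiply(a: int, b: int) -> int:
    # Schoolbook long multiplication: multiply a by each digit of b
    # (rightmost first), shift each partial product by its place value,
    # then sum the rows, exactly as done on paper.
    total = 0
    for place, digit in enumerate(reversed(str(b))):
        total += a * int(digit) * 10 ** place
    return total

print(long_multiply(59, 74))  # 4366
```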

  • by Somervillain ( 4719341 ) on Wednesday May 13, 2026 @05:32PM (#66142335)
    I have to scrutinize pull requests much more closely than ever before. I have a handful of coworkers who like to let Claude do everything... which honestly isn't a concern if they test it, write the tests themselves, and understand what it does. However, I have had to reject several PRs because they were having AI write the tests AND the code. Obviously, AIs are prone to write unit tests that justify their code's behavior, not the actual intended function of the code.

    There's a temptation to let Claude do everything... but when I've tried it, I had to edit its output heavily. Usually the code it produced was unprofessional or didn't even resemble working code. However, it did help me out a few times with libraries I've never used before. I'm just very careful about writing my own unit tests and verifying end to end. Additionally, I've been lazy and just pointed Claude at a stack trace and asked it to tell me why it was failing (a project I'm unfamiliar with). It failed 100% of the time. In fairness, so did I... they were tricky bugs... I had to contact the author and have him explain what he intended to do. Its ability to understand code is really lacking... whereas that should be its greatest strength.

    I am an AI realist. I give it credit where it works and complain where it's overhyped. I have multiple AI evangelists on my team. For them, it's a religion...do everything in AI...AI is all powerful. To me, it's a tool in my toolbox.

    The difference between us is that I see AI as it is today... their vision is AI as they imagine it, based on sci-fi books and movies. In their vision, Claude is smart and knows what it's doing and will guide you to the promised land with a layover in nirvana and bliss. All hail AI!!!!

    The disturbing part is they seem to have noticeably regressed and believe Claude over their own judgment.
  • People are getting fired because managers guarantee that vibe coding works. Meanwhile, I ordered coffee from an app that had worked fine for years and ended up waiting half an hour before needing to find an email proving I had even paid at all, which the baffled store employees told me looked like an Uber Eats delivery that had already been delivered. But hey, the vibe-recoded app gave my money to the company, and that's the important thing!
    • by ffkom ( 3519199 )

      People getting fired because the managers guarantee vibe coding works.

      And even when they notice that vibe coding does not work that great, they will still try to move expenses away from wages towards tokens paid to some LLM hoster. And once they find out how expensive that gets over time... well, they probably have been replaced by LLMs themselves at that time.

    • by gweihir ( 88907 )

      Indeed. Once again, non-tech personnel think they know how tech works and can make competent decisions about it. All that shows is that software engineering is a very immature discipline and that the "managers" are still (as they always were) generally really bad at their jobs. Imagine a "manager" telling a construction engineer that a bridge will definitely take a certain load when the engineer knows that is not true. What would happen is that the engineer escalates or quits. Non-tech personnel cannot make c

  • by rezachi ( 10503306 ) on Wednesday May 13, 2026 @05:42PM (#66142349)
    I've used AI agents to assist with troubleshooting some IT issues. And while they did eventually get me there, there were two glaring problems:

    * On issues where I was familiar with the system, the agent would make wacky suggestions or tie things together as having the same root cause even when that was impossible. You could waste a lot of time going down these rabbit holes if you didn't know what you were doing.
    * On issues where I was less familiar, I found that after spending hours troubleshooting the system, I arrived at the answer but had not gained any knowledge of how the system worked or how the troubleshooting plan was determined. You never get to be a senior-level contributor without that kind of knowledge.

    So it worked, but whether this is a direction an organization really wants to go depends on its goals.
    • by hjf ( 703092 )

      I mean, that's not a bad thing either. I sometimes DO NOT want to learn "new to me" things. I've been contributing to an ancient, but still used, piece of software called Xastir. It's VERY OLD spaghetti code, low-level X11 with Motif. I DO NOT want to learn Motif. It's not a marketable skill or something I'll ever need. But I let the AI code a few contributions (one of them was replacing some parts with Cairo fonts for antialiasing on high-DPI screens, and the other was fixing a very old screen drawing routine that took 2-

  • I received my first AI-generated pull request recently. It was... not great. A lot of extra code that was not necessary at all, some odd naming conventions, and the size of it all made the whole change set difficult to parse. This wasn't a typical "Well, this works and it's okay, it's just not the way I would do it." Some sections were legitimately terrible.

    I have been using AI tools somewhat, but mostly to examine existing structures and answer questions. It's pretty good at that. But the code? I prefer to

  • There's no way to evaluate whether that much code is well-written or secure

    Sorry? Then you're not doing your job.

    Pre-LLM developers didn't remember, e.g., asm system calls either, and that's not brain rot but abstraction. LLMs introduce a whole new level of abstraction, but it's non-deterministic, so you can use as much LLM as you like, but you still have to do your job. If you don't do your job, then you simply aren't a software developer; you're a vibe coder.

    And vibe coders are fine, they can do damn cool stuff, but they aren't software developers and shouldn't be discussing ab

  • If using an LLM for coding is rotting your brain, then you likely were never using your brain, you were simply translating a requirement from one human language into software. That's accounting, not creating, and your brain has been rotting the entire time.

    Seriously. Software 'development' is little more than acting as a human requirements compiler, and that ship has sailed. Engineers - of any discipline - applying math & developing algorithms - is an endeavor that takes far more than 'software devel

    • by ffkom ( 3519199 )
      Your "no true Scotsman" argument isn't compelling. If people used LLMs only to "lookup facts" or to "translate one language into another", there would be way less reason for concern (well, they would still need to retain the ability to check the results, but that is another topic). But people use LLMs for everything, en masse, from "applying math & developing algorithms" to asking what time it is and what to eat today.
    • Who is so narrow-minded that they believe language translators are accountants? Surely not the people who translated hieroglyphs... Sanskrit... cuneiform... Linear B, etc. Or those who translate English into Russian during a nuclear confrontation (we pray...). So who, then? Only accountants?
    • by gweihir ( 88907 )

      And in actual reality, LLMs cannot do "requirements compilation". That one requires General Intelligence.

  • "There's no way to evaluate whether that much code is well-written or secure -- especially when hundreds of other programmers in the company are doing the same, ... We're building a rat's nest of tech debt that will be impossible to untangle when these models become prohibitively expensive (any minute now...)

    How did you not build a rat's nest BEFORE AI?

    AI increases output: it magnifies existing issues, but it does not magically create new ones. I strongly suspect when you had hundreds of other programmers

    • by gweihir ( 88907 )

      That is really nonsense. With actual intelligence, you get better at things and the tech debt gets smaller. With code reviews, you evaluate not only the code but the coder. Not all juniors turn into competent coders, and you steer those into other paths.

      None of that works for LLMs.

  • "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."

    But now it's not a matter of being not smart enough; it's about being left with the exhausting, miserable work that never should have existed in the first place.

    • by gweihir ( 88907 )

      Indeed. As you get better as a coder, debugging may get harder but you need far less of it. LLMs killed that and, on top of that, produce "review resistant" code. I expect we will see a lot of LLM-caused burnouts in the next few years and that will reduce the number of desperately needed good coders even further.

  • AI probably shouldn't be used outside of hobbies. I've been using it for a few months now. It's let me do things which I could never do as a mediocre programmer / someone who struggles with advanced algebra. But I would never want to, or claim to be, a software developer in a job. I've used it to add VR support to some games. Perfectly fine as a fun hobby, but there's no way I could support these programs/codebases as a paid job. I still hit issues where AI just stops working/doesn't understand certain prob
  • It's like guiding a bunch of junior devs and correcting their mistakes on steroids, all day long, every day. F*#ing exhausting!

  • Anyone complaining about the AI ruining their ability to write PHP code is not very high on the food chain. Somewhere between clams and crabs. Seriously, while AI is not good at writing difficult, new data structures code, it is very good at writing GUI code, API calls, and with suitable prompting, I can control it very well. I think this article is alarmist, silly, and the people being interviewed seem to be on a par with completion 2 in terms of IQ.
  • At my large company, we have a fantastic group.
    Here's how we manage all of us using AI on our monolithic code base:

    1: our Jira tickets are extremely well specified, by both humans and now also vetted by AI
    2: an engineer instructs the AI to look at the Jira ticket and make a plan.
    3: a 2nd AI is told to "critique this plan like you hate it"; you end up with a much better plan
    4: create unit tests that fail on the current code but will pass when the bug is fixed or the feature is implemented. Create as many as you need to definitively pin it down, run all

  • by swillden ( 191260 ) <shawn-ds@willden.org> on Wednesday May 13, 2026 @08:07PM (#66142525) Journal

    Dude, I've been writing code for 40 years. I've used so many different tools, stacks, libraries and APIs that at this point I don't remember any of them, and I haven't remembered them for years, and it doesn't matter at all. Sure, I have to look everything up, but that's fine, that doesn't matter. What matters is that I know when something looks wrong, or hard to maintain, or inefficient, or insecure, or... pick the axis. And I can dig in and find the problem. Anyone can tell if code works, that's easy. Understanding when and why it might break or otherwise impose additional costs, that's the real skill.

    Which, as it happens, is exactly the skill you need to use an LLM effectively. Also the skill you need to understand legacy code, review colleagues' commits, etc., etc., etc. I used to say that the ability to read and understand code is an underrated skill, but an old friend corrected me at lunch a couple of weeks ago, saying that the ability to read and understand code is the most important software engineering skill, and always has been. Upon reflection, I agreed. And LLMs make this clearer than ever before.

    • +1 to this. And undue reliance on LLMs is the antithesis of being able to read and understand code, for the vast majority of LLM users. LLMs aren't designed to provide correct answers; they are designed to provide plausible answers. Therein lies the trap.

      Bottom line, IMO, is that an LLM will help the good / experienced developer get things done faster, for a certain subset of problems. LLMs will hold back the inexperienced / novice developer, if not actually turn them into a liability.

  • Our group has been experimenting with LLMs (I refuse to call them AI, because they're no such thing) on a reasonably large and extremely complicated code base. What we're finding is that while the LLM is often right, when it's wrong, it's plausibly wrong. That's problem #1: undue dependence on the LLM weakens the group's sense of "that's not the right answer," leading to bug churn.

    Problem #2 is that a newer developer relying on LLMs for code writing or debugging misses out on the chance to develop that

  • But these are smart people, and you can only fool them for a while. And they are starting to notice that something is really badly off. Good.

  • We're being told to use [AI] agents for broad changes across our codebase. There's no way to evaluate whether that much code is well-written or secure -- especially when hundreds of other programmers in the company are doing the same

    I mean, that's an easy one...use AI to do the evaluation!

    It might be funny, if bosses weren't actually demanding this!

  • Software engineers are managing Claude as a development team.
    I suspect the problem is that software engineers usually make poor managers.

  • I retired from software development in 2023. My mantra is:

    I'm so glad I retired when I did. I'm so glad I retired when I did. I'm so glad I retired when I did....
