Software Developers Say AI Is Rotting Their Brains (404media.co)
An anonymous reader quotes a report from 404 Media: On Reddit, Hacker News and other places where people in software development talk to each other, more and more people are becoming disillusioned with the promise of code generated by large language models. Developers talk not just about how the AI output is often flawed, but about how using AI to get the job done is often a more time-consuming, harder, and more frustrating experience, because they have to go through the output and fix its mistakes. More concerning, developers who use AI at work report that they feel like they are de-skilling themselves and losing their ability to do their jobs as well as they used to.
"We're being told to use [AI] agents for broad changes across our codebase. There's no way to evaluate whether that much code is well-written or secure -- especially when hundreds of other programmers in the company are doing the same," a UX designer at a midsized tech company told me. 404 Media granted all the developers we talked to for this story anonymity because they signed non-disclosure agreements or because they fear retribution from their employers. "We're building a rat's nest of tech debt that will be impossible to untangle when these models become prohibitively expensive (any minute now...)." "I had some issues where I forgot how to implement a Laravel API and it scared the shit out of me. I went to university for this, I've been a software engineer for many years now, and it feels like I am back before I ever wrote a single line of code," a software developer at a small web design firm told 404 Media. "It's making me dumber for sure," a fintech software developer added.
"It's like when we got cellphones and stopped remembering phone numbers, but it's grown into mentally outsourcing 'thinking' in general. I feel my critical thinking and ability to sit and reason about a problem or a design has degraded, because the all-knowing-dalai-llama is just a question away from giving me his take. I tell myself I'll just use it for inspiration, but it ends up being my only thought. It gives you the illusion of productivity and expertise, but at the end of the day you are more divorced from the output you submit than before."
A software engineer at a FAANG company said: "When I was using it for code generation, I found myself having a lot of trouble building and maintaining a mental model of the code I was working with. Another aspect is that I joined late last year and [the company's] codebase is massive. As a new hire, part of my job is to learn how to navigate the codebase and use the established conventions, but I think the AI push really hampered my ability to do that."
Brain rot even farther back ... (Score:4, Insightful)
"It's like when we got cellphones and stopped remembering phone numbers, ...
Or home phones with speed-dial.
You let someone or something do tasks for you and you eventually forget how to do them yourself.
Re: (Score:2)
If you let yourself forget, sure.
I've been using electric arcs to start gas grills for a long time now, but I still know how to use a match.
Re: Brain rot even farther back ... (Score:3)
It depends on the complexity... Safety matches are an ultra-simplified procedure. I was taught to use flint and steel but have forgotten how.
Re: (Score:2)
Re: (Score:2)
I've been using electric arcs to start gas grills for a long time now, but I still know how to use a match.
Technically, using a match and remembering numbers are probably different kinds of memory.
Re: (Score:3)
Re:You're doing it wrong (Score:4, Interesting)
I started using Gemini and found it's far better than my best employee ever was.
My best employee was very very good, but I'd have to wait a day to see results of the meeting.
One thing he (best employee) did that AI can't do is make good judgement calls. No question there.
However, when the AI spits out a half day's work in 10 seconds, it allows me, the analyst/designer/project manager, to rapidly analyze the output and do another iteration of design ideas immediately, or as fast as I can analyze, process, and respond.
So I can get dozens of turnarounds per day compared to even a good employee.
Working in small logical work units yields very good results. I haven't rolled up my sleeves and done any 12-hour days of deep concentration on code for years, and I don't need to. I have a lot of knowledge and can review code, but I don't need to double-check syntax or look for typos; that's the grunt work.
I don't think that I'm losing anything; I do the architecture and design. I think I'm getting huge value and speed from Gemini. The key for me is that I work at mid to high levels of abstraction, work in small logical units, review the output, and let the tool worry about the grunt work. I work as a product designer; it works as a coder. My designs are improving significantly from having the AI critique them and suggest possible improvements, or point out tools that I did not know about. I don't need to code. The caveat is that I am not building mission-critical or real-time software.

The reality is that maintenance is a dead concept. As the coding agents/models improve, you can conceivably drop your whole codebase into the next, better model every time one comes out, and it will do the optimizations and grunt work.
Don't hate me. I can see the future and it is grim for people: coders, entry-level people. But "YOU WILL USE AI for coding" is already here for non-mission-critical applications. It's sad but true to say that "quality" is a quaint and outdated concept (like privacy); good enough is today's "quality." Don't shoot the messenger, but barely working is still working. If it doesn't work, replace it; don't maintain it.
There will always be a need for true experts and good designers, but the writing is on the wall: AI IS REPLACING all junior functions at this time. If you are doing a web-based database system, pfft, it barely matters if there is a bug. I regret that statement, but I feel it's today's reality.
Re: (Score:2, Interesting)
Re: (Score:3)
It might be a red flag if you want them to be focused on babysitting the probabilistic code generator, but if you want an actual developer who can think through a problem on their own, a lack of AI usage in their studies is a huge benefit.
Re: (Score:2)
Agree. Gemini and Claude are both super useful, so long as they are used properly. I haven't had as much luck with other models, so I stick with these two.
But how you use them, and how much you use them, depends greatly on the nature of your project. It still requires intelligence and skill to use them well, and if you use them poorly the results will burn you. And for some specific parts of a total solution, you simply can't use them, and will need to do those parts yourself. And it is on you to recog
Re: (Score:2)
"There will always be a need for true experts, good designers, but the writing is on the wall, AI IS REPLACING all junior functions at this time."
The irony in this statement is so rich: where do you suppose true experts and good designers come from? They're made, not born, and the source material for an expert developer is a junior developer. Using AI to eliminate entry level developer positions is NOT a sustainable course. But it does serve the hypercapitalist masters who have created AI to serve their own
Re: You're doing it wrong (Score:2)
"think I'm getting huge value and speed from gemini... the key to me is that I work at mid to high levels of abstraction, work in small logical units, review the output, and let the tool worry about the grunt work. I work as a product designer, it works as a coder. My designs are improving significantly"
Pretty much my experience. I ran into the nondeterministic behavior/context window issues very early on and modified my methodology. I do small, discrete pieces, always add existing schemas/specs etc so Cha
Use it or lose it (Score:4, Insightful)
Knowledge in general is use it or lose it. I remember my grandpa showing me how to use a slide rule and lookup tables in books, and waxing about how his coworkers were worried that calculators were going to rot brains. Tools in math have since shifted where knowledge is needed even further, like stats packages, Maple, or Wolfram Alpha.
What's scary here is that the need for knowledge isn't being shifted; we're just outsourcing the practice.
Re: (Score:1)
I used an HP 48SX in college and it has one hell of a learning curve. You had to know how to use it, which is not simple, and you had to know how to translate algebraic entry to RPN.
Re: (Score:1)
Sadly, I have forgotten how to use a slide rule, though my old slipstick is still sitting at the back of the bookshelf near my computer. Probably covered with dust, though.
I do still use my abacus occasionally, but not "as designed". It's handy as all get-out for binary arithmetic and tracking bit flipping. Which isn't what an abacus is for, of course, but that's what I use it for.
Re: (Score:2)
I still remember how to use a slide rule (for multiplication, anyway...) even though I haven't actually used one in decades.
It's all just logarithms. Logarithms turn multiplication into addition.
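That identity is easy to sanity-check in a few lines of Python (hypothetical function name, and `round` papers over the tiny floating-point error):

```python
import math

# Slide-rule principle: to multiply two numbers, add their logarithms,
# then exponentiate the sum back out of log space.
# log10(a * b) = log10(a) + log10(b)
def slide_rule_multiply(a, b):
    return 10 ** (math.log10(a) + math.log10(b))

print(round(slide_rule_multiply(3, 7)))  # 21
```

A physical slide rule does the addition mechanically: the scales are marked logarithmically, so sliding one scale against the other adds the two logarithms.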
Re: (Score:2)
Re: (Score:2)
Throwing out slide rules was a pretty expensive mistake. As a competent slide-rule user you do not make order-of-magnitude mistakes; as a calculator user, that is a main risk. That does not mean you always have to use a slide rule, but if it were still taught, you would have a fast way to check calculations with a different tool and stay in practice with very little effort.
Re: (Score:2)
I think there's a big difference between calculators and AI. Calculators made doing arithmetic much easier. But arithmetic is just rote; there's no creativity involved. If you are asked for the product of 59 * 74, you're going to get 4366 if you do it correctly, whether you do it in your head, on paper, or with a calculator. And if you do without a calculator, you're still going to follow a rote algorithm.
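That rote algorithm is mechanical enough to transcribe directly; here is a sketch of the schoolbook method (hypothetical function name, non-negative inputs assumed):

```python
def long_multiply(a, b):
    """Schoolbook long multiplication: one partial product per digit of b,
    exactly the steps you would follow on paper."""
    total = 0
    for place, digit in enumerate(reversed(str(b))):
        # Each digit of b contributes digit * a, shifted by its place value.
        total += int(digit) * a * 10 ** place
    return total

print(long_multiply(59, 74))  # 4366 -- same answer in your head, on paper, or by machine
```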
Software development is different. Writing a piece of software requires creativity, IMO, for all
We see this problem + AI is a tool, not a religion (Score:5, Insightful)
There's a temptation to let Claude do everything... but when I've tried it, I had to edit the result heavily. Usually the code it produced was unprofessional or didn't even come close to working. However, it did help me out a few times with libraries I've never used before; I just am very careful about writing my own unit tests and verifying end to end. I've also been lazy and just pointed Claude at a stack trace and asked it to tell me why a project I'm unfamiliar with was failing. It failed 100% of the time. In fairness, so did I... they were tricky bugs... I had to contact the author and have him explain what he intended to do. Its ability to understand code is really lacking, whereas that should be its greatest strength.
I am an AI realist. I give it credit where it works and complain where it's overhyped. I have multiple AI evangelists on my team. For them, it's a religion...do everything in AI...AI is all powerful. To me, it's a tool in my toolbox.
The difference between us is that I see AI as it is today....their vision is AI as they imagine it...based on sci fi books and movies. In their vision, Claude is smart and knows what it's doing and will guide you to the promised land with a layover in nirvana and bliss. All hail AI!!!!
The disturbing part is they seem to have noticeably regressed and believe Claude over their own judgment.
Re:We see this problem + AI is a tool, not a relig (Score:5, Insightful)
I have to scrutinize pull requests much more closely than ever before
The disturbing part is they seem to have noticeably regressed
And I think this is core to the discussion, because output from evangelists is going up while hollowing out the skills the next generation needs to do the review.
You're fired cause the manager says it works (Score:2)
Re: (Score:2)
People getting fired because the managers guarantee vibe coding works.
And even when they notice that vibe coding does not work that great, they will still try to move expenses away from wages towards tokens paid to some LLM hoster. And once they find out how expensive that gets over time... well, they probably have been replaced by LLMs themselves at that time.
Re: (Score:2)
Indeed. Once again non-tech personnel thinks it knows how tech works and can make competent decisions about it. All that shows is that software engineering is a very immature discipline and that the "managers" are still (as they always were) generally really bad at their jobs. Imagine a "manager" telling a construction engineer that a bridge will definitely take a certain load when the engineer knows that is not true. What would happen is that the engineer escalates or quits. Non-tech personnel cannot make c
It stops the development of new knowledge too (Score:5, Insightful)
Re: (Score:2)
I mean, that's not a bad thing either. I sometimes DO NOT want to learn "new to me" things. I've been contributing to ancient, but still used, software called Xastir. It's VERY OLD spaghetti code, low-level X11 with Motif. I DO NOT want to learn Motif. It's not a marketable skill or something I'll ever need. But I let the AI code a few contributions (one of them was replacing some parts with Cairo fonts for antialiasing on high-DPI screens, and the other was fixing a very old screen drawing routine that took 2-
Those Pull Requests (Score:2)
I received my first AI-generated pull request recently. It was... not great. A lot of extra code that was not necessary at all, some odd naming conventions, and the size of it all made the whole change set difficult to parse. This wasn't a typical "Well, this works and it's okay, it's just not the way I would do it." Some sections were legitimately terrible.
I have been using AI tools somewhat, but mostly to examine existing structures and answer questions. It's pretty good at that. But the code? I prefer to
that's your problem right there: (Score:2)
There's no way to evaluate whether that much code is well-written or secure
Sorry? Then you're not doing your job.
Pre-LLM developers didn't remember how to make, e.g., asm system calls either, and that's not brain rot but abstraction. LLMs introduce a whole new level of abstraction, but it's non-deterministic, so you can use as much LLM as you like and you still have to do your job. If you don't do your job, then you simply aren't a software developer; you're a vibe coder.
And vibe coders are fine, they can do damn cool stuff, but they aren't software developers and shouldn't be discussing ab
The Developer is dead - long live The Engineer (Score:1)
If using an LLM for coding is rotting your brain, then you likely were never using your brain, you were simply translating a requirement from one human language into software. That's accounting, not creating, and your brain has been rotting the entire time.
Seriously. Software "development" is little more than acting as a human requirements compiler, and that ship has sailed. Engineering, in any discipline, applying math and developing algorithms, is an endeavor that takes far more than "software devel
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
And in actual reality, LLMs cannot do "requirements compilation". That one requires General Intelligence.
People Seem to Forget Problems Existed Pre-AI (Score:2)
"There's no way to evaluate whether that much code is well-written or secure -- especially when hundreds of other programmers in the company are doing the same. ... We're building a rat's nest of tech debt that will be impossible to untangle when these models become prohibitively expensive (any minute now...)"
How did you not build a rat's nest BEFORE AI?
AI increases output: it magnifies existing issues, but it does not magically create new ones. I strongly suspect when you had hundreds of other programmers
Re: (Score:2)
That is really nonsense. With actual intelligence you get better at things and the tech debt gets smaller. With code reviews you evaluate not only the code but the coder. Not all juniors turn into competent coders, and you steer those into other paths.
None of that works for LLMs.
Brian Kernighan nailed this decades ago (Score:2)
"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."
But now it's not a matter of not being smart enough; it's that you're left with the exhausting, miserable work that never should have existed in the first place.
Re: (Score:2)
Indeed. As you get better as a coder, debugging may get harder but you need far less of it. LLMs killed that and, on top of that, produce "review resistant" code. I expect we will see a lot of LLM-caused burnouts in the next few years and that will reduce the number of desperately needed good coders even further.
Thoughts on AI (Score:2)
Re: Holding it wrong (Score:2)
In properly engineered systems, only about 10% of the work is rote coding.
So anyone who 10x's their output using an LLM should be fired.
Re: (Score:2)
That is a misuse of rote work. Rote work is there to let junior devs get into things and develop a general feel for them. If you are not slowly educating junior devs, you (or rather your organization) are doing it wrong.
As to "research new solutions": absolutely not. LLMs are really bad at giving necessary context, limitations, caveats and the like. At most, you should use an LLM to better find actual information sources.
Duh (Score:2)
It's like guiding a bunch of junior devs and correcting their mistakes on steroids, all day long, every day. F*#ing exhausting!
Laravel API? Not possible to go lower (Score:2)
the opposite is true for me (Score:2)
at my large company, we have a fantastic group
here's how we manage all of us using AI on our monolithic code base:
1: Our Jira tickets are extremely well specified, by both humans and now also vetted by AI.
2: Eng instructs the AI to look at the Jira ticket and make a plan.
3: A second AI is told to "critique this plan like you hate it"; you end up with a much better plan.
4: Create unit tests that fail on the current code but will pass when the bug is fixed or the feature is implemented. Create as many as you need to definitively pin it down; run all
Forgot how to implement a Laravel API... (Score:4, Insightful)
Dude, I've been writing code for 40 years. I've used so many different tools, stacks, libraries and APIs that at this point I don't remember any of them, and I haven't remembered them for years, and it doesn't matter at all. Sure, I have to look everything up, but that's fine, that doesn't matter. What matters is that I know when something looks wrong, or hard to maintain, or inefficient, or insecure, or... pick the axis. And I can dig in and find the problem. Anyone can tell if code works, that's easy. Understanding when and why it might break or otherwise impose additional costs, that's the real skill.
Which, as it happens, is exactly the skill you need to use an LLM effectively. Also the skill you need to understand legacy code, review colleagues' commits, etc., etc., etc. I used to say that the ability to read and understand code is an underrated skill, but an old friend corrected me at lunch a couple of weeks ago, saying that the ability to read and understand code is the most important software engineering skill, and always has been. Upon reflection, I agreed. And LLMs make this clearer than ever before.
Re: (Score:2)
+1 to this. And undue reliance on LLMs is the antithesis of being able to read and understand code, for the vast majority of LLM users. LLMs aren't designed to provide correct answers; they are designed to provide plausible answers. Therein lies the trap.
Bottom line, IMO: an LLM will help the good/experienced developer get things done faster, for a certain subset of problems. LLMs will hold back the inexperienced/novice developer, if not actually turn them into a liability.
As often the case, can be good if used properly (Score:2)
Our group has been experimenting with LLMs (I refuse to call them AI, because they're no such thing) on a reasonably large and extremely complicated code base. What we're finding is that while the LLM is often right, when it's wrong, it's plausibly wrong. That's problem #1: undue dependence on the LLM weakens the group's sense of "that's not the right answer," leading to bug churn.
Problem #2 is that a newer developer relying on LLMs for code writing or debugging misses out on the chance to develop that
Such a surprise (Score:2)
But these are smart people and you can only fool them for a while. And they start to notice that something is really, badly off. Good.
No way to evaluate? (Score:2)
We're being told to use [AI] agents for broad changes across our codebase. There's no way to evaluate whether that much code is well-written or secure -- especially when hundreds of other programmers in the company are doing the same
I mean, that's an easy one...use AI to do the evaluation!
It might be funny, if bosses weren't actually demanding this!
Dumb headline (Score:2)
Software engineers are managing Claude as a development team.
I suspect the problem is that software engineers usually make poor managers.
My mantra (Score:2)
I retired from software development in 2023. My mantra is:
I'm so glad I retired when I did. I'm so glad I retired when I did. I'm so glad I retired when I did....