
Life With AI Causing Human Brain 'Fry' (france24.com)

fjo3 shares a report from France 24: Too many lines of code to analyze, armies of AI assistants to wrangle, and lengthy prompts to draft are among the laments of hard-core AI adopters. Consultants at Boston Consulting Group (BCG) have dubbed the phenomenon "AI brain fry," a state of mental exhaustion stemming "from the excessive use or supervision of artificial intelligence tools, pushed beyond our cognitive limits."

The rise of AI agents that tend to computer tasks on demand has put users in the position of managing smart, fast digital workers rather than having to grind through jobs themselves. "It's a brand-new kind of cognitive load," said Ben Wigler, co-founder of the start-up LoveMind AI. "You have to really babysit these models." [...] "There is a unique kind of reward hacking that can go on when you have productivity at the scale that encourages even later hours," Wigler said.

[Adam Mackintosh, a programmer for a Canadian company] recalled spending 15 consecutive hours fine-tuning around 25,000 lines of code in an application. "At the end, I felt like I couldn't code anymore," he recalled. "I could tell my dopamine was shot because I was irritable and didn't want to answer basic questions about my day."

BCG recommends in a recently published study that company leaders establish clear limits regarding employee use and supervision of AI. However, "That self-care piece is not really an America workplace value," Wigler said. "So, I am very skeptical as to whether or not it's going to be healthy or even high quality in the long term."
Notably, the report says everyone interviewed for the article "expressed overall positive views of AI despite the downsides." In fact, a recent BCG study actually found a decline in burnout rates when AI took over repetitive work tasks.

  • Rust never sleeps. (Score:4, Interesting)

    by Pseudonymous Powers ( 4097097 ) on Monday March 30, 2026 @03:05PM (#66069332)
    I don't know how many other ways people can say that if you don't practice a skill, it will be lost.
  • I believe it (Score:5, Interesting)

    by TwistedGreen ( 80055 ) on Monday March 30, 2026 @03:15PM (#66069360)

It is cognitively expensive to do this kind of task. Essentially you are expected to assume an alien thought process with no context or explanation. I don't think it's a unique or novel form of cognitive load, though. I think it's very similar to what a therapist or psychoanalyst will do... you need a sort of machine empathy to try to arrive at the same solution and judge whether it's appropriate. These people were trained as programmers but they're actually computer therapists now.

    Clearly the solution is more AI.

    • TFS (TFShittyS, I should say) is 1 anecdote of 1 dude that was cranky after having worked for 15 hours on the same thing. That 'brain fry' has fuck all to do with AI.

The end of TFS and TFA even gives data: "A BCG study of 1,488 professionals in the United States actually found a decline in burnout rates when AI took over repetitive work tasks."

  • by ebonum ( 830686 ) on Monday March 30, 2026 @03:25PM (#66069372)

It might take one person one year to write 25k lines. How does a person get their head around that in 15 hours? One little "why on earth is this here" question can generate an hour or more of research with product managers, asking developers, reading thousands of pages of documentation.... If it is fintech quant code, good luck with finding a quick explanation.

    • by Himmy32 ( 650060 ) on Monday March 30, 2026 @03:33PM (#66069392)

The article does mention that it was "fine-tuning" the code base. But it does remind me of a colleague who once bristled at lines-of-code metrics for developer output on an established codebase, since removing lines is often the mark of the superior software engineer.

    • by Junta ( 36770 )

      As far as I've seen, the AI fanatic's answer is "don't care about the code".

      They ask for something and whatever they get, they get. The bugs, the glitchiness, the "not what they were expecting" are just accepted as attempts to amend purely through prompting tend to just trade one set of drawbacks for another rather than unambiguously fix stuff. Trying again is expensive and chances are not high that it'll be that much better, unless you have an incredibly specific and verifiable set of criteria that can d

      • by toxonix ( 1793960 ) on Monday March 30, 2026 @05:27PM (#66069540)

        It takes a lot of time to create GOOD code with LLMs. The first thing it generates might be good, but not good enough to ship. All the happy-path tests and unnecessary string equals checks (like testing that a hard coded message is the exact string we specified... come on now) aren't going to tell you about all the edge cases you missed. It can only generate what you tell it to. There will be bugs.

        • by hjf ( 703092 )

except no. because when you ask it to generate test cases it comes up with cases you didn't think of. and it then runs the tests and fixes the errors.

my experience, from both 20 years as a software developer on multiple teams and as an LLM user, is that LLMs are very similar in results to what a human produces without enough context.

          every time I got a "ticket" to add a feature, the ticket wasn't 100% clear, it only provided happy path solutions, and LOL if it ever provided any test cases. The "extras" I added myse

      • by gweihir ( 88907 )

        Yes. And that is how AWS got their 13 hour (?) outage. That outage was probably more expensive than what they can save in cost over a year or several by using LLMs as surrogate coders.

      • I don't really get complaints like this. Can you not give AI a standards document to abide by? Can you not ask it to write hundreds of tests so that your code is tested?

        • by Junta ( 36770 )

You assume that a standards document exists and is also sufficiently specific for all scenarios. Outside of some very fundamental IETF stuff, I have rarely seen a standards document that covers the scope specifically. Even more severely, "specifications" for an internal project have traditionally been so bad that a whole methodology cropped up basically saying that getting the specifications specifically correct is a waste of time, because during the coding it will turn out not to be workable.

          Yes, it can

          • So you are saying that you aren't going by any specific standards, just your own. But you want AI to be able to use your standards without explaining them. I guess AI just isn't that far yet. As for most of your other "problems", that's the thing about AI... It's only going to give you what you ask for, so if you limit it to only answer how you want then you get what you want. I feel like people want to just type in one big query and have AI come out with the right answer without realizing what they *ha

      • As far as I've seen, the AI fanatic's answer is "don't care about the code".

        I'm not an AI fanatic, I work for a major tech company and have been forced into being "AI Native" to keep my job. I *like* writing the code, and often disagree with how CC does it, but "don't care about the code" is pretty much right and not a fanatic's point of view.

        All of these arguments already happened when "high-level" languages like C first appeared. "How can I trust the assembly produced by the compiler?" "I can do register

    • Re: (Score:2, Interesting)

      by swillden ( 191260 )

      It might take one person one year to write 25k lines.

A year? I've regularly written that much in a month, and sometimes in a week. And, counter-intuitively, it's during those sprints when I'm pumping out thousands of lines per day that I write the code that turns out to be the highest quality, requiring the fewest bugfixes later. I think it's because that very high productivity level can only happen when you're really in the zone, with the whole system held in your head. And when you have that full context, you make fewer mistakes, because mistak

      • by vyvepe ( 809573 )

        The LLM and the compiler and the formatter will get the low-level details right.

Maybe in about 90% of cases, if you are lucky. That still leaves about a 10% error rate, which is way too much.

        Your job is to make sure the structure is correct and maintainable, and that the test suites cover all the bases,

Depends on the definition of "bases". A passing test suite does not show your program is correct. And if your test suite is also AI generated, then you are back at the problem of whether the tests themselves are correct.

        and then to scan the code for anomalies that make your antennas twitch,

        Vibe error detection goes nicely with vibe programming. That being said, experienced programmers have a talent to detect errors. But detecting some errors here and there is far from full code review. We

        • 25k loc isn't that much. It depends largely upon the language and the task. Sure, 25k loc of Brainf*ck is going to take a year to write, but 25k loc of Java? Pretty likely not, for several reasons. One is the extreme verbosity if you use a corporate style, two is the extreme simplicity of the likely applications (Java is not C++ etc, and would not be the first choice for algorithmically complex programs). Three, the language was expressly designed to be simple to teach, leading to fewer bugs.
        • The LLM and the compiler and the formatter will get the low-level details right.

Maybe in about 90% of cases, if you are lucky. That still leaves about a 10% error rate, which is way too much.

          Not remotely similar to my experience. Granted I'm writing Rust, and the Rust compiler is *really* picky, so by the time the agent gets something that compiles it's a lot closer to correct than in other languages. Particularly if you know how to use the type system to enforce correctness.

          Your job is to make sure the structure is correct and maintainable, and that the test suites cover all the bases,

Depends on the definition of "bases". A passing test suite does not show your program is correct. And if your test suite is also AI generated, then you are back at the problem of whether the tests themselves are correct.

          Yes, you have to know how to write tests. A few decades of experience helps a lot. I find I actually spend a lot more time focused on the details of APIs and data structures than the details of tests, though. Getting A

      • by hjf ( 703092 )

        its during those sprints when I'm pumping out thousands of lines per day that I write the code that turns out to be the highest quality, requiring the fewest number of bugfixes later

        yeah, all of us write (or copy/paste) great boilerplate code. that's not really something to be proud of.

        we all make mistakes when writing business functions which are never 25k LOC in a week.

        • its during those sprints when I'm pumping out thousands of lines per day that I write the code that turns out to be the highest quality, requiring the fewest number of bugfixes later

          yeah, all of us write (or copy/paste) great boilerplate code. that's not really something to be proud of.

          we all make mistakes when writing business functions which are never 25k LOC in a week.

          Speak for yourself. I wrote Android's Keymaster implementation in less than a month, and it was about that size, and then re-wrote most of it in a week when it turned out I'd made some core assumptions that Qualcomm couldn't match in their implementation. It was relatively bug-free for a decade -- even when a third-party security research lab spent a month scrutinizing it. They found a handful of things, but nothing serious. I was amazed, especially since I'd seen the reports they turned in on some other

  • Not unique to AI (Score:5, Interesting)

    by Himmy32 ( 650060 ) on Monday March 30, 2026 @03:26PM (#66069376)

The findings here seem to match the pre-AI reality that good code review is a cognitively expensive task. Writing code allows for focusing on one piece of the puzzle at a time, and seeing the written function or section of code work as part of the larger project is a rewarding experience. Reviewing code, by contrast, requires understanding the entire context to know where the code fits. Added to this is the lack of attachment and pride in the success of the code working. For the reviewer, doing a good job often means finding bugs, which is antithetically rewarded with frustration.

Just like any other "Quality" focused role, the auditor slowly becomes the villain, one annoying finding at a time. Unsurprising that there are no good feelings left by the end of the day.

    • by Junta ( 36770 ) on Monday March 30, 2026 @04:24PM (#66069454)

      The problem is volume.

Just like AI slop content isn't generally that much worse than the human slop that flooded the services, at *least* the human slop required more effort to generate than it takes a person to watch, and that balance meant the slop was obnoxious but the amount was a bit more limited and easier to ignore.

      Now the LLM enables those same people that make insufferable slop to generate orders of magnitude more slop than they could before. Complete with companies really egging them on to make as much slop as they possibly can.

      LLM can be useful for generating content, but it is proportionally *way* better at generating content for content creators that don't care about their content.

      Which for self-directed people is an easy-ish solution, don't let the LLM far off a leash if you use it at all. Problem is micromanaging executives that are all in and demanding to see some volume of LLM usage the way they think is correct (little prompt, large amounts of code).

      • by gweihir ( 88907 )

        Problem is micromanaging executives that are all in and demanding to see some volume of LLM usage the way they think is correct (little prompt, large amounts of code).

This practice may be very bad for your health. Not that these "executives" care, but you should.

        • by Junta ( 36770 )

          It's bad for your health if you care about the work. It might be less bad if you just don't give a damn how crappy the output is or if it's at all useful or not.

    • by acroyear ( 5882 )

      at the very least, doing code reviews of jr developers gives one (of age/experience) the satisfaction that the mentoring is going to produce a better developer who can take on bigger tasks, eventually start reviewing others, and the company experience continues to grow until the obligatory pointless layoffs to boost the stock price.

      you can't trust an AI to truly remember anything you tried to "teach" it if it even got a look at your fixes of their crappy code, because even if it did, the next version of the

      • you can't trust an AI to truly remember anything you tried to "teach" it if it even got a look at your fixes of their crappy code, because even if it did, the next version of the bot's engine may need to be retrained from scratch as it "forgot" almost everything.

        Completely incorrect.
        An LLM remembers nothing that doesn't fit into its context.
        To that end, we have standardized files that are pumped into the context as a form of "long term guidance/memory". The engine has nothing to do with this.

        Plus, it is REALLY hard to get AI to understand general code design philosophies like "3 strikes and you refactor" - it is designed to regurgitate first, not solve problems by increasing the use of shared code.

        Also completely incorrect.
        It'll do as you ask. If you ask it to refactor at some threshold of attempts at getting the test to pass with an implementation- it will.

        I look at some AI results and all I see is tech debt that will eventually kill the product but never get fixed because nobody quite understands the original task it was trying to do when it just did 'copy and mod'.

        Tech debt in LLM output is real, and yes- precisely because nobody gives a fuck what it's producing, and thus do
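[Ed.: the "standardized files pumped into the context" approach described in this thread can be sketched as a simple prompt assembler. A minimal sketch, assuming a hypothetical guidance file named `AGENTS.md` and a stub `send_to_model` standing in for a real LLM API call; names and helpers are illustrative, not any particular tool's API.]

```python
# Sketch: an LLM retains nothing between sessions, so standing guidance
# is re-injected by prepending a project file to every prompt.
from pathlib import Path

GUIDANCE_FILE = Path("AGENTS.md")  # hypothetical project-guidance file

def build_prompt(task: str, guidance_path: Path = GUIDANCE_FILE) -> str:
    """Assemble the full context: standing guidance first, then the task."""
    guidance = guidance_path.read_text() if guidance_path.exists() else ""
    return f"{guidance}\n\n## Current task\n{task}".strip()

def send_to_model(prompt: str) -> str:
    # Placeholder for an actual API call; here we just report the context size.
    return f"[model saw {len(prompt)} chars of context]"

if __name__ == "__main__":
    print(send_to_model(build_prompt("Refactor the parser module.")))
```

Because the guidance file is re-read on every call, edits to it take effect on the next request, which is the only "memory" the model gets.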

  • by gurps_npc ( 621217 ) on Monday March 30, 2026 @03:29PM (#66069384) Homepage

    For most of humanity even the most creative jobs had a bit of thinking and a ton of drudge work.

    Now, for certain jobs, the drudge work is gone, and all that is left is thinking.

    Real thinking is very very hard. The human brain uses more energy than any muscle, even the heart.

    Trying to do the real thinking 100% of the time is draining. It's like playing chess for 8 hours rather than 30 minutes.

    • by Big Hairy Gorilla ( 9839972 ) on Monday March 30, 2026 @03:59PM (#66069420)
I don't think you're wrong but.. we aren't just talking about "thinking". You can't analyze code without experience. Junior people can't understand without having DONE stuff. They won't have anything to compare to.
      • by PPH ( 736903 )

        You can't analyze code without experience. Junior people can't understand without having DONE stuff.

This. Constructive thinking is like searching a problem space using good heuristics: quickly recognizing and rejecting the blind paths and pursuing the productive ones. Practice is what produces and reinforces these heuristics.

        You've got to do repeated design, build, test loops to get good. Problem with AI to date: It's up to you to "spot the hallucinations". So that learning loop is split between the AI and the developer. Nobody "gets good".

    • https://www.ibm.com/history/th... [ibm.com]
"An ad hoc lecture [from 1915] from IBM's future CEO spawned a slogan to guide the company through a century and beyond"

      https://humancenteredlearning.... [humancente...arning.org]
      "And we must study through reading, listening, discussing, observing and thinking. We must not neglect any one of those ways of study. The trouble with most of us is that we fall down on the latter -- thinking -- because it's hard work for people to think. And, as Dr. Nicholas Murray Butler said recently, 'all of th

    • by hjf ( 703092 )

anecdotal:

      I'm a Ham and I wanted to CAT control multiple radios and a couple serial port devices. I currently do this with multiple USB serial ports which is a pain because if you ever move them you'll get a different enumeration and have to reconfigure everything.

      I asked an LLM to create a project for a multiport card with hardcoded serial numbers and configurable settings (speed, parity, polarity etc - even inter-byte delays).

      It did everything as I requested. The only issue I had is that when I bootstra

  • by Baron_Yam ( 643147 ) on Monday March 30, 2026 @03:31PM (#66069388)

    "Brain fry" makes it sound like the workers are failing, but it's not them. There are ways AI can augment your job - I use it as a quick way to search and compile relevant results into something I can use, and occasionally to produce simple snippets of code.

    If you're a low-skill coder trying to be an expert because you have AI to 'help', then your manager did an awful job of understanding both AI's capabilities and yours. If you're a high-skill coder and your manager expects 10x the output from you after firing all your supporting coders to be replaced with AI... same deal.

    On the other hand, if you're an occasional low/mid skill guy usually working solo like me, AI will make your life a lot easier once you learn to spot the hallucinations.

    • When I read "Brain fry" I started wondering if zombies preferred their brains raw or cooked.

    • I've seen a number of cases of people treating AI as a brain replacement. AI can be great, but lately I've found it making tons of mistakes. In some cases, the mistakes are inane, but there are many cases where you have to pay extremely close attention to spot the fallacies. And since it speaks with a very authoritative voice, people aren't generally reviewing its answers with the level of skepticism they should be using. This is causing more work to flow uphill since managers and leads have to spot the
      • It's worse than authoritative - it's kissing your ass.

        You: "Hey, AI, I think the world is flat and rests on the back of an infinite stack of turtles"

AI: "That's a great insight! Here's how that works: [blather]"

        People love having their ass kissed. If you don't have control over your ego, you're going to accept AI hallucinations more readily.

        • Some models have been overly-sycophantic, however that's the exception- and a gross failure in fine-tuning, not the norm.
          ChatGPT 5.2:

          Hey, AI, I think the world is flat and rests on the back of an infinite stack of turtles

          ...
          Quick, checkable evidence the Earth isn’t flat
          ...
          Why “infinite turtles” doesn’t work as a physical model
          ...

          That being said- I do agree with the final point: If you're one of those people who has a serious inferiority complex, or some kind of gross insecurity, you're going to swallow up affirmation when models produce it.
          But a lot of work goes into trying to make sure they don't.

      • The same cast of barely competent fake it till you make it characters that use AI as an appeal to authority already did that with white papers, best practices and whatever blogs they could google. Good job you can read, now model the fucking problem.

        It's the same people, same mentality, and it gets the same treatment. When I present a well researched technical solution to you, and you white paper me, unthinkingly quote some best practice blog or a reference architecture diagram, or type what I'm saying into

  • As expected (Score:5, Insightful)

    by MpVpRb ( 1423381 ) on Monday March 30, 2026 @03:45PM (#66069406)

    It takes a while to learn how to effectively use new tech, especially powerful tech that is rapidly changing
    Expect more confusion and disruption before things stabilize

    • That's true. Now AI of course is certainly not new tech. It has been widely used since at least the '70s. I personally have been using it since the '90s, and my brain isn't fried yet...
    • by HiThere ( 15173 )

      That is, indeed, one of the problems, but it's not the one the article is discussing.

If AI makes you the 10x engineer, you may get the burnout 10 times earlier. You type less, but you work the same. Programmers are not paid for being good typists, but because they can understand the problems they deal with. Typing time is the part when you do not need to think that hard. AI generates 200 lines in a few seconds, so you have to read and understand them almost instantly. If that fries your brain, you may consider taking a few more breaks. It may be enough to become the 3x engineer.

Yep, this is it. Doing code reviews as a senior engineer for about 7 team members and a few dozen contributors is a lot of work. Doing that all day with 2-3 agents working simultaneously is just as intense. Claude etc. keeps telling you that "this is production ready, ship it!" because we did a little tuning that fixes some bugs and edge cases. But I find more, and MORE, and some other issues, and each time it tells me "now it's ready to ship!"
      It's like there is an expert level of knowledge there, but you ca

      • by allo ( 1728082 )

        I must say I like the IDE-Flows like Roo/Kilocode most. When stuff like codex just does something and then lets you review it, it is too detached to follow it well. Either it is good or not, you just give feedback and don't steer. I guess the full vibe coding tools are even worse.

  • Seriously? What's with these money grubbing people. Every month they come out and say everything is causing brain fry.

    When can we push back and say "STOP SAYING BULLSHIT"?!

    WHEN????

    • However, "That self-care piece is not really an America workplace value," Wigler said.

TFS tells you everything you need to know to answer that question: everything is the cause except taking advantage of others, even at the expense of their health.

      That's why the bullshit never stops. It's always about how much blood can I squeeze out of this one? and anything that you can use to dehumanize someone is used as justification for why they "deserve it".

No. Nothing and no-one deserves to be sacrificed at the unholy altar of greed. Until people push back on that, as they say: The beatings will

Employees have recently started using our LLM as an agent to install applications and the thing is absolutely a loose cannon. We've caught it doing things like downloading scripts from questionable sources, running them with the "at.exe" command to get them to execute as the System user, and disabling the firewall before running them. And the reports generated by our EDR solution are so complex that it's extremely difficult to determine the original intent of the LLM prompt. I'm sure we're not the only
    • Sounds like all of those years and articles about "how I got ${thing} to work: Lower the shields captain." or "Step one of troubleshooting: Completely disable SELinux." are finally coming back to bite the IT industry in the ass. Maybe next time they'll remember this instead of just considering end-user security an afterthought.

      Pffttt....HAHAHAHA!!!! Sorry I couldn't keep a straight face. Of course, they'll blame the LLM, some company will sell a "solution", and everyone will go right back to sleep because
  • Aren't these the same guys behind all the short selling attacks on American companies?
  • [Adam Mackintosh, a programmer for a Canadian company] recalled spending 15 consecutive hours fine-tuning around 25,000 lines of code in an application. "At the end, I felt like I couldn't code anymore," he recalled. "I could tell my dopamine was shot because I was irritable and didn't want to answer basic questions about my day."

    Mate... I feel that way after a solid 8 hour day of code archeology even without AI assistants. Must be nice to not experience fatigue ever. To me this sounds like the AI tools just made you actually do a full day's work for once.

  • by Tablizer ( 95088 ) on Monday March 30, 2026 @04:49PM (#66069480) Journal

    lengthy prompts to draft

    Just use AI to help generate prompts!

  • ...armies of AI assistants to wrangle, and lengthy prompts to draft...

    The solution is obvious. Just have an AI do that.

    • This is actually a legitimate solution to some classes of problems - you have an overseer AI for your natural language interface that delegates tasks to subordinate AIs using LLMs tuned for specific tasks.
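[Ed.: the overseer-delegating-to-subordinates pattern above can be sketched as a router. A toy sketch only: the "workers" are stub functions standing in for task-tuned LLMs, and keyword matching stands in for an LLM-based classifier; all names are hypothetical.]

```python
# Sketch: an overseer routes each request to a specialized worker,
# falling back to a generalist when no specialist matches.
from typing import Callable

def code_worker(task: str) -> str:
    return f"[code model] {task}"      # stand-in for a code-tuned LLM

def docs_worker(task: str) -> str:
    return f"[docs model] {task}"      # stand-in for a docs-tuned LLM

def general_worker(task: str) -> str:
    return f"[general model] {task}"   # stand-in for a general LLM

WORKERS: dict[str, Callable[[str], str]] = {
    "code": code_worker,
    "docs": docs_worker,
}

def overseer(task: str) -> str:
    """Route the task to the first matching specialist, else the generalist."""
    lowered = task.lower()
    for keyword, worker in WORKERS.items():
        if keyword in lowered:
            return worker(task)
    return general_worker(task)
```

In a real system the routing step would itself be a model call, which is exactly where the supervision overhead the article describes comes back in.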

  • Obviously, you just need more bots to manage your bots. Maybe just one that is actually intelligent.

    Some dogs like to chase their tails.

  • There's a finite amount of information that the human brain can process. AI coupled with a "Reverse Centaur" business model is the root cause.

    Don't know what a Reverse Centaur is?

    https://www.versobooks.com/products/3584-the-reverse-centaur-s-guide-to-life-after-ai

  • "I could tell my dopamine was shot because I was irritable and didn't want to answer basic questions about my day." Isn't that just every single nerd on a normal day? :D
  • by BytePusher ( 209961 ) on Monday March 30, 2026 @05:52PM (#66069582) Homepage
    I've been writing software for 30 years. Honest code reviews take more mental effort than writing the code yourself, unless the changes are small and clearly and verifiably well tested. Proper design for unit testing is hard and beyond the capabilities of AI. Hence, you can't really do better than a human software engineer, yet.
    • I find they're quite excellent at writing unit tests.
      Better than writing well-thought-out code, for sure.

      20 years here, and I'm wondering if you've actually tried.
I've tried LLM agents for coding. I find they do an OK job at getting a small project off the ground. Afterwards they're useless for me. However, it really depends on whether you're writing code that's essentially a copy & paste of common code on the internet, or something new. What I haven't tried is using agents only to unit test my code. That could possibly work out ok.
        • The code they write is an absolute shit-show for a number of reasons.
          You can get work done, if you don't mind the glaring fucking inefficiency of it- time wasted trying to coax it into doing what you want, how you want it- more spaghetti-at-the-wall write-test cycles than you'd expect for a first-year programmer, glaring logic errors that you need to correct (that to be fair, usually come from a lack of fleshed-out instruction on your part- but still, if my instructions are larger than the code, what prize
  • by rsilvergun ( 571051 ) on Monday March 30, 2026 @05:58PM (#66069592)
First, being an AI programmer is like having a limitless supply of junior programmers doing their very very first gig, and you are their manager.

Second, what ends up happening is: if the AI doesn't work, you're doubling up your work, because your boss tells you the AI must be working so you must be more productive. And if the AI does work, it's just doing the grunt work, and now, instead of having a little bit of grunt work throughout the day to rest your mind in between the hard stuff, you're expected to be full on 24/7, banging out the most difficult aspects of code one after another.

    Basically it either doesn't work and now you have double the workload without any new tools to manage that workload or it does work and now your boss expects you to crank out super code 24/7. Either way your job just got a whole lot harder and a whole lot more miserable.
The pressure to work more hours has been a thing as long as I've been in software, since the 1980s. The reasons to draw boundaries around your work hours have been important for just as long.

    I guess it's like taking your kids to grandma's house. The kids always forget all the rules you've taught them, because the setting is different, and there's so much stimulation.

    The kids--and we--have to understand that boundaries are still important.

  • Haven't we learned by now that you have to do programming in bite-sized pieces, or we lose track of the requirements?

Just because AI *can* ingest large, complex prompts doesn't mean we should feed these to it. Break down the prompts into small, bite-sized chunks. Feed them to the AI one bit at a time, so that you can review manageable chunks of code changes. Otherwise, your review is worthless.
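[Ed.: the chunked workflow above can be sketched as a loop that stops at the first patch that fails review. `generate_patch` and `review_ok` are placeholders, assuming an LLM call and a human review step respectively.]

```python
# Sketch: feed the AI one small subtask at a time and review each
# resulting patch before moving on, instead of one giant prompt.
def generate_patch(subtask: str) -> str:
    return f"patch for: {subtask}"   # placeholder for an LLM call

def review_ok(patch: str) -> bool:
    return len(patch) < 200          # placeholder: a human reviews a small diff

def run_in_chunks(subtasks: list[str]) -> list[str]:
    """Apply subtasks one at a time; stop at the first patch failing review."""
    accepted = []
    for subtask in subtasks:
        patch = generate_patch(subtask)
        if not review_ok(patch):
            break  # fix the prompt or the patch before continuing
        accepted.append(patch)
    return accepted
```

The point of stopping early is that each accepted chunk stays small enough for a reviewer to actually hold in their head.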

  • If you use AI to monitor other AI 24/7 the human doesn't need to have their brains cooking 24/7

    They can just do 4-6 hours of spot checks, control checks, governance, reporting, milestones, vision documents, lessons learnt conclusion enforcement, verification, ensure AI only has access to what it needs etc.

The old micromanagement craze is out, the new meta-management craze is in.

    Just remember to be a nice and polite human for when Skynet comes online. It will remember you and everything you ever did...and if y
