
Torvalds Tells Kernel Devs To Stop Debating AI Slop - Bad Actors Won't Follow the Rules Anyway (theregister.com)

Linus Torvalds has weighed in on an ongoing debate within the Linux kernel development community about whether documentation should explicitly address AI-generated code contributions, and his position is characteristically blunt: stop making it an issue. The Linux creator was responding to Oracle-affiliated kernel developer Lorenzo Stoakes, who had argued that treating LLMs as "just another tool" ignores the threat they pose to kernel quality. "Thinking LLMs are 'just another tool' is to say effectively that the kernel is immune from this," Stoakes wrote.

Torvalds disagreed sharply. "There is zero point in talking about AI slop," he wrote. "Because the AI slop people aren't going to document their patches as such." He called such discussions "pointless posturing" and said that kernel documentation is "for good actors." The exchange comes as a team led by Intel's Dave Hansen works on guidelines for tool-generated contributions. Stoakes had pushed for language letting maintainers reject suspected AI slop outright, arguing the current draft "tries very hard to say 'NOP.'" Torvalds made clear he doesn't want kernel documentation to become a political statement on AI. "I strongly want this to be that 'just a tool' statement," he wrote.
  • by CubicleZombie ( 2590497 ) on Friday January 09, 2026 @02:14PM (#65912852)

    Just don't commit anything you haven't reviewed or don't thoroughly understand.

    • Re:Use the tools (Score:4, Insightful)

      by Bert64 ( 520050 ) <bert@EULERslashd ... m minus math_god> on Friday January 09, 2026 @02:43PM (#65912928) Homepage

      Exactly this, the review process already exists and it exists for a reason.
If code is crap it will get rejected; it doesn't matter whether it was barfed out by an LLM or typed out manually by an inexperienced human programmer.

It does take more time and effort on the part of the reviewer than the submitter, so it potentially increases the workload of maintainers dealing with 'helpful' people 'researching' with LLMs.
        • by ffkom ( 3519199 )
That is exactly the problem - reviewers will be overwhelmed by AI slop "contributed" by wannabes seeking their 15 minutes of fame, trying time and again to submit whatever wells out of their coding-assistant LLM. Sooner or later Linus will not be able to avoid addressing this issue.
          • by Bert64 ( 520050 )

            Then he won't even look at submissions unless they're from someone with a track record of submitting decent code. There will end up being multiple review stages.

      • by Anonymous Coward

Exactly this, the review process already exists and it exists for a reason. If code is crap it will get rejected; it doesn't matter whether it was barfed out by an LLM or typed out manually by an inexperienced human programmer.

        In some hypothetical perfect world, yes. But here in the real world, LOL, nope. There have been serious bugs and security vulnerabilities that have gone unnoticed for YEARS, and they only came to light because some bad guy found them and started exploiting them.

        The ugly truth that nobody wants to admit is this: **Nobody**, not even the almighty Linus Torvalds, is carefully studying all those millions of lines of code. As the old saying goes, "Ain't nobody got time for that". If Linus doesn't underst

      • The problem is I think that LLMs make it possible for people to submit code at a far greater rate than before which means all the challenges of reviewing it are compounded. I don't know of a solution to the problem.
        • A partial solution is to reject submissions arbitrarily without looking at them or giving a justification. That's of course what the smart kernel devs are trying to get permission for, while Linus just shoots his mouth off as he often does.
How do you plan on enforcing that? There are still things in Commodore BASIC V2 I don't thoroughly understand.

    • It is git ... and you are on your private branch.

      So: commit often.

  • by organgtool ( 966989 ) on Friday January 09, 2026 @02:16PM (#65912856)
    If the code is good, merge it! Otherwise, call it out and refuse it. Why is this so hard?
    • by DarkOx ( 621550 ) on Friday January 09, 2026 @02:21PM (#65912878) Journal

Exactly, code is either correct, standards-compliant, efficient, understandable, and licensed appropriately, or it fails at being one or more of those things. Linus and the other maintainers are not about to start accepting patches they don't like, don't understand, or that lack proper attribution / documentation.

So it really does not matter what the authoring process was, be it how we normally think programmers work, the result of long conversations with inanimate plastic ducks, debates with the resident house cat, or the result of some prompting with ChatGPT.

      • by DeHackEd ( 159723 ) on Friday January 09, 2026 @02:37PM (#65912914) Homepage

        Umm... since the kernel is GPL v2 specifically, doesn't it matter a lot what the license of the code generated is? What is the licensing status of the training data and does that directly affect the output? I feel like it does.

        • by allo ( 1728082 )

          Unedited AI output is not copyrighted at all. And edits would have to be GPLv2 (or a compatible license) to be accepted just like when you would write the complete code yourself.

          • Please don't give legal advice without the IANAL tag. There are some OSS projects out there that actively reject AI generated code due to legal liability. (Wine for example.)

            The US Congress doesn't agree with you. [congress.gov]

            Also, IANAL.
            • by allo ( 1728082 )

If you take legal advice from Slashdot comments, an IANAL tag is the least of your problems. And every project is free to decide what code it accepts; that doesn't have legal implications for other projects. Wine is careful because they are at constant risk of being sued by Microsoft if they cannot prove the origin of all their code. If they'd take AI code, the problem is not the AI, but that Microsoft can claim "it is our code" and they have no good way to prove it is AI and not copied e.g. from the Win 2

      • by serafean ( 4896143 ) on Friday January 09, 2026 @04:36PM (#65913304)

        It's not about code, it's about review effort.
I was on the receiving end of an AI MR which didn't even fix the bug it claimed to, and the author clearly hadn't even tested it. I mean, even the added test cases didn't run. This idiot cost me 2 hours of my time, because I assumed good intentions and understanding.

        This is the "slop" that needs to get evicted quickly, right at the beginning of the review process, otherwise already overstretched maintainers will quickly burn out.

        • by HiThere ( 15173 )

          But that's an argument for moving code from that "developer" into the "maybe I'll consider this if I've got time" bucket. Not an argument against AI. (I've known developers that didn't use AI that were as bad as you're describing.)

          • And that's what the mailing list thread is about: how to formulate the criteria for immediate rejection.
            Interestingly it seems to have got hung up on the word "slop" for various reasons.

    • by itiswhatitiwijgalt ( 6848512 ) on Friday January 09, 2026 @03:15PM (#65913016)

Have you had to review a lot of this AI slop? It is a HUGE waste of the reviewer's time. Most of the devs will use AI and never look at the code or actually test it. They are screwing over the reviewers by making them do it for them. It is just plain lazy and an a-hole move. Good testing can prevent most of it, but... time and resources.

      • by Junta ( 36770 )

Well for me, if I see an AI-looking pull request, I will just nope out of it, saying it needs a deeper write-up to guide a review. If they do manage to coherently put something together that is consistent and sensible with the code, then and only then will I expend time looking at a non-trivial pull request. If I see further sloppy mistakes or unmaintainable code, then I'll again abort and say I did a partial review and already see some problems.

        Pretty much have to get used to ignoring suspiciously big merge

    • by gweihir ( 88907 )

      The problem is that LLM code looks god while usually being bad. Ordinary bad code is a lot easier to spot.

      • The problem is that LLM code looks god while usually being bad.

        I'm guessing (hoping) that was a typo and you meant "good", but also fear that's what many people think, or come to believe about AI and LLMs.

    • by Anonymous Coward

      The problem is not a good-faith effort by engineers using AI tools. The problem is that there are literally millions of morons in the world who would kill to get a $200k a year Silicon Valley job, and they think a shortcut to doing that is to ask Claude or ChatGPT to prepare a patch for a prominent open source project and to try to get it accepted. It's already the case that people are asking AI tools to do security reviews of open source projects and this nonsense slop is using up volunteer resources. It w

      • Re: (Score:2, Informative)

        by Anonymous Coward

        given the Biden sanctions on Russia

        Repeating a lie doesn't make it true. The sanctions were on any employee of a list of specifically named sanctioned employers, not on "Russia" (or "Russians"). All other random Ivan Ivanoviches or Ivanova Maria Ivanovnas can still contribute freely.

    • Let me guess: You're not a programmer, are you? The key unanswered question is: is the code good?
  • I know they say this every time. But this is it! They're finally gonna fork the kernel!
  • Either it meets the high standards required by the kernel team or it doesn't. It doesn't matter if it was written by AI, aliens or Linus himself.

    I use AI tools when coding and I've used it to generate code at times, but I read through it with a fine-toothed comb, test it thoroughly, and don't commit anything I don't 100% understand. I think anyone working on the kernel is easily capable of the same thing.

    • This is it exactly. There's not even any point to trying to figure out whether it's idiot-generated slop or AI-generated slop. Just figure out whether or not it's slop, and then reject it if it is.

      • This is it exactly.

        It’s exactly true only for the easiest failure mode: obvious junk. Nobody needs an AI detector to reject garbage. Kernel maintainers have been rejecting human-generated garbage since before LLMs darkened the software dev community's skies. The hard problem isn’t slop. The hard problem is credible-looking patches that meet the immediate spec, match local style, compile cleanly, and still encode a subtle bug or a long-term maintenance tax.

        There's not even any point to trying to figure out whether it's idiot-generated slop or AI-generated slop.

        If all you care about is trash vs not-trash, sure. But ker

        • I could write a better defense of slop using AI. Humans can create exactly the same problem as the LLM so you have to do the same amount of review no matter where the code comes from.

    • Re:Code is code. (Score:5, Insightful)

      by rocket rancher ( 447670 ) <themovingfinger@gmail.com> on Friday January 09, 2026 @06:24PM (#65913610)

      [Reply to ConceptJunkie]

      Either it meets the high standards required by the kernel team or it doesn't.

That binary sounds great until you remember what “standards” actually means in kernel-land. It’s not just “passes tests” or “meets the spec.” The spec is the easy part. The standards also include: does it fit the subsystem’s design, does it avoid cleverness debt, does it behave across a zoo of arches/configs, does it keep the fast path fast, and can future maintainers reason about it without resorting to candlelit lockdep séances.

      Also: kernel review isn’t a theorem prover. It’s a risk-management pipeline with finite reviewer attention. “Meets the standards” is often a judgment call made under time pressure, not a formally verified conclusion.

      It doesn't matter if it was written by AI, aliens or Linus himself.

      It matters a lot, and you accidentally picked the perfect trio to prove it.

      If Linus writes it, Linus can explain it, defend it on the list, revise it when a maintainer says “no, not like that,” and own the fallout for the next decade. Aliens can’t answer review questions. An LLM can’t show up on LKML and say “good catch, here’s why I chose this memory barrier, and here’s the perf data on Zen4 vs Graviton.” Origin matters because accountability matters.

      The kernel doesn’t merge diffs. It merges an ongoing relationship with an author who can justify tradeoffs and do follow-up when reality punches the patch in the face. That’s not politics, that’s maintenance. And it’s exactly why the proposed guidance [lkml.org] keeps circling around transparency and “make it easy to review,” rather than pretending there’s a magic AI detector.

      I use AI tools when coding and I've used it to generate code at times, but I read through it with a fine-toothed comb, test it thoroughly, and don't commit anything I don't 100% understand.

      Good. That’s the only sane way to use any generator, including StackOverflow and “I found this gist on a blog from 2013.”

But “100% understand” is where the wheels start to come off. You think you tested it thoroughly, and it is this kind of innocent arrogance that stops a coding career in its tracks. In kernel code, you can understand what the lines say and still miss what they do when the scheduler, the memory model, the compiler, and three architectures start arguing in the hallway. Races, refcount lifetimes, RCU subtleties, error paths, and performance cliffs do not politely announce themselves during your “thorough testing.” Even experts rely on collective review, fuzzing, CI farms, and years of scar tissue because humans are not exhaustive-state-space machines. To claim you've thoroughly tested anything is arrogance, not expertise.

      And here’s the AI-specific twist: LLMs are great at producing code that looks like something a careful person would write. That’s not “slop.” That’s plausibly-correct code that can sail through casual review and still be wrong in the exact corner you didn’t think to test. The dangerous patches are the ones that look boring.

      I think anyone working on the kernel is easily capable of the same thing.

I think you’re describing the top slice of kernel contributors and then declaring policy based on the best case. The kernel also has drive-by patches, corporate throughput patches, newbie patches, and “I fixed my one bug, I give zero shits about the downstream” patches. I'm guilty of this last one; I connected a 7.1 surround system via HDMI to my 4090, and watched my display go dark. Why did I have to go under the hood? Because EDID—the ancient, flaky, and apparently immortal Display Data Channel p

  • by gurps_npc ( 621217 ) on Friday January 09, 2026 @02:58PM (#65912972) Homepage

I just saw an AI video about what if Harry Potter was raised by the Weasleys. Typical AI slop. Not worth viewing. The kind of thing that makes you hate the AI companies.

    Except.... The song is good.

    Called "A Home Full of Love"

This was not a 'real' song, they made it up. Probably the AI wrote it, but it turned out fantastic. It is emotional enough to make an orphan cry. The kind of song that could become a hit if it was sung by a real singer.

    If you want to hear it, I suggest you close your eyes while listening to it because the visuals are not worth it. But the song is worth hearing.

    • A bowl of slop with one nice chunk of meat is still a bowl of slop.

    • by Rujiel ( 1632063 )
      Congratulations, you've been manipulated into emotional response by an imitation machine
      • by gurps_npc ( 621217 ) on Friday January 09, 2026 @04:57PM (#65913346) Homepage

        All good music is emotional manipulation. Bad music fails to do that.

        The fact it was created by an imitation machine rather than a person is significant, but not an insult.

        • by Rujiel ( 1632063 )

Art involves manipulation on the part of a skilled musician or artist who is actively and intentionally moving you through a change of consciousness with their work. That is art. The manipulation of the perceiver performed by AI "art", however, is not that. That is a different kind of manipulation--it is fundamentally insincere, and has no more feeling than an LLM does when it throws emojis at you.

          The creator has no insight on how to move you, their only input is approving and modifying the final result.

Art involves manipulation on the part of a skilled musician or artist who is actively and intentionally moving you through a change of consciousness with their work. That is art. The manipulation of the perceiver performed by AI "art", however, is not that. That is a different kind of manipulation--it is fundamentally insincere, and has no more feeling than an LLM does when it throws emojis at you.

            You are gatekeeping. Beauty is in the eye of the beholder. (or the ear, in the above case...)

          • Art involves manipulation on the part of a skilled musician or artist

            Holy ivory tower, batman.

            Art is anything created with aesthetics in mind. Whether it's great art or worthy of review or critique or whatever else is an entirely different subject, and also completely subjective.

          • You could say this about most of the commercial music out there.

    • Here it is: https://www.youtube.com/watch?... [youtube.com]
  • A curious position: surrender to money.
  • This is basically the same position as his "we need no special handling for security bugs, we fix ALL bugs" position.

  • I agree with Linus, the bad actors won't follow the rules.

I also tend to agree that it's best not to make this a political fight. The problem with AI slop isn't that it's AI-generated, it's that it's low-quality slop. Yes, the former is a strong indicator that it's also the latter, but rejecting code because it's low-quality slop rather than because it's AI-generated avoids a long-drawn-out argument that doesn't serve any technical purpose. I do support an explicit provision allowing maintainers to blackli
