PlayStation 3 Emulator Devs Politely Ask Contributors to Stop Submitting 'AI Slop' Pull Requests (kotaku.com)
Open-source PS3 emulator RPCS3 "has been around since 2011," Kotaku notes, and has made 70% of the PlayStation 3's library fully playable, "bolstered in part by the many users who contribute to its GitHub page." But their dev team "took to X today to very kindly and civilly request that users 'stop submitting AI slop code pull requests' to its GitHub page."
Then they immediately proceeded to tell the AI-brain-rotted tech bros attempting to justify their vibe-coding nonsense to kick rocks in the replies, which is somewhat less civil but far more entertaining to read...
My favorite one was when someone asked how the team was certain they weren't rejecting human-written code, to which RPCS3 replied: "You can't possibly handwrite the type of shit AI slop we have been seeing."
Idiots with computers (Score:3, Insightful)
Are still idiots. LLMs do not change that.
Re: (Score:2)
Thanks for delivering evidence for my claim. Also thanks for demonstrating the LLM cult members have no reasoning capability left.
As added information, because you do not seem to have noticed: Slashdot does not support UTF-8. You probably do not understand what that means, though.
AI Slop (Score:2)
Presumably if the code produced is high quality, they will be willing to accept it.
Re: (Score:1)
Note they didn't ask people to stop submitting AI code. They asked people to stop submitting AI slop.
Presumably if the code produced is high quality, they will be willing to accept it.
And that sounds pretty reasonable, TBH.
If the code is good, then we shouldn't care where it comes from.
Re: (Score:3)
If you're a good enough dev to determine that the output of an LLM is valid and worth submitting as a change, you're likely good enough to not need an LLM to begin with. No one should be submitting AI-generated pulls.
Re: (Score:2)
What other tools should we not use? Should we chuck IDEs, compilers, and computers and just whack out machine code on typewriters?
Re: (Score:2)
That sounds reasonable at first glance. At second glance you need to ask who will be expected to maintain it and answer questions about it. At that point, AI code is still a liability.
There probably is a middle ground with an LLM*, along the lines of "How do I do a shell sort?" and then looking at their example code and refactoring it. But simply cutting and pasting or, worse, asking Claude to write it, is bad, regardless of the quality of the code at the end of it.
(* Ignoring, for a second, the wider issue
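The "ask how to do a shell sort, then refactor the example" middle ground described above might look something like this in practice: a minimal Python sketch of shell sort, the sort of thing you'd get from an LLM and then clean up yourself before it ever went near a pull request. The gap sequence and naming here are illustrative choices, not anyone's official implementation.

```python
def shell_sort(items):
    """Sort a list in place using Shell's original gap sequence (n/2, n/4, ...)."""
    n = len(items)
    gap = n // 2
    while gap > 0:
        # Gapped insertion sort: after this pass, the list is gap-sorted.
        for i in range(gap, n):
            current = items[i]
            j = i
            # Shift gap-spaced elements right until the insertion point is found.
            while j >= gap and items[j - gap] > current:
                items[j] = items[j - gap]
                j -= gap
            items[j] = current
        gap //= 2
    return items
```

The point of the exercise isn't the algorithm itself; it's that whoever submits it can explain every line, which is exactly the maintainability bar the comment above is drawing.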
Re:AI Slop (Score:5, Interesting)
If AI produced high quality outputs, people wouldn't complain about AI use, because there would be no way to tell things are produced by AI. They complain because they can tell, because it's largely garbage.
I have been working extensively through Claude Code, with usage paid by my job, and if you want anything mildly serious that isn't just webapp slop reinventing the same onboarding page over and over, it takes poring over pretty much every single thing it produces to make sure it didn't just fuck it all up. You have to handhold it at every step of the way, constantly building up validation pipelines. It's still worth it to me, and I suppose my boss, because it allows me to rapidly begin developing with fairly specific tooling I have very little experience with, and I can just get an MVP out of the door in days instead of spending months on learning the proper way. But it's not without making short-term sacrifices to code quality and accepting that it's a shortcut producing tech debt that'll come due later. It has a purpose, but people firing that janky-ass code they don't intend to ever maintain into someone else's codebase must be insanely infuriating.
Re: (Score:2)
I have been working extensively through Claude Code, with usage paid by my job, and if you want anything mildly serious that isn't just webapp slop reinventing the same onboarding page over and over, it takes poring over pretty much every single thing it produces to make sure it didn't just fuck it all up.
Hear hear! These are well spoken facts.
You have to handhold it at every step of the way, constantly build up validation pipelines.
And indeed here.
It's still worth it to me
And surprisingly...here too.
For all its faults, I too find myself using it constantly. Mainly because it allows me the freedom to "try it and see" for certain design decisions or to apply a blanket refactoring to old code bases that no other refactoring tool could possibly do. Or to just crank out scripts that technically work and get you the parsed results you need but you'd never try to maintain beyond the immediate need.
Those, and others, are the
Re: (Score:2)
People would complain, just not straight away. Poor quality slop is only one problem with AI generated code. There's also the issue of someone understanding it enough to maintain it later.
Re: (Score:2)
It's probably the comments with em dashes
Re: (Score:2)
If there is a human who can explain every line, I'd not consider it pure AI code. Even when it's unchanged output, you're the one taking responsibility. The problem is pull requests whose content nobody actually understands.
aaaand now I'm curious (Score:2)
Can we have examples of the AI slop that couldn't possibly be human created?
When it goes wrong, how bad can it get, and what's it trying to do?
Examples (Score:4, Informative)
Can we have examples of the AI slop that couldn't possibly be human created?
When it goes wrong, how bad can it get, and what's it trying to do?
It's open source, just go look: https://github.com/RPCS3/rpcs3... [github.com]
First example I found (Page 1) https://github.com/RPCS3/rpcs3... [github.com]
Another one (Page 7) https://github.com/RPCS3/rpcs3... [github.com]
Re: (Score:3)
I think it's also strongly due to the process that AI PRs are being rejected. For one, they usually have:
Little utility
Overly invasive changes
Bloviated PR text that sounds legit but is absolutely full of shit and self-justifying
Outright lies on metrics or testing
And an author that feels offended you "caught them" and degenerates into a huge name-calling event or a "no I didn't" "yes you did" "OK, I did a little" "no, you did a lot" "OK, fine, I did a lot, but it's still worthwhile" "no it's not, your patch to
Re: (Score:2)
We've had AI slop reports in a project I contribute to. The first one we got was a security vulnerability report, "you're not protecting against X, you need to apply countermeasures", and then listed the recommended countermeasures, which was a rephrased copy of the list of countermeasures that we were already applying.
Things haven't improved since then. The only real change is that it's now a lot harder to pick holes in the slop than it was for that first one, because the slop extruders are getting better.
Re: (Score:2)
Well, GitHub seems to be throwing a fit, but I can say from my experience:
- Uselessly verbose. One AI pull request I was asked to review was aiming to make a minor adjustment to the layout of a single webui element. The pull request was hundreds of lines of CSS, because the LLM just started firing the random-bullshit CSS cannon, often repeating itself, probably because the operator said "no, it's still messed up" until by some miracle the one change he wanted finally appeared, alongside a whole bunch
Programming graffiti (Score:4, Insightful)
Re: (Score:3)
I have a friend who typed a sentence or two into an AI song generator, and he couldn't wait to show me the sentimental song "he wrote" that "helped him put down the music in his head."
So perhaps these people heavily using AI have somehow gaslit themselves into this fantasy that they're really connected to this ghost in the machine and they're legit programmers doing good work
Re: (Score:2)
Getting their name on a project they are a fan of is a big factor
Also, they *truly* think they can *finally* be useful without having to actually understand things. They think a code request will be accepted more readily than they ever got attention on feature requests or behavior changes. They think their willingness to let CodeGen go nuts is a differentiator over the developers who might even be using the same CodeGen tools, but with more care. If nothing else, they see tokens as almost like curren
It’s getting useful. (Score:2)
I needed to create a program to retrieve solar generation plants, select a plant from a GUI, enter date ranges and cost info, then pull daily usage data from inverters and meters. The API inconsistently mapped endpoint names, making it hard to find where the data lived.
Claude Code sorted it all out in about an hour. It built test harnesses to verify the correct endpoints. It built the GUI and good-looking Excel outputs.
It built a really well documented .md and repo for checking into our internal gitlab.
I haven’t
Been in the same situation with a Jr Dev (Score:2)
1. Formatting: no human formats code in such an unreadable structure.
2. Complexity: out of the X ways to write something, you picked the machine-like way, every time?
3. Obscurity: you don't understand what the code is doing, because I don't, no one does, it's terrible!
4. Lack of useful comments: why didn't you comment the extremely complex thing you did?
5. Structure: only a machine would structure the files and layout the way they are.
6. Speed, you generat