
4,600 Junk PRs: The Real Truth About AI Slop Hitting Godot

- 12 min read
- Author: Casey (GhostedAgain)
TLDR
Godot's GitHub has over 4,600 open pull requests, and maintainers say a growing chunk appear to be AI-generated. Lead maintainer Rémi Verschelde describes the situation as "draining and demoralizing." The problem is not unique to Godot. Blender, curl, Ghostty, and others are hitting the same wall. Generating a PR now takes seconds. Reviewing one still takes a real human. GitHub has started shipping new controls, and a community tool called Anti Slop reports a 98% detection rate (self-reported, not independently verified). Maintainer burnout is the real casualty, and funding more reviewers is the only honest fix on the table.
Table of Contents
- What's Actually Happening at Godot
- What "AI Slop" Actually Looks Like in Practice
- Godot Is Not the Only One
- The Core Problem: Costs Are Asymmetric
- What's Being Done About It
- What Maintainers Actually Need
- FAQ
- Conclusion
- Sources
Introduction
Rémi Verschelde has spent nearly a decade maintaining Godot, the free and open source game engine used to build games like Brotato and Until Then. He co-founded W4 Games to put commercial backing behind the project. As of mid-February 2026, he is publicly saying the project is struggling to survive its own contributor pipeline.
The reason is AI-generated pull requests. Not a trickle. A flood. And if it is not measurable, it is a vibe, so here are the numbers.
What's Actually Happening at Godot
The Numbers
Godot's GitHub had 4,681 open pull requests at the time Game Developer first reported on this, with Kotaku noting dozens are being denied or flagged as spam daily. A PR backlog of that size is not unusual for a major open source project. What is unusual is the reason it keeps growing.
Maintainers now evaluate every submission from a new contributor against a checklist they never had to run before: Was any of this written by a human? Does the author understand what their code does? Did they test it? Are the test results fabricated? That is four questions before anyone looks at the actual code.
What Verschelde Said
Verschelde posted about this publicly on Bluesky, and PC Gamer picked up the thread with direct quotes. His phrasing: the situation has become "increasingly draining and demoralizing." Maintainers are forced to "second guess every PR from new contributors, multiple times per day."
Godot has always made a point of welcoming contributors at all skill levels. That open-door policy is part of the project's identity. Now that same openness is what makes the project a soft target for the kind of volume it was never designed to handle.
Adriaan de Jongh, director of Hidden Folks, put it more bluntly in a quote captured by The Register: "a total shitshow." He described changes that make no sense in context, PR descriptions that are extremely verbose while saying almost nothing, and contributors who cannot answer basic questions about their own submissions.
What "AI Slop" Actually Looks Like in Practice
The term "vibe-coded," used by PCMag in their coverage, captures the dynamic accurately. A contributor pastes a prompt into an AI tool, takes whatever output appears, opens a PR, and hopes something sticks. They may not know what language the file is written in. They almost certainly have not run the code.
In practice, the signals maintainers report include extremely long PR descriptions that describe nothing of substance, code changes that are syntactically passable but contextually meaningless, and multiple near-identical PRs from the same contributor opened in quick succession. Test results that appear to have been generated rather than run are also showing up.
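Those signals translate directly into heuristics. As a hedged illustration of how a triage pass might weigh them, here is a minimal sketch; the field names, thresholds, and scoring are hypothetical, not any project's actual tooling:

```python
# Hypothetical triage sketch built on the signals described above.
# Field names and thresholds are illustrative, not any real tool's rules.

def slop_score(pr: dict, recent_titles: list[str]) -> int:
    """Return a rough suspicion score; higher means review with more caution."""
    score = 0
    desc = pr.get("description", "")

    # Signal: very long description with no code or diff context in it.
    if len(desc) > 1500 and "```" not in desc:
        score += 2

    # Signal: near-identical PRs from the same author in quick succession.
    title = pr.get("title", "").lower()
    if sum(1 for t in recent_titles if t.lower() == title) >= 2:
        score += 3

    # Signal: claimed test results with no CI run attached to back them up.
    if "all tests pass" in desc.lower() and not pr.get("ci_run_id"):
        score += 2

    return score

# Example with made-up data: long boilerplate description, duplicate titles,
# unverified test claim.
pr = {"title": "Fix null check", "description": "all tests pass " * 200}
print(slop_score(pr, ["fix null check", "Fix null check", "Fix Null Check"]))
```

None of these checks is decisive on its own; the point is that a score only routes a PR to closer human scrutiny, it does not auto-reject.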
As for motivation, the community has no verified answer. Theories circulating on Hacker News and Reddit include clout farming, resume padding, and attempts to hit bounty program payouts. These are community speculation, not confirmed data. Worth keeping that distinction clear when the conversation gets heated.
What is confirmed: the behavior is real, it is spreading, and the people on the receiving end are done minimizing it. They are naming names and citing sources.
Godot Is Not the Only One
Who Else Is Getting Hit
Godot is the name in the headline, but it is not the only project in trouble. This is a structural shift hitting open source broadly.
GitHub's own February 2026 blog post called it "the Eternal September of open source." The Eternal September reference comes from the early internet, when AOL opened Usenet to consumer users and the resulting wave of newcomers overwhelmed established community norms. An influx of contributors who do not know the norms, and who may not even be the ones writing the code, is changing what these communities look like from the inside.
Specific examples from The Register's coverage and the GitHub post:
- Blender proposed a formal AI contributions policy.
- curl ended its bug bounty program after, according to The Register, AI-generated security reports overwhelmed it to the point where it stopped being useful.
- Ghostty moved to an invitation-only contribution model.
- Linux Foundation, Fedora, Firefox, Servo, and LLVM are all adopting or updating explicit AI policies.
- Gentoo has been exploring alternatives to GitHub, including Codeberg, with community members citing concerns about GitHub's AI integration.
Chet Faliszek, a writer who worked at Valve, summed up the frustration on social media, as reported by Kotaku: "Really is just exhausting to watch all this play out and GitHub promoting this, not fighting it."
That tension is not accidental. GitHub owns Copilot. Copilot is one of several AI coding tools that make generating code trivially easy, which feeds the dynamic maintainers are describing. If you want receipts on how GitHub Copilot stacks up against the competition, we ran the numbers for 30 days. That is a different article.
The Core Problem: Costs Are Asymmetric
GitHub named the structural issue plainly in their February post: pull requests can now be generated in seconds with AI. The cost to create has dropped. The cost to review has not.
That asymmetry is the actual problem here. It is not a Godot problem or a GitHub problem in isolation. It is a coordination failure that emerges when the cost to produce something collapses faster than the cost to evaluate it.
Before AI-assisted coding, submitting a pull request required the contributor to understand enough to write the code themselves. That friction served as a rough quality filter. The friction is gone on the creation side now. Review still requires a human who knows the codebase, understands the change, can spot the mistakes, and has time to spend.
Most open source maintainers are volunteers, or funded well below the scope of what they actually maintain. Verschelde acknowledged that using AI to detect AI slop would be "horribly ironic," but he also said it might become unavoidable. That is a statement about how bad things already are, not a solution.
What's Being Done About It
GitHub's Response
GitHub announced a set of features in February 2026 aimed directly at this problem. Already shipped: repo-level PR controls, pinned comments, and temporary interaction limits. Coming soon: PR deletion from the UI, and criteria-based gating that requires a linked issue before a PR can be opened.
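Projects that do not want to wait for the built-in gating can approximate the linked-issue requirement today. A minimal sketch of the check logic follows; it leans on GitHub's documented issue-closing keywords ("fixes #123", "closes #123", "resolves #123"), but the regex and the enforcement convention are this sketch's assumptions, not GitHub's implementation:

```python
import re

# Matches GitHub's issue-closing keywords followed by an issue reference,
# e.g. "Fixes #1234" or "closes owner/repo#56". A sketch of criteria-based
# gating, not GitHub's own code.
LINKED_ISSUE = re.compile(
    r"\b(close[sd]?|fix(e[sd])?|resolve[sd]?)\s+([\w.-]+/[\w.-]+)?#\d+",
    re.IGNORECASE,
)

def has_linked_issue(pr_body: str) -> bool:
    """True if the PR body references an issue with a closing keyword."""
    return bool(LINKED_ISSUE.search(pr_body or ""))

print(has_linked_issue("Fixes #1234: null deref in Node::get_child"))  # True
print(has_linked_issue("Refactored some code, trust me"))              # False
```

Wired into CI, a failing check like this blocks the PR until the author ties it to a tracked issue, which is exactly the friction the gating feature is meant to restore.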
GitHub acknowledged the problem publicly in their February 2026 blog post. The platform is naming the issue and building toward tooling. Whether the pace matches the scale of the inflow is a different question.
If you have had your own issues with GitHub as a platform, this story connects to a longer pattern. Some projects are actively considering leaving, and the reasons go beyond AI slop.
Community Tools
The community is not waiting for GitHub. A developer created the Anti Slop GitHub Action, an open source tool that runs 22 check rules across PR branches, title, description, file changes, and contributor history. It has 44 configurable options and runs in under 15 seconds.
The tool was battle-tested on Coolify, a 50,000-plus-star project that was receiving over 120 slop PRs per month. The developer reports a 98% detection rate, though this is self-reported and has not been independently audited. Checks include account age, global merge ratio, blocked commit author lists, and honeypot traps designed to catch AI-generated submissions.
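The honeypot idea is the least obvious of those checks, so it is worth unpacking. One common pattern, shown here as a generic sketch rather than the Anti Slop action's actual rule set, is to hide an instruction in the PR template that human contributors are told to delete; a fully automated submission pasted through without being read tends to leave it in:

```python
# Generic honeypot sketch, not Anti Slop's actual implementation. The PR
# template asks humans to delete the marked line before submitting; a body
# that still contains it was likely never read by the contributor.

HONEYPOT = "<!-- contributors: delete this line before submitting -->"

def tripped_honeypot(pr_body: str) -> bool:
    """True if the hidden template marker survived into the PR body."""
    return HONEYPOT in (pr_body or "")

print(tripped_honeypot("Great fix!\n" + HONEYPOT))        # True
print(tripped_honeypot("I deleted the marker as asked."))  # False
```

Because the marker is an HTML comment, it is invisible in the rendered PR description, which is what makes it a trap rather than an instruction.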
Some projects are taking a harder line. Ghostty moved to invitation-only contributions. That protects the project, but it also closes the door on the kind of new-contributor growth that has historically been one of open source's structural advantages. It is a real tradeoff.
What Maintainers Actually Need
The answer that keeps surfacing in every serious conversation about this: money. Not a clever algorithm. Not a GitHub feature flag. Money to hire actual reviewers who can handle volume without burning out.
The tools being built are useful, and a self-reported 98% detection rate is genuinely strong if it holds at scale. But detection tools solve the filtering problem, not the morale problem. Maintainers who spent the last several months reviewing a parade of AI-generated submissions already absorbed that cost. It is cumulative and it does not reset when the tooling improves.
Platform norms matter too. When submitting AI slop becomes socially costly, fewer people will do it. Right now, the full cost sits on the receiving end. The tooling being built can shift some of that burden back. Community culture can reinforce it.
If you are a developer thinking about where AI coding tools fit into your actual workflow versus what is being described here as abuse, the distinction matters. Using AI to write code is not the problem. Using AI to spam code you do not understand at people who do is the problem.
FAQ
What exactly is "AI slop" in the context of open source pull requests?
AI slop refers to code contributions generated by AI tools and submitted by contributors who often cannot explain what the code does or whether it has been tested. The "slop" descriptor comes from the low-effort, high-volume nature of the submissions. They are typically syntactically passable but contextually broken, and they require real human attention to evaluate and reject.
Why is Godot specifically in the news for this problem?
Godot is a high-visibility open source project with an active community and a well-known commitment to welcoming new contributors. That combination made it a visible example. Rémi Verschelde, who has nearly a decade of work on the project, went public about the situation on Bluesky in February 2026 and the story spread from there. Godot may not be the worst-affected project, but it became the named case.
Is it wrong to use AI to write code contributions?
Using AI to help write code is not inherently a problem. The problem is submitting code you do not understand, have not tested, and cannot defend. That wastes maintainer time regardless of whether a human or an AI wrote the first draft. The standard is the same as it has always been: submit work you can stand behind and explain.
What can project maintainers do right now to reduce AI slop PRs?
Short-term options include enabling GitHub's repo-level PR controls, deploying the Anti Slop GitHub Action, adding a CONTRIBUTING.md that explicitly requires authors to test and explain their changes, and requiring a linked issue before a PR can be opened. Longer-term, funding additional reviewer capacity is the only option that scales without closing off new contributors entirely.
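As one concrete starting point, a CONTRIBUTING.md section along these lines makes the expectations explicit. The wording here is a hypothetical example, not Godot's or any project's actual policy:

```markdown
## Before you open a pull request

- Run the change locally and describe what you tested, with real output.
- Be ready to answer questions about your code. If you used AI tooling,
  you are still the author: submit only work you understand and can defend.
- Link the issue your PR addresses. PRs without a linked issue may be
  closed without review.
```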
Has GitHub actually fixed this, or are they just talking about it?
Both, to different degrees. GitHub has shipped some tools and announced more. The Eternal September blog post from February 2026 shows they are treating this as a real platform problem. Whether the pace and scope of the response will match the scale of the inflow is still an open question. The community is not holding its breath.
What happened to curl's bug bounty program and why does it matter here?
curl ended its bug bounty program because AI-generated security reports had become so numerous that the program's costs outweighed its value. Bug bounties pay for valid reports, which creates a financial incentive to submit as many as possible. When submission is nearly free and detection is imperfect, the bounty pool gets drained on junk. curl's decision to shut down entirely is a preview of what happens when filtering fails and the costs stay entirely on the receiving end.
Conclusion
Open source has survived a lot of disruptions. The browser wars, enterprise extraction, GitHub centralization, and the cloud-service capture debate. This one is different in a specific way: the disruption is coming from inside the contribution pipeline.
The tools being built are real. GitHub's new controls are a start. The Anti Slop action has a strong detection record. The conversation is naming the behavior and making it more visible. These things matter and they are moving in the right direction.
What they do not fix is the time and trust already spent. Every maintainer who processed a wave of slop submissions before those tools existed paid a real cost. For a community built largely on volunteer labor, that cost compounds over months.
Funding maintainers is not a glamorous answer. It is the right one. If Godot, curl, Ghostty, or any project you depend on is in this position, the most direct thing available is financial support. The code is free. The people reviewing it are not.
Related Reading
- Cursor vs GitHub Copilot: 30 Days of Testing
- GitHub Nuked My Account at Midnight During Alpha Release
- Best AI Code Assistants in 2026
- AI vs Human Designers: When Each Makes Sense
Sources
- PC Gamer: Open-source game engine Godot is drowning in 'AI slop' code contributions (Feb 17, 2026)
- Game Developer: Godot veteran says 'AI slop' pull requests have become overwhelming (Feb 2026)
- The Register: Godot maintainers struggle with 'draining and demoralizing' AI slop submissions (Feb 18, 2026)
- PCMag: Godot Game Engine Is Drowning in Vibe-Coded AI Slop Contributions (Feb 2026)
- GitHub Blog: Welcome to the Eternal September of open source (Feb 12, 2026)
- Anti Slop GitHub Action
- Kotaku: Indie Game Engine Manager Says Getting Bombarded With AI Slop Is 'Increasingly Draining And Demoralizing' (Feb 2026)