AI slop is ripping up the social contract between maintainers and contributors essential to open source development. Practitioners have been repeatedly assured that AI would supercharge their communities, but so far that hasn’t been the case. Just look at what happened last month. Mitchell Hashimoto’s Ghostty implemented a zero-tolerance policy where submitting bad AI-generated code gets you permanently banned. Steve Ruiz, Founder of tldraw, announced he would auto-close all external pull requests. Meanwhile cURL, the humble command-line tool that quietly powers approximately everything on the internet, just shut down its bug bounty program. After six years and $86,000 in payouts, Daniel Stenberg, founder and lead developer of cURL, pulled the plug. The reason? An AI onslop (pun fully intended).
Why is this a big deal? Because the era of “open contribution” as we know it might be coming to an end. For decades, there was a more or less implicit contract between open source contributors and maintainers. By agreeing to participate, contributors are rewarded by learning, building their resume, advancing their company’s objectives, and becoming part of something larger than themselves. Maintainers, meanwhile, agree to foster a community, guide the project, and mentor contributors. This contract meant that both parties could expect some bad PRs along the way, but historically bad PRs were expensive to create. Writing code took time. Understanding a codebase took effort. The very act of making a contribution (mostly) filtered out people who weren’t serious.
AI broke that filter. Now anyone can generate plausible-looking contributions with zero understanding and zero effort. The volume has overwhelmed the system. As Seth Larson, security developer-in-residence at the Python Software Foundation, put it:
> I am concerned mostly about maintainers that are handling this in isolation. If they don’t know that AI-generated reports are commonplace, they might not be able to recognize what’s happening before wasting tons of time on a false report. Wasting precious volunteer time doing something you don’t love and in the end for nothing is the surest way to burn out maintainers or drive them away from security work.
In this post I discuss the state of open source in the era of AI slop, and how maintainers, many of whom are already burned out, are coping. I look at several projects that have taken a stand, as well as representative generative AI policies coming out of prominent OSS foundations. I conclude with some general suggestions for moving forward.
## What is “AI Slop”?
Before we go further, let’s define terms. AI slop isn’t just bad code. The open source community has been dealing with low-quality contributions since forever. Just ask Linus Torvalds. He first announced Linux to the comp.os.minix newsgroup in 1991, and he’s been sending flame mails ever since. AI slop, in OSS terms, is what happens when someone pastes a GitHub issue into ChatGPT, hits enter, and submits whatever comes out without checking if it, you know, works. It’s bug reports that look legitimate at first glance but describe vulnerabilities that don’t exist. It’s pull requests that claim to fix problems the project doesn’t have. It’s vibe-coded patches that feel plausible but contain hallucinated assumptions, or just crappy code.
Craig McLuckie, Co-Founder and CEO of Stacklok, had this to say about “vibe coded contributions”:
> It used to be that we could mark Github issues as ‘good first issue’ and ambitious young engineers would show up, cut their teeth on the issue and find their way to become contributing members of the community. It was good for us, and it was good for them, and most significantly it was good for the community.
>
> Now we file something as ‘good first issue’ and in less than 24 hours get absolutely inundated with low quality vibe coded slop that takes time away from doing real work. This pattern of ‘turning slop into quality code’ through the review process hurts productivity and hurts morale.
Many in the community are brainstorming and prototyping solutions. Hashimoto suggests a `git blame` equivalent for AI-generated contributions in order to expose “true expertise or slop.” Chad Metcalf, CEO at Continue, who has thought deeply about this issue, created Leeroy, a tool for “transparent attribution for AI-assisted code contributions,” to address it. Andrey Vasnetsov, Co-Founder and CTO of Qdrant, merged “Add enable_hnsw option for payload field schema” to help maintainers sort through poor external contributions and slop specifically.
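A “git blame for slop” would need some attribution signal to blame against. Here is a minimal sketch of what that could look like, assuming contributors disclose assistance via commit trailers such as “Generated-by:” or “Assisted-by:” (the conventions some foundations recommend, discussed below). This is a hypothetical illustration, not how Leeroy or Hashimoto’s proposal actually works:

```python
#!/usr/bin/env python3
"""Hypothetical "git blame for AI" report: counts, per author, how many commits
carry an AI-attribution trailer such as "Generated-by:" or "Assisted-by:".
Assumes contributors actually disclose; it cannot detect undisclosed AI use."""

import subprocess
from collections import Counter

AI_TRAILERS = ("generated-by:", "assisted-by:")

def ai_attribution_report(repo_path: str = ".") -> None:
    # %an = author name, %B = raw commit body; %x00 and %x01 act as field/record separators
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--format=%an%x00%B%x01"],
        capture_output=True, text=True, check=True,
    ).stdout

    total, flagged = Counter(), Counter()
    for record in log.split("\x01"):
        record = record.strip()
        if not record:
            continue
        author, _, body = record.partition("\x00")
        total[author] += 1
        if any(trailer in body.lower() for trailer in AI_TRAILERS):
            flagged[author] += 1

    for author, commits in total.most_common():
        print(f"{author}: {flagged[author]}/{commits} commits disclose AI assistance")

if __name__ == "__main__":
    ai_attribution_report()
```

The obvious limitation is the one every disclosure scheme faces: it only surfaces contributors who are honest about their tooling.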
There is now an order-of-magnitude shift in the volume of garbage flowing into maintainers’ inboxes. It has been growing steadily over the past couple of years, and for many, it seems, January 2026 was the month the dam broke.
## Policy Evolution
See the Pen <a href="https://codepen.io/kholterhoff/pen/WbxzjYq">Untitled</a> by Kate Holterhoff (<a href="https://codepen.io/kholterhoff">@kholterhoff</a>) on <a href="https://codepen.io">CodePen</a>.
Maintainer stances on AI submissions continue to evolve.
Stenberg began complaining about AI-generated bug reports in January 2024. By mid-2025 he reported that approximately 20% of all submissions to curl’s bug bounty program were AI slop. Only 5% of submissions that year identified genuine vulnerabilities—a “valid-rate” that had “decreased significantly compared to previous years.” In May 2025, Stenberg added a checkbox requiring submitters to disclose if they’d used AI. It didn’t help. In July, he publicly considered killing the bug bounty entirely. By January 2026, after receiving seven submissions in a sixteen-hour period (some were real bugs; none were actual vulnerabilities), he finally ended the program.
Hashimoto’s approach to AI contributions has also shifted. In August 2025 he made disclosure mandatory: “If you are using **any kind of AI assistance** to contribute to Ghostty, it must be disclosed in the pull request.” But as of late January 2026 he adopted a zero-tolerance approach: AI-generated contributions are now allowed only from maintainers or for accepted issues. Hashimoto clarified:
> This is not an anti-AI stance. This is an anti-idiot stance. Ghostty is written with plenty of AI assistance and many of our maintainers use AI daily. We just want quality contributions, regardless of how they are made.
Ruiz’s response was the most drastic: tldraw now auto-closes all external pull requests, full stop. But his reasoning points to a more philosophical question for software development today:
> In a world of AI coding assistants, is code from external contributors actually valuable at all? If writing the code is the easy part, why would I want someone else to write it?
Like Hashimoto, Ruiz admits to using AI himself, and discovered that it would be easier to vibe code fixes than to clean up AI-generated PRs. Some of the worst PRs were arriving because of his own AI scripts—Claude Code commands he’d written to capture (`/issue`) and address (`/take`) hastily written issues—which produced AI slop issues that Ruiz would then close. But contributors were feeding those poor issues to their own AI tools, which generated PRs based on Ruiz’s AI’s hallucinations: “My low-effort issues were becoming low-effort pull requests, with AI doing both sides of the work.” It was AI slop all the way down.
> someone opened up 7 security advisories today all of which did not make sense in the context of our project
>
> all clearly AI generated
>
> this gets in the way of us responding to actually valid reports that are made by people who understand our project https://t.co/WS7AxObOfl
>
> — dax (@thdxr) January 20, 2026
## The Undecideds
While the AI slop issue is widespread in OSS communities, the examples I highlight above stand out for their decisiveness. Today, the vast majority of open source projects remain firmly on the fence, with maintainers and leads actively debating how best to approach this moving target. Debian is one example. For a detailed account of the back-and-forth going on there, see Joe Brockmeier’s write-up: “Debian AI General Resolution withdrawn.” FluxCD is another. According to Stefan Prodan, Core maintainer of Flux CD:
> “We haven’t decided yet how to handle this in the FluxCD org, it’s a CNCF project so we’ll need to align with others. We are experimenting in Flux Operator with system prompts and next we’ll add skills. Here is the AI guideline we have now: https://github.com/controlplaneio-fluxcd/flux-operator/blob/main/AGENTS.md”
There’s wisdom in the wait-and-see approach. The AI tooling landscape is changing month by month—what constitutes “slop” today may be indistinguishable from human-written code tomorrow, and policies written in January can feel obsolete by June. Many open source projects move at the speed of consensus, not decree. Projects like Debian operate through general resolutions and lengthy mailing list debates precisely because legitimacy comes from community buy-in, not top-down mandates. The Linux kernel’s development process, famously governed by Linus’s taste and a distributed network of maintainers, takes years to absorb major procedural shifts. Even smaller projects often lack the single-maintainer authority structure that allowed Stenberg, Hashimoto, and Ruiz to act decisively—many are governed by steering committees, consensus models, or simply vibes-based collaboration where no one person can unilaterally close the door on AI contributions. When it comes to the tsunami of AI generated contributions, many are electing to ride the wave for now rather than get out of the sea entirely.
## What OSS Foundations Are Saying
You might wonder: don’t we have institutions to help navigate through this? Don’t the various open source foundations have policies? They do! Sort of. Kind of. If you squint. The problem is that most of these policies don’t address the AI slop crisis maintainers are facing, and are instead focused on licensing. Who owns AI-generated code? What happens if Copilot regurgitates GPL code into your MIT-licensed project? These are real concerns! But they’re not the concern that’s making Stenberg write blog posts titled “Death by a thousand AI slops.”
**The Linux Foundation** has an official Generative AI Policy that essentially says that AI-generated code is fine to contribute, as long as the licensing works out. They’re focused on ensuring AI tool terms are compatible with open source licenses and that contributors have rights to any third-party code that sneaks into the output.
**The Apache Software Foundation** published Generative Tooling Guidance back in 2023, recommending that contributors disclose AI usage in commit messages with a “Generated-by:” tag. They explicitly acknowledge this is “a rapidly evolving area” that will need constant updates.
**The Eclipse Foundation** has guidelines for committers that make much of AI’s proneness to error, reminding committers that it is their responsibility to ensure accuracy. The guidelines also suggest including a disclaimer for AI-generated code beneath the copyright and license header, such as “Some portions generated by Co-Pilot.”
**The OpenInfra Foundation** adopted “Generated-By:” and “Assisted-By:” labels, and instructs reviewers to treat AI-generated code as coming from “an untrusted source” requiring heightened scrutiny.
This list is not exhaustive, but is representative of what I have seen. The problem is that many foundations have built policies for a world in which legal liability is the sole issue. Maintainer burnout? Quality control guidance? Not really their department. While maintainers are currently drowning in garbage, so far foundations have not stepped up to throw them a life preserver.
## The Nuclear Options
Some projects have opted to ban AI-generated code entirely. It’s a blunt instrument, and critics argue it’s unenforceable, but for projects that have taken this path, the ban serves multiple purposes beyond mere gatekeeping. It filters out drive-by contributors and sends a signal about what kind of culture the project wants to foster.
Gentoo Linux banned AI-generated contributions entirely in April 2024, citing copyright concerns, quality issues, and ethics (specifically, the environmental impact of training these models). Michał Górny, who proposed the ban, explained to the _Register_:
> I think it’s a good PR move for Gentoo right now … When a lot of projects are being enthusiastic about ‘AI,’ I feel that many Gentoo users really appreciate the old school approach to software engineering where humans matter more than ‘productivity.’
There’s something almost countercultural about Górny’s framing. In an industry obsessed with velocity and automation, Gentoo’s stance resonates with a community that has always valued craftsmanship over convenience.
NetBSD followed suit in its Commit Guidelines, classifying LLM-generated code as “tainted”—meaning it can’t be committed without prior written approval from the core team. The BSD projects have an additional concern: their permissive licenses mean they _really_ can’t afford to accidentally incorporate GPL code, which AI tools trained on GitHub have a nasty habit of reproducing. For a recap, check out “The Current State of the Theory that GPL Propagates to AI Models Trained on GPL Code” by Shuji Sado, Chairman at Open Source Group Japan.
What do these nuclear options tell us about the future of open source? First, they suggest that for some communities, the perceived risks of AI-generated code outweigh any potential productivity gains. These projects are making a deliberate choice to optimize for trust and provenance over throughput. Second, they highlight an uncomfortable truth that will only become more pronounced: enforcement is largely symbolic. As AI-generated code becomes increasingly indistinguishable from human-written code, detecting violations will shift from difficult to functionally impossible.
Today’s AI slop is obvious—the hallucinated function calls, the confident wrongness, the telltale verbosity. But the models are improving fast. Within a year or two, the question won’t be “can we detect AI code?” but “does it even matter if we can’t?” The real significance of these bans may not be their enforceability, but what they reveal about a growing segment of the open source community that views AI-assisted development not as an inevitability to be managed, but as a threat to the fundamentally human enterprise of collaborative software creation.
## The Incentive Problem
The AI slop crisis isn’t just a technical problem or a cultural one—it’s an economic one. And nowhere is this clearer than in the relationship between open source maintainers and the platforms that host their work.
In May 2025, GitHub launched a feature that lets users generate issues using Copilot. You describe your problem to the AI, it generates a bug-report-shaped slab of text, and you submit it. The GitHub blog post announcing this feature promised it would make issue creation “faster and easier—all without sacrificing quality.” Maintainers had some thoughts about that “quality” part.
Immediately, people started asking for a way to block Copilot-generated issues from their repositories. The response from GitHub was, essentially, “no.” The Copilot bot user can’t be blocked, and issues generated by Copilot don’t even identify themselves as AI-generated. Instead, they appear under the human user’s name with no indication that a robot did the typing. So you can’t filter on that either.
This is a non-starter for many developers concerned about AI slop. As Andi McClure explained in a May 2025 GitHub issue:
> If we are not granted these tools, and “AI” junk submissions become a problem, I may be forced to take drastic actions such as closing issues and PRs on my repos entirely, and moving issue hosting to sites such as Codeberg [a nonprofit code repository] which do not have these maintainer-hostile tools built directly into the website.
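Absent platform-level controls, maintainers are left to script their own triage. Below is a hypothetical sketch using the public GitHub REST API that labels open issues missing an “AI usage” disclosure section, on the assumption that the project’s issue template asks for one; the repository name, marker string, and label are illustrative, not anything GitHub or the projects above actually ship:

```python
"""Hypothetical triage helper: Copilot-generated issues arrive under the
submitter's own name, so this falls back to flagging any issue that omits the
disclosure section a (hypothetical) issue template requires."""

import os

import requests

API = "https://api.github.com"
REPO = "example-org/example-repo"       # assumption: your repository
DISCLOSURE_MARKER = "### AI usage"      # assumption: a section your issue template requires
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

def triage_open_issues() -> None:
    # Fetch open issues (the endpoint also returns pull requests, which we skip below)
    resp = requests.get(
        f"{API}/repos/{REPO}/issues",
        headers=HEADERS,
        params={"state": "open", "per_page": 100},
    )
    resp.raise_for_status()
    for issue in resp.json():
        if "pull_request" in issue:
            continue
        body = issue.get("body") or ""
        if DISCLOSURE_MARKER.lower() not in body.lower():
            # Label the issue so a human can prioritize (or deprioritize) it later
            requests.post(
                f"{API}/repos/{REPO}/issues/{issue['number']}/labels",
                headers=HEADERS,
                json={"labels": ["needs-disclosure"]},
            ).raise_for_status()
            print(f"Labeled #{issue['number']}: missing AI-usage disclosure")

if __name__ == "__main__":
    triage_open_issues()
```

It is a workaround, not a fix, which is exactly McClure’s point.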
The fundamental problem is incentive alignment. GitHub makes money when people use GitHub and AI features increase engagement metrics. Prodan posted a damning diagnosis of the situation:
> AI slop is DDOSing OSS maintainers, and the platforms hosting OSS projects have no incentive to stop it. On the contrary, they’re incentivized to inflate AI-generated contributions to show ‘value’ to their shareholders.
This tension puts GitHub—as the dominant platform for open source development—at the center of nearly every conversation about AI slop. When GitHub decides that Copilot-generated issues can’t be blocked, that decision affects millions of repositories and the maintainers who tend them. Ruiz made this connection explicit when explaining tldraw’s decision to close external contributions entirely, suggesting the policy is contingent on GitHub’s tooling:
> This is a temporary policy until GitHub provides better tools for managing contributions.
The message to GitHub couldn’t be clearer: your tools are in danger of driving some maintainers away from open contribution models. But whether GitHub hears that message—or has the right incentive to act on it—remains a more open question. The company’s bet appears to be that AI-powered features will attract more users than they repel, and that maintainers who threaten to leave for Codeberg or self-hosted solutions will ultimately stay because the GitHub network effects are too strong to abandon. The question is whether the AI slop crisis will be the tipping point, or whether maintainers will adapt, begrudgingly, to a world where the platforms they depend on optimize for engagement over maintainability.
## Final Suggestions in the Age of AI Slop
> It’s a fucking war zone out here man. Maintainer morale at an all time low. I totally empathize with the projects that flip the table and ban all AI. I’m getting close to saying only maintainers and accepted issues can have any AI .
>
> — Mitchell Hashimoto (@mitchellh) January 16, 2026
If you’re a contributor: show your work. Engage as a human. Understand what you’re submitting. If you use AI, use it like Joshua Rogers did when he sent Stenberg a “massive list” of potential issues found using AI-assisted tools that resulted in fixing fifty real bugs. Rogers demonstrates how AI can be properly wielded without replacing critical thinking.
If you’re a maintainer: develop clear policies, document them publicly, and don’t suffer in silence. You’re not alone, and your frustration is valid.
If you work at a platform: build tools that serve maintainers, not just engagement metrics. Give them the ability to filter, block, and manage the flood.
If you work at a foundation: it’s time to address quality and burnout, not just licensing. The legal stuff matters, but your community is drowning right now.
And if you’re one of the people submitting AI slop to open source projects, hoping to pad your GitHub contribution graph with zero-effort PRs? Please stop.
**Disclaimer:** GitHub/Microsoft is a RedMonk client.