AI-generated security reports are flooding open-source software projects, creating a volume of credible-looking submissions that human reviewers cannot keep pace with [1], [3].

This trend threatens the stability of the global software ecosystem by diverting critical developer attention away from genuine vulnerabilities toward AI-generated "slop" [1], [3]. Because these reports appear legitimate, maintainers must spend significant time vetting each one to ensure no real security holes are missed [1].

The surge in automated reporting has begun to impact how projects incentivize security research. Curl plans to end its bug-bounty program in 2026 due to this flood of AI-generated spam [2]. Such programs were designed to reward researchers for finding real flaws, but the ease of using AI to generate plausible reports has made the model unsustainable [2].

Security researchers and maintainers across platforms such as GitHub and GitLab are seeing a rise in reports that mimic the structure of professional vulnerability disclosures [1], [3]. Because AI tools can produce these documents rapidly and at scale, bad actors can swamp repositories with submissions that look polished but lack substance [1], [3].

This onslaught has created a paradox where the tools meant to help find bugs are instead burying them. Maintainers are now forced to act as filters for AI noise, a task that consumes hours of unpaid labor for many open-source contributors [3].

While the trend has been building since 2024, the scale of the problem is now reaching a breaking point for smaller projects [1], [2]. Without a way to efficiently distinguish between human-verified vulnerabilities and AI-generated noise, the process of securing open-source code is becoming slower and more resource-intensive [3].
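As a purely illustrative sketch of what such triage might look like, the hypothetical pre-filter below escalates only reports that cite concrete, reproducible evidence. The `Report` shape, the marker list, and the two-marker threshold are all assumptions made for this example, not tooling any project is known to use.

```python
from dataclasses import dataclass


@dataclass
class Report:
    """A vulnerability report, reduced to the fields the filter inspects."""
    title: str
    body: str


# Phrases that tend to accompany reproducible, verifiable findings.
# This list is an illustrative assumption, not a vetted detection rule.
EVIDENCE_MARKERS = (
    "steps to reproduce",
    "proof of concept",
    "stack trace",
    "crash log",
    "commit",
)


def needs_human_review(report: Report) -> bool:
    """Escalate only reports that cite at least two kinds of evidence."""
    text = f"{report.title}\n{report.body}".lower()
    hits = sum(marker in text for marker in EVIDENCE_MARKERS)
    return hits >= 2  # threshold of two is an arbitrary assumption


if __name__ == "__main__":
    vague = Report(
        title="Critical buffer overflow",
        body="Your parser may be vulnerable to memory corruption.",
    )
    concrete = Report(
        title="Heap overflow in header parsing",
        body="Steps to reproduce: run the attached proof of concept "
             "against commit abc123; the stack trace shows an "
             "out-of-bounds write.",
    )
    print(needs_human_review(vague))     # False: no verifiable evidence cited
    print(needs_human_review(concrete))  # True: worth a maintainer's time
```

A keyword heuristic like this is trivially gamed by the same AI tools that generate the reports, which is precisely why the problem remains unsolved for maintainers.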

The shift toward AI-generated reporting represents a systemic risk to the "many eyes" theory of open-source security. When the volume of noise exceeds the capacity of human reviewers, critical vulnerabilities may go undetected because they are hidden among thousands of fake reports. The planned termination of bug-bounty programs like Curl's suggests that traditional incentive structures for security are failing in the age of generative AI.