Turso is retiring its security bug bounty program at the end of the current month [1].
The decision highlights a growing challenge for open-source maintainers as generative AI tools lower the barrier for submitting vulnerability reports. While these tools can assist legitimate researchers, they also enable a surge of low-quality submissions that drain developer resources.
Turso, an open-source database project, said that the majority of reports labeled as "CRITICAL" were actually low-quality noise [1]. The project described these submissions as "AI slop," referring to automated reports that appear significant but lack substantive security flaws [1].
This trend is not isolated to Turso; other major open-source projects face similar pressure. The curl project is also discontinuing its bug bounty program under an avalanche of AI-generated submissions [2], [3]. Reports on the curl program's end date vary: some sources cite the end of January [2], while others indicate February 2026 [3].
Bug bounty programs are designed to incentivize ethical hackers to find and report security holes before malicious actors can exploit them. However, the rise of large language models has allowed users to scan code and generate plausible-looking vulnerability reports without truly understanding the underlying logic. This creates a high volume of "false positives" that require manual verification by human engineers.
By ending the program, Turso aims to reduce the administrative burden of filtering automated noise; once the program closes at the end of the month, the project will no longer pay out for vulnerability reports [1].
The retirement of these programs signals a shift in the open-source security model. As AI makes it trivial to generate 'noise' reports, the traditional bug bounty—which relies on a manageable stream of high-quality human insights—becomes unsustainable for smaller teams. Developers may move toward more curated, invite-only security audits to prevent AI-driven denial-of-service attacks on their maintenance workflows.