curl Project Faces AI-Generated Fake Vulnerability Reports

In an alarming escalation of bogus security filings, the venerable open-source project curl—celebrating its 25th anniversary in 2023—reports being inundated with AI-crafted vulnerability submissions that waste maintainers’ time and threaten the integrity of its vulnerability triage process.
Background: curl’s Ubiquity and Security Model
curl and its library counterpart libcurl power billions of Internet transfers daily, supporting protocols including HTTP/1.1, HTTP/2, HTTP/3 over QUIC, FTP, SFTP, SMTP, and more. The project follows a strict coordinated vulnerability disclosure (CVD) policy and accepts reports via HackerOne (with optional bounties), GitHub issues, and direct email. Recent stable releases (8.2.0 and 8.3.0) added full HTTP/3 stream prioritization and advanced TLS session resumption, making robust security review essential.
The Flood of AI “Slop” on HackerOne
On LinkedIn, curl founder Daniel Stenberg warned, “A threshold has been reached. We are effectively being DDoSed. If we could, we would charge them for this waste of our time.” He says four suspicious reports surfaced in early May 2025 alone, each flawlessly formatted in English, peppered with bullet points, and accompanied by patch suggestions that simply do not apply to the codebase.
- One report, “HTTP/3 stream dependency cycle exploit,” claimed a race condition leading to remote code execution (RCE).
- Its proposed patches targeted an outdated Python proof-of-concept tool, not libcurl itself.
- The reporters could not answer follow-up questions about curl’s build system, instead pasting their AI prompt, which ended with, “and make it sound alarming.”
Technical Dissection of a Bogus Report
The most egregious submission alleged a novel exploit in HTTP/3’s PRIORITY frames. In reality, curl’s implementation (since v8.1.0) uses controlled stream dependencies via ngtcp2 and enforces strict flow control and scheduling to prevent injection or deadlocks. The malicious server setup supplied by the reporter failed to compile, cited nonexistent functions (e.g., curl_h3_handle_cycle()), and ignored curl’s actual event-driven transfer engine.
Broader Impact on Open Source Security
curl is not alone. Security teams at OpenSSL, libssh, and GitLab have similarly reported an uptick in AI-generated noise over the past quarter. In June 2025, GitLab’s Chief Security Officer noted a 300% rise in low-quality reports flagged by their ML-based triage system, leading to longer response times for genuine submissions.
Expert Opinions and Community Response
Seth Larson, Security Developer-in-Residence at the Python Software Foundation, warns that “this trend undermines trust in bug bounty programs and shifts resources away from real vulnerabilities.” Tobias Heldt of open-source security firm XOR advocates a deposit or bond system: “Requiring a nominal fee ensures reporters are serious and filters out automated slop.”
Recommendations and Future Mitigations
- Stronger triage filters: automated detection of AI-style phrasing and unnatural bullet patterns.
- Reporter verification: require disclosure of tools used (LLMs, fuzzers) and proof of manual validation.
- Bounty bonds: small refundable deposits to deter low-effort submissions.
- Community audits: leverage volunteer security experts to peer-review high-profile bug reports.
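The first recommendation, automated triage filtering, can be sketched as a simple heuristic scorer. The phrase list, weights, and threshold below are purely illustrative assumptions for demonstration; they are not curl’s or HackerOne’s actual tooling, and a production filter would be trained on labeled report data:

```python
import re

# Illustrative phrases common in LLM-generated boilerplate.
# These are assumptions for the sketch, not a vetted corpus.
AI_STYLE_PHRASES = [
    "as an ai language model",
    "in conclusion",
    "this critical vulnerability",
    "it is important to note",
]

def slop_score(report: str) -> float:
    """Return a rough 0..1 score of how 'AI-boilerplate' a report reads."""
    text = report.lower()
    phrase_hits = sum(1 for p in AI_STYLE_PHRASES if p in text)
    # Dense, uniform bullet lists are a second weak signal.
    bullets = len(re.findall(r"^\s*[-*\u2022]\s", report, flags=re.MULTILINE))
    lines = max(report.count("\n") + 1, 1)
    bullet_ratio = bullets / lines
    # Hypothetical weighting: each phrase hit adds 0.25, capped at 1.0 overall.
    return min(1.0, 0.25 * phrase_hits + bullet_ratio)

def needs_human_review(report: str, threshold: float = 0.5) -> bool:
    """Flag a report for extra scrutiny rather than auto-rejecting it."""
    return slop_score(report) >= threshold
```

Note the design choice: the filter only flags reports for closer human review. Auto-rejecting on stylistic signals alone would risk discarding a genuine vulnerability that happens to be well formatted.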
Looking Ahead: Defense Against AI-Driven Noise
As large language models become more accessible, open source projects must adapt their vulnerability management workflows. Enhanced metadata tagging, AI-powered spam classifiers, and collaborative reputation networks could help distinguish signal from noise. For curl and its peers, the challenge is clear: protect scarce maintainer bandwidth while preserving the open-report ethos that underpins decades of community-driven security improvements.