OpenAI began rolling out a more permissive version of GPT-5.5 to vetted cybersecurity teams on Thursday [1, 2].
The release marks a strategic move to equip defenders with AI tools capable of identifying software vulnerabilities and analyzing malware. By limiting access to verified professionals, OpenAI aims to provide high-utility capabilities while reducing the risk that the model could be misused for offensive cyberattacks [1, 3, 4].
Marketed as GPT-5.5-Cyber and nicknamed “Spud,” the model is designed specifically for tasks such as patch validation and vulnerability triage [1, 3]. This rollout comes approximately one month after the debut of Anthropic's Mythos model [2].
Performance data for the new model varies by benchmark. According to Axios, GPT-5.5 is nearly as effective at finding and exploiting software bugs as Anthropic's Mythos Preview [1]. VentureBeat, however, reported that GPT-5.5 narrowly beats Mythos Preview when measured on Terminal-Bench 2.0 [3].
OpenAI has not yet announced a wider public release for this specific version of the model. The current restricted access ensures that the tools remain in the hands of those tasked with defending digital infrastructure [2, 4].
The introduction of GPT-5.5-Cyber signals an intensifying arms race in AI-driven cybersecurity. By offering a more permissive model that can exploit bugs so defenders can patch them, OpenAI is acknowledging that safety filters must flex for specialized professional use. The move puts direct pressure on Anthropic and other LLM providers to reconcile rigorous safety guardrails with the practical needs of security researchers.



