Anthropic's AI model, known as Mythos, has discovered thousands [1] of previously unknown software vulnerabilities, sparking global cybersecurity concerns among banks and governments.

This development is critical because the ability to automatically locate and exploit software bugs could allow bad actors to weaponize vulnerabilities on a massive scale [4, 5]. The discovery has triggered what some describe as cybersecurity hysteria across the technology and financial sectors [1, 2].

Concerns erupted in April 2026 as the model, also referred to as Claude Mythos or Project Glasswing [1], began identifying flaws in critical infrastructure. Major banks, technology giants, and government agencies worldwide are now working to patch these gaps [1, 3]. In Canada, companies have begun bracing for a wave of software fixes to address the risks exposed by the model [3].

While the scale of the discovery has caused panic, some security experts argue that the underlying risk is not new. AI-driven bug hunting, they say, is an evolution of existing threats rather than a sudden shift in the landscape.

"The capability they're worried about is already here," Hugh Son said [1].

Organizations are now racing to secure their systems before external attackers can use the same AI-driven methods to find the same flaws [5]. The situation has forced a rapid acceleration of security audits across global networks to prevent potential breaches.

The Mythos incident highlights a pivotal shift in the cybersecurity arms race: AI can now identify vulnerabilities faster than human engineers can patch them. While the 'hysteria' may be amplified by the suddenness of the discovery, the core issue is the democratization of high-level exploit discovery, which could lower the barrier to sophisticated cyberattacks against global financial and government infrastructure.