Security researchers used Anthropic's Mythos AI model to uncover and chain security flaws in Apple's macOS operating system into a critical exploit [1].

The discovery demonstrates how generative AI can be used to automate the process of finding and chaining vulnerabilities, potentially lowering the barrier for sophisticated cyberattacks.

During tests conducted in April 2026 [2], the researchers used Mythos to identify two minor software flaws [1]. Although each bug was insignificant on its own, the AI helped the team chain them to trigger memory corruption. The result was a proof-of-concept exploit that could allow an attacker to gain full control of a target system [1].
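The researchers have not published the specific flaws, so the details of the macOS chain remain undisclosed. The sketch below is a purely hypothetical illustration in C of the general pattern they describe: two bugs that are each low severity in isolation, but which combine into heap memory corruption.

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    #define HEADER_LEN 16

    /* Hypothetical parser routine -- not from the actual macOS
     * findings. It shows how two individually minor bugs can chain
     * into an exploitable heap overflow. */
    char *parse_record(const uint8_t *payload, uint16_t count) {
        /* Bug 1 (minor on its own): 16-bit arithmetic wraps when
         * count > 65519, so the allocation is far smaller than
         * intended. */
        uint16_t alloc_size = count + HEADER_LEN;

        char *buf = malloc(alloc_size);
        if (buf == NULL)
            return NULL;

        /* Bug 2 (minor on its own): the copy trusts the
         * caller-supplied count instead of rechecking it against
         * the allocated size. */
        memcpy(buf + HEADER_LEN, payload, count);
        return buf;
    }

Either bug alone is survivable: with a correct allocation the unchecked copy stays in bounds, and the wrapped size would be harmless if the copy were bounded. Chained, a crafted count of 65,530 yields a 10-byte allocation followed by a 65,530-byte write, the kind of heap overflow an attacker can escalate toward full control of the system.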

The use of the Mythos model highlights a shift in vulnerability discovery. Rather than relying solely on manual code review, researchers are now using AI to map the relationships between disparate bugs, a process that often takes human analysts significantly more time.

Apple is currently reviewing the findings provided by the researchers [1]. The team said that the goal of the project was to demonstrate AI-assisted vulnerability discovery and highlight emerging threats to global cybersecurity [1].

This incident follows a growing trend of security professionals using large language models to stress-test software. By simulating how an attacker might think, researchers can identify gaps in security architecture before malicious actors do. The macOS findings suggest that even highly secure systems may be vulnerable when AI is used to synthesize multiple minor errors into a single, critical point of failure [3].

The ability of AI to "chain" minor bugs into a critical exploit marks a transition from using AI for simple code analysis to using it for complex strategic attacks. This increases the pressure on software vendors to move beyond patching individual bugs and instead focus on how multiple low-risk vulnerabilities can be combined to compromise an entire system.