AI‑driven tools are turning decades‑old software bugs into active threats, as researchers flag legacy flaws in Excel and other codebases.

The development matters because automated exploitation can scale attacks that once required manual coding, expanding the attack surface for organizations that still run outdated software.

Security researcher Claude Mythos uncovered a vulnerability that survived 27 years of human review before AI‑assisted analysis highlighted its exploitability, showing how old code can hide dangerous gaps [1].

A separate case involves a 17‑year‑old Microsoft Excel flaw that threat actors are now leveraging in the wild; the U.S. cyber‑defense agency flagged the issue in a 2026 report [2][3].

Experts said AI tools can automatically discover, weaponize, and scale exploitation of existing code flaws, turning routine bugs into high‑impact attack vectors – a shift that challenges traditional patch‑management cycles (Dark Reading).

The agency’s alert prompted several large enterprises to audit legacy applications, underscoring growing concern that AI‑enabled attacks could bypass conventional defenses.

Analysts said any long‑standing codebase, from operating systems to custom scripts, may become an “AI vulnerability” if attackers can feed its source to generative models that produce exploit code.

As the cybersecurity community adapts, firms are urged to prioritize inventory of legacy software, apply available patches quickly, and consider AI‑specific threat modeling to mitigate these emerging risks.
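The inventory step above can be approximated in code. The sketch below is a minimal, hypothetical heuristic (the function name, age threshold, and mtime-based approach are assumptions, not a recommended standard): it walks a directory tree and flags source files that have not been modified in years, as a first pass at finding legacy code worth auditing. A real inventory would rely on SBOMs and package metadata rather than file timestamps.

```python
# Minimal sketch: flag "legacy" files by last-modified age.
# Hypothetical heuristic -- real inventories should use SBOM/package data,
# since file mtimes can be reset by copies, checkouts, and backups.
import os
import time

LEGACY_AGE_YEARS = 10  # assumption: files untouched this long merit review

def find_legacy_files(root: str, max_age_years: int = LEGACY_AGE_YEARS) -> list[str]:
    """Return paths under `root` whose mtime is older than the cutoff."""
    cutoff = time.time() - max_age_years * 365 * 24 * 3600
    stale = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) < cutoff:
                    stale.append(path)
            except OSError:
                continue  # skip unreadable or vanished entries
    return sorted(stale)
```

Output from a pass like this would only be a starting point for the audits and patching the article describes, not evidence that flagged files are vulnerable.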

What this means: Legacy software, once considered a low‑priority risk, now requires urgent review because AI can rapidly turn dormant bugs into active exploits, forcing organizations to treat old code as a present‑day security priority.