OpenAI is expanding access to its cyber-focused artificial intelligence model for European Union institutions [1].

This shift represents a strategic move to integrate advanced AI into the digital defense infrastructure of EU agencies. As cyber risks increase, the ability to detect and mitigate threats in real time becomes critical for maintaining the stability of government networks.

OpenAI is providing a model designed to help EU bodies detect cyber threats and strengthen digital defense [1]. Reports conflict on the specific version being deployed: some sources identify the model as GPT-5.4-Cyber [5], while others report that the release pairs GPT-5.5 with Codex Security via the Daybreak platform [4].

While OpenAI moves forward with the EU, Anthropic is reportedly withholding its Mythos system from the bloc [2]. Mythos is likewise designed for cyber defense, but Anthropic has not shared it with EU institutions [3].

Industry observers said that Anthropic's decision may stem from regulatory caution [1]. The EU has implemented some of the world's strictest AI regulations, which may influence how companies choose to deploy specialized security tools within its borders [3].

OpenAI's approach involves direct talks with EU agencies to determine how the model can best support their specific security needs [3]. This collaboration aims to provide a layer of automated protection against increasingly sophisticated digital attacks [1].

Anthropic has not provided a detailed public explanation for the absence of Mythos in the EU, but the company continues to manage its global rollout based on regional regulatory environments [2].
The diverging strategies of OpenAI and Anthropic highlight the tension between rapid AI deployment and the stringent regulatory landscape of the European Union. By integrating into EU institutions, OpenAI may gain a first-mover advantage in establishing the standard for AI-driven cybersecurity in the region, whereas Anthropic's caution reflects a risk-averse approach to compliance with EU AI laws.