The U.S. Department of Defense continues to blacklist Anthropic as a supply-chain risk, even as the company's Mythos AI model finds acceptance elsewhere in the federal government [1, 2].

This distinction creates a complex regulatory environment where a single company is viewed as a national security threat by the military while its specific products are considered viable for other government applications.

Emil Michael, the Pentagon's Chief Technology Officer, addressed the situation on Friday in an interview on CNBC's "Squawk Box" [1]. "Anthropic remains a supply chain risk, and we need to transition away from its models," Michael said [1]. The company remains blacklisted, he added, despite the distinction drawn around the Mythos program [3].

According to Michael, the DoD treats the Mythos model as a "separate national security moment" [2], allowing it to keep its restrictions on Anthropic's general models while handling Mythos as a distinct issue [1, 2].

To enforce these security standards, defense contractors must certify that they do not use Anthropic's Claude models in their work for the military [1]. Government departments have been given six months or more to complete the transition away from these models [2].

Anthropic has challenged the restrictions in court, suing the Trump administration in March 2026 to reverse the blacklisting [1]. On April 8, 2026, however, a federal appeals court denied the company's request for a stay of the blacklist [4].

The Pentagon's stance contrasts with reports that other arms of the administration are encouraging use of the company's technology: Treasury Secretary Scott Bessent and Fed Chair Jerome Powell have reportedly urged major Wall Street banks to test the Mythos model for cybersecurity vulnerabilities [5].

"Anthropic remains a supply chain risk, and we need to transition away from its models."

The Pentagon's decision to maintain a blacklist on Anthropic while carving out an exception for the Mythos model suggests a fragmented federal approach to AI risk management. By labeling the company a "supply chain risk" but treating a specific product as a "separate moment," the U.S. government is attempting to balance extreme security caution in defense with a desire to leverage cutting-edge AI for financial stability and cybersecurity.