During a recent episode of the All-In podcast, David Sacks questioned whether Anthropic is using cybersecurity concerns to market its new AI model, Mythos.
The debate highlights a growing tension in the AI industry between safety-first deployment and the perceived use of "fear-mongering" to create artificial demand for restricted technology.
Anthropic said it is withholding a general release of Mythos because the model could be weaponized by hackers [1]. To mitigate these risks, the company is limiting the release of Mythos to select organizations rather than the general public [1].
Sacks, who previously served as an AI adviser to former President Donald Trump, challenged the validity of these warnings. He suggested the company may be exaggerating the threat to increase the model's allure, a tactic he compared to a sales pitch [2].
"Anthropic has proven that it’s very good at two things," Sacks said. "One is product releases. The second is scaring people" [2].
While Anthropic describes the model as a potential "wicked weapon for hackers," Sacks said the alarm may be overblown [1], [2]. The disagreement underscores the lack of consensus on how to measure the actual danger of advanced AI capabilities before they are released to the public.
Anthropic has not responded to the specific allegations that its safety warnings are a marketing strategy. The company continues to maintain that a restricted rollout is the only responsible path forward for the Mythos model [1].
“"Anthropic has proven that it’s very good at two things. One is product releases. The second is scaring people."”
This conflict illustrates a strategic divide in AI deployment. By framing a product as "too dangerous" for the public, companies can simultaneously signal extreme power and create an exclusive aura of prestige. If Sacks' assessment is correct, "safety-gating" becomes a psychological tool for brand positioning rather than a purely technical precaution.



