Venture capitalist Chamath Palihapitiya and entrepreneur David Sacks warned that exaggerating AI risks during the rollout of Anthropic's Mythos model harms the industry [1, 2].
These criticisms highlight a growing tension between the marketing of AI capabilities and the communication of existential risks. If the industry is perceived as using fear to drive attention, it may undermine public trust and invite restrictive regulation based on hype rather than technical reality.
Palihapitiya addressed the strategy surrounding the new model in a media interview, saying that "crying wolf does not serve the AI industry well" [1]. His comments suggest that manufacturing alarm over the Mythos model's capabilities could open a credibility gap for the entire sector.
David Sacks, a former AI adviser to President Trump, echoed these concerns during an appearance on the All-In podcast [2]. Sacks questioned whether the alarmism surrounding the release was a calculated sales tactic. He said that "Anthropic has proven that it's very good at two things. One is product releases. The second is scaring people" [2].
Both investors expressed concern that the tendency to overstate risks may hinder the responsible development of artificial intelligence [1, 2]. By framing the rollout of new models through a lens of fear, companies may be prioritizing short-term visibility over long-term industry stability.
The debate centers on whether the perceived dangers of the Mythos model are genuine technical concerns or a strategic method to differentiate Anthropic from its competitors. Palihapitiya and Sacks suggest that the latter approach risks a "boy who cried wolf" scenario, where actual future risks are ignored because previous warnings were viewed as marketing ploys [1, 2].
“"Crying wolf does not serve the AI industry well."”
This critique reflects a divide in the AI ecosystem between 'safety-first' narratives and a pragmatic investment approach. By labeling risk warnings as sales pitches, prominent investors are challenging the legitimacy of 'doomsday' rhetoric used by AI labs to garner attention, suggesting that the industry's credibility depends on a shift toward transparent, evidence-based reporting of model capabilities.