Anthropic Restricts Claude Mythos, Citing Severe Cyber Risks
Anthropic says Claude Mythos finds vulnerabilities across major operating systems and browsers and is being shared only with select defenders through Project Glasswing.

Anthropic’s Mythos Will Force a Cybersecurity Reckoning—Just Not the One You Think

AI And Cybersecurity: A Glass Half-Empty/Half-Full Proposition, Where The Glass Is Holding Nitroglycerin

Anthropic’s new AI tool has implications for us all – whether we can use it or not | Shakeel Hashim

What Anthropic's too-dangerous-to-release AI model means for the AI race | Fortune

Anthropic’s powerful new AI model raises concerns about high-tech risks
Overview
Anthropic said it will not publicly release Claude Mythos Preview and is offering the model only to Project Glasswing participants for defensive use.
Anthropic said Mythos can identify vulnerabilities in every major operating system and browser and has found a 27-year-old bug as well as multiple Linux kernel flaws.
Security leaders warned of faster, more sophisticated AI-assisted attacks, and US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened finance executives to discuss the risks, according to reports.
Project Glasswing includes more than 40 major tech companies, and Anthropic said it will provide $100 million in usage credits and donate $4 million to open-source security efforts.
Anthropic and its partners aim to use Mythos to find and patch vulnerabilities in critical software, but experts warned that without regulation similar capabilities will likely spread and defenses must become machine-scale.
Analysis
Center-leaning sources frame Anthropic as a responsible, capable industry leader by foregrounding its safety stance, partnerships, and revenue growth while downplaying systemic critiques. Language like "enviable position" and "responsible safety initiative," selective expert praise, and lead placement of business metrics emphasize commercial strength and risk-mitigating leadership over deeper regulatory or ethical concerns.