- Anthropic says its latest AI model can expose weaknesses in software security (The Guardian)
- Anthropic Says Its New AI Model Is So Good at Finding Security Risks, You Can't Use It (CNET)
- Anthropic limits access to Mythos, its new cybersecurity AI model (Ars Technica)
- Anthropic claims newest AI model, Claude Mythos, is too powerful for public release (CBS News)
- Why Anthropic won't release its new Claude Mythos AI model to the public (NBC News)
- How dangerous is Mythos, Anthropic’s new AI model? (The Economist)
- Anthropic Gives Tech Firms Early Access to Powerful AI Model (Bloomberg)
- Anthropic gives our cyber stocks and other big tech names an AI stamp of approval (CNBC)
- A federal appeals court has denied Anthropic’s request for relief from the Defense Department declaring it a supply-chain risk, complicating the legal battle between the government and the AI company (Wall Street Journal)
- Anthropic Fails for Now to Halt US Label as a Supply-Chain Risk (Bloomberg)
- US court declines to block Pentagon's Anthropic blacklisting for now (Reuters)
Anthropic's Mythos Model Restricted
Claude Mythos has demonstrated an unprecedented ability to uncover thousands of weaknesses in common software applications for which no defenses currently exist.
In response, Anthropic is restricting access to the model to a select group of enterprise partners and government agencies, with the aim of bolstering national infrastructure defenses.
The restriction comes amid an ongoing legal fight with the Department of Defense, which recently labeled the company a supply-chain risk; a federal appeals court has so far declined to block that designation.
Anthropic is also spearheading a new cybersecurity consortium, Project Glasswing, to manage the ethical deployment of its most advanced reasoning models.