Anthropic: No "kill switch" for AI in classified settings

Axios

Anthropic says it has no way to control or shut down its AI models (https://www.axios.com/technology/automation-and-ai) once they're deployed by the Pentagon, according to a new court filing.

Why it matters: The Pentagon designated Anthropic a supply chain risk (https://www.axios.com/2026/03/09/anthropic-sues-pentagon-supply-chain-risk-label), contending the AI firm is inappropriately interfering in how its technology can be used in sensitive military operations.

What's inside: Anthropic argues in the filing (https://storage.courtlistener.com/recap/gov.uscourts.cadc.42923/gov.uscourts.cadc.42923.01208843394.0.pdf) to a federal appeals court in D.C. that it has no visibility into, technical control over, or any kind of "kill switch" for its technology once it's deployed.

  • The company also says the Pentagon has the opportunity to test models before deployment.

Catch up quick: The company's usage policies prohibit the use of Claude for autonomous weapons or mass surveillance, red lines that the Pentagon dismissed as red herrings and that led to the dispute.

Friction point: The Pentagon is arguing in court that Anthropic is a supply chain risk even as the Trump administration moves to deploy Anthropic's new Mythos model across the federal government (https://www.axios.com/2026/04/16/white-house-anthropic-ai-mythos-government-national-security).

  • Now, agency heads are scrambling to figure out how they can use Mythos to protect their systems from cyberattacks, potentially complicating the administration's argument that the company poses a national security risk.

What's next: A hearing is scheduled for May 19.
