Pentagon Labels Anthropic a Supply-Chain Risk as Cloud Firms Keep Claude
The Defense Department labeled Anthropic a "supply chain risk" after talks over military use; Amazon, Microsoft and Google said Claude remains available for non-defense customers.
Pentagon's chief tech officer says he clashed with AI company Anthropic over autonomous warfare

Amazon says Anthropic’s Claude still OK for AWS customers to use outside defense work

Pentagon Declares Major AI Company a Threat to Military Supply Chain

Google joins Microsoft in telling users Anthropic is still available outside defense projects
Overview
The Department of Defense designated Anthropic a "supply chain risk" on Thursday, and President Donald Trump ordered federal agencies to stop using Claude, giving the Pentagon six months to phase it out.
Pentagon Undersecretary Emil Michael said in a podcast aired Friday that talks with Anthropic collapsed after the company resisted allowing "all lawful use" of its models, insisting on barring mass surveillance and fully autonomous weapons.
Anthropic CEO Dario Amodei said the company has "no choice" but to challenge the supply-chain risk designation in court.
Microsoft, Google and Amazon said they will continue offering Anthropic's Claude to customers for non-defense workloads, and Amazon said it will support customers as they transition Department of War workloads.
Claude's removal could take months because integrations must be offboarded and replacements require security reviews and approvals. Separately, OpenAI announced a deal to deploy its models on the military's classified networks.
Analysis
Center-leaning sources frame the story as a national-security confrontation between the Pentagon and a domestic AI firm, foregrounding official warnings and procedural authority while also including company rebuttals and expert criticism. Editorial choices — placement, verbs like "punished," and emphasis on warfighter risk — steer readers toward a conflict-and-risk narrative.
FAQ
Why did the Pentagon designate Anthropic a supply chain risk?
Negotiations collapsed because Anthropic refused to allow "all lawful use" of its Claude AI, specifically resisting uses for mass surveillance of Americans and fully autonomous weapons.
How is Anthropic responding?
Anthropic CEO Dario Amodei stated the company has "no choice" but to challenge the designation in court, as it had previously vowed.
Will Claude still be available through cloud providers?
Yes, outside defense work: Amazon, Microsoft, and Google stated they will continue offering Claude for non-defense workloads, and Amazon will support customers transitioning Department of War workloads.
What are the practical consequences of the designation?
Under FASCSA authority, the designation could exclude Anthropic from DoD subcontracts and potentially affect existing contracts, though non-defense federal work remains unaffected unless further actions are taken.