Pentagon Designates Anthropic Supply Chain Risk; Company Plans Legal Challenge
The Pentagon designated Anthropic a supply chain risk effective Thursday, prompting Anthropic to say it will challenge the decision in court and spurring contractors to seek alternatives.

Overview
The Pentagon officially informed Anthropic leadership that the company and its products are deemed a supply chain risk, effective Thursday.
The designation follows a week of public disputes after President Donald Trump and Defense Secretary Pete Hegseth accused Anthropic of endangering national security and demanded unfettered access to its models for "all lawful purposes."
Anthropic CEO Dario Amodei said the action is legally unsound and that the company will challenge the designation in federal court.
Anthropic said more than a million people signed up for its Claude chatbot each day this week, lifting it past rivals in Apple's App Store in more than 20 countries.
Anthropic said it will continue to provide its models to the Department of War at "nominal cost" for as long as necessary to enable a transition while it pursues legal options.
Microsoft said its customers can still access Anthropic products following the designation.
Analysis
Center-leaning sources frame this as a tense clash between national security imperatives and a tech firm's protective stance. They emphasize escalation through evaluative language ("soured," "berated," "effective immediately") and political pressure such as Trump's posts, while still quoting the Pentagon's rationale and noting Anthropic's popularity, a balance that nonetheless tilts toward the consequences of government action.
FAQ
Why did the Pentagon designate Anthropic a supply chain risk?
The designation stems from Anthropic's refusal to grant the military unrestricted access to its AI models, specifically barring use for mass surveillance of Americans or fully autonomous weapons.
What does the designation require?
It requires Pentagon contractors to certify that they do not use Anthropic's models in federal work, potentially disrupting operations and forcing them to find alternatives, though its scope beyond federal contracts is legally disputed.
How has Anthropic responded?
CEO Dario Amodei called the designation legally unsound, retaliatory, and punitive; the company plans to challenge it in federal court while offering continued access at nominal cost during the transition.
What legal authority underpins the designation?
Likely the Federal Acquisition Supply Chain Security Act (FASCSA) or 10 U.S.C. § 3252, though experts question whether required procedures such as risk assessments and congressional notice were followed.
Why does the designation matter to the military?
Anthropic's Claude is the only frontier AI model approved for classified use and is integrated into systems such as Palantir's Maven for operations like the Iran campaign, making the military reliant on it.