U.S. Bars Anthropic From Military Use, Sparking Business Fallout
Trump ordered agencies to phase out Anthropic after DoD labeled it a supply‑chain risk; Anthropic says it will contest the designation and is in talks to de‑escalate with the Pentagon.
Overview
President Donald Trump ordered federal agencies to stop using Anthropic and Defense Secretary Pete Hegseth designated the company a “supply chain risk,” prompting agencies and contractors to phase out its technology.
The action followed a standoff after Anthropic refused Pentagon terms allowing “any lawful use” and sought guardrails forbidding mass domestic surveillance and fully autonomous weapons.
Anthropic CEO Dario Amodei said the company is in talks with the Pentagon to de‑escalate and plans to challenge the supply‑chain designation in court, while some investors privately criticized his combative stance.
Anthropic now derives roughly 80% of its business from enterprise customers, has an annual revenue run rate nearing $20 billion, up from about $14 billion only weeks ago, and completed a $30 billion funding round that valued the company at roughly $380 billion, sources said.
Defense contractors including Lockheed Martin said they will comply with the directive, OpenAI secured a classified Pentagon agreement, and Anthropic’s Claude reached the top of Apple’s free U.S. app rankings on Feb. 28.
Analysis
Center-leaning sources frame this story by foregrounding Anthropic's objections and the public backlash, using charged editorial language (e.g., calling the DoD "the Department of War") and emphasizing Amodei's quotes. OpenAI's responses are paraphrased and treated as defensive, while neutral expert perspectives and Defense Department context are underrepresented, producing a skeptical narrative.
FAQ
Why did Anthropic refuse the Pentagon's terms?
Anthropic refused to grant the Pentagon unrestricted access to its AI model Claude, citing concerns over mass domestic surveillance and fully autonomous weapons.[8] The company argued that its contracts should not facilitate these uses, on the grounds that the technology is not yet capable enough to support them safely and reliably.[8] Instead, Anthropic sought to maintain guardrails and usage restrictions through a multi-layered approach: retaining full discretion over its safety stack, deploying via cloud with cleared personnel in the loop, and maintaining strong contractual protections.[8]
On what grounds could the designation be challenged?
Legal experts have identified several procedural and statutory vulnerabilities in the designation. The statute requires the government to prove a risk of sabotage, subversion, or manipulation by an adversary, and experts such as Amos Toh question how adversaries could exploit Anthropic's usage restrictions to sabotage military systems.[10] The designation also requires a completed risk assessment and congressional notification before action, and legal experts say it is unclear whether the government completed these steps.[10] The statute further requires the Pentagon to have exhausted less intrusive alternatives, and experts questioned whether the government made a good-faith effort given how quickly the dispute escalated.[10] Legal analysts argue that Section 3252 defines supply chain risk as the risk that an adversary acts with hostile intent against the supply chain, not as a vendor in a contract dispute.[1]
What does the designation actually restrict?
The designation restricts military contractors from working with Anthropic in the performance of federal defense work, effectively blocking the company's $200 million contract with the government.[9] However, under the Federal Acquisition Supply Chain Security Act (FASCSA), supply chain risk orders are supposed to apply only to contractors' federal work and are not intended to preclude contractors from using Anthropic's products commercially.[7] Anthropic has noted that a supply chain risk designation under 10 USC 3252 can only extend to the use of Claude as part of Department of War contracts and cannot affect the use of Claude to serve other customers.[8] The company derives roughly 80% of its business from enterprise clients, suggesting significant commercial operations remain unaffected by the designation.
How have competitors and the market responded?
OpenAI has capitalized on the situation, securing a classified Pentagon agreement while Anthropic faced the supply chain risk designation.[9] Despite the Pentagon's action against Anthropic, the company's Claude reached the top of Apple's free U.S. app rankings on February 28, 2026, indicating continued consumer demand.[9] Defense contractors including Lockheed Martin have stated they will comply with the Pentagon's directive to cease using Anthropic.[9]
What is Anthropic's financial position, and how is it responding?
Anthropic has completed a $30 billion funding round that valued the company at approximately $380 billion, with an annual revenue run rate nearing $20 billion, up from about $14 billion only weeks ago.[9] CEO Dario Amodei stated the company is in talks with the Pentagon to de-escalate the situation and has vowed to challenge the supply chain risk designation in court, with support from industry leaders including OpenAI's Sam Altman.[9] Amodei called the designation "retaliatory and punitive" and noted it is unprecedented for an American company to face such action, though some investors have privately criticized his combative stance.[9]