Trump Orders Agencies To Drop Anthropic AI as Pentagon Labels Company A Risk
Trump ordered federal agencies to stop using Anthropic amid a Pentagon standoff over battlefield and surveillance uses, while the Defense Department moved to blacklist the company and Anthropic vowed to fight back.
Overview
President Donald Trump directed every federal agency to immediately cease using Anthropic's technology, while allowing the Pentagon a six-month phase-out, he announced on Truth Social.
The order followed a Pentagon ultimatum that gave Anthropic CEO Dario Amodei until Friday at 5:01 p.m. Eastern to accept contract language allowing 'any lawful use' of its AI model.
Defense Secretary Pete Hegseth said he would designate Anthropic a supply-chain risk, a label that would bar military contractors from doing business with the company, and Anthropic said it will challenge any such designation in court.
Anthropic holds a $200 million Pentagon contract awarded last July, and its Claude model is deployed on classified Pentagon networks through a partnership with Palantir. Meanwhile, roughly 700,000 tech workers have urged firms to refuse the Pentagon's demands.
Anthropic said it will work to enable a smooth transition if offboarded and keep its models available as required, while the Pentagon warned it could invoke the Defense Production Act.
Analysis
Center-leaning sources frame this as a high-stakes clash between national security and politics by leading with Trump's dramatic Truth Social directive, highlighting the Pentagon's supply‑chain risk label, and using evaluative terms like 'setback' and 'apparent blow.' They prioritize official reactions (Pentagon, senator) and background concerns about weapons, shaping urgency and consequence.
Sources (47)
FAQ
What safeguards is Anthropic refusing to remove?
Anthropic is maintaining two key safeguards: a refusal to allow fully autonomous weapons systems where AI makes final battlefield targeting decisions without human involvement, and a ban on using its technology for mass domestic surveillance[1][4]. The company states these guardrails should not be removed even as the Pentagon demands 'any lawful use' of the AI without restrictions[1].
Why does Anthropic argue the supply-chain risk designation cannot apply to it?
Anthropic contends that the supply chain risk designation cannot legally apply to the company because statute 10 USC 3252 stipulates such designations can only apply to contracts directly involving the Pentagon[2]. The company argues that individual commercial customers and businesses with agreements with Anthropic would remain entirely unaffected by any such designation[2].
How long would it take the military to replace Claude?
According to multiple sources familiar with the situation, it could take three months or even longer for the U.S. military to regain access to comparable AI tools on its classified networks if the Pentagon blacklists Anthropic's Claude platform[3].
What role does the Defense Production Act play in the dispute?
Pentagon officials have threatened to invoke the Defense Production Act (DPA) to use Anthropic's product without the company's permission if an agreement cannot be reached[3]. Anthropic has noted that this threat contradicts the simultaneous supply-chain risk designation: one labels the company a security risk while the other implies Claude is essential to national security[4].
How have other AI companies fared with the Pentagon?
According to sources familiar with the situation, the Pentagon has reportedly accepted the safety protocols and limitations proposed by OpenAI, Anthropic's competitor, though no contract has yet been finalized[2]. Additionally, when Anthropic received its $200 million Pentagon contract in July 2025, it was awarded alongside three other U.S. frontier AI model makers: OpenAI, Google, and xAI[3].
History
This story does not have any previous versions.