Microsoft Asks Court To Block Pentagon Blacklist Of Anthropic
Microsoft urged a judge to temporarily block the Pentagon's supply-chain designation of Anthropic as legal challenges mount and the Defense Department orders removal of its AI.
Overview
On Tuesday, Microsoft filed a motion in U.S. District Court in San Francisco seeking a temporary restraining order to block the Pentagon's designation of Anthropic as a supply chain risk across all existing contracts.
The Pentagon designated Anthropic a supply chain risk; an internal memo dated March 6 ordered military commanders to remove Anthropic AI products from key systems within 180 days.
Anthropic filed lawsuits on Monday alleging unlawful retaliation and constitutional violations. Microsoft and dozens of tech employees filed amicus briefs supporting Anthropic, while White House spokesperson Liz Huston criticized the company.
Anthropic, founded in 2021, is said to have reached a $380 billion valuation and had been the only AI provider on the Pentagon's classified cloud; Microsoft pledged up to $5 billion in November.
A federal court will consider Anthropic's legal challenge and Microsoft's restraining-order request. Meanwhile, the Pentagon is expanding work with Google on GenAI.mil, and Kirsten Davies said exemptions require approved mitigation plans.
Analysis
Center-leaning sources frame the story sympathetically to Anthropic by foregrounding its lawsuit and constitutional claims while using charged terms like "blacklist" and "retaliation." Editorial choices favor Anthropic’s voice, minimize government rationale or independent expert context, and emphasize procedural and civil‑liberties angles over national‑security justification.
FAQ
Why did the Pentagon designate Anthropic a supply chain risk?
The Pentagon designated Anthropic a supply chain risk because the company restricted the military's use of its Claude AI model, refusing applications like domestic surveillance or autonomous weapons, which the Pentagon views as interfering with lawful military use of technology.[1]
How have Anthropic and its allies responded?
Anthropic filed lawsuits alleging unlawful retaliation and constitutional violations; Microsoft asked a U.S. District Court for a temporary restraining order to block the designation; and Microsoft and tech employees filed amicus briefs supporting Anthropic.[story]
What does the designation require of contractors?
The designation requires defense contractors to remove Anthropic AI products within 180 days and certify non-use in military contracts, and it may compel companies doing business with the U.S. military or government to sever ties with Anthropic.[story]
What is at stake for Anthropic?
Anthropic, founded in 2021 with a $380 billion valuation, was the only AI provider on the Pentagon's classified cloud; Microsoft pledged up to $5 billion in November; the Pentagon is transitioning to Google while allowing six months for phase-out.[story]
What actions has the administration taken?
The Trump administration followed through on its threats: Trump ordered federal agencies to stop using Anthropic, and Defense Secretary Pete Hegseth announced the designation and a six-month transition period.