OpenAI Hardware Leader Resigns Over Pentagon AI Deal

Caitlin Kalinowski resigned, citing a rushed Pentagon agreement and inadequate safeguards around surveillance and autonomous weapons.

Overview

A summary of the story's key points, verified across multiple sources.

1. Caitlin Kalinowski resigned from OpenAI on March 7, 2026, saying she stepped down on principle after the company announced a partnership with the U.S. Department of Defense.

2. CEO Sam Altman announced OpenAI's agreement with the Defense Department on February 27, 2026, saying the company would make its systems available inside secure Pentagon computing environments.

3. Kalinowski said policy guardrails were not sufficiently defined and warned against surveillance of Americans without judicial oversight and lethal autonomy without human authorization.

4. OpenAI said the agreement creates a workable path for responsible national security uses while reiterating its red lines of no domestic surveillance and no autonomous weapons. Kalinowski had joined OpenAI in November 2024.

5. Kalinowski said she will remain focused on building responsible physical AI, and OpenAI said it will continue engaging with employees, government, civil society, and communities around the world.

Written using shared reports from 5 sources.

Analysis

Compare how each side frames the story — including which facts they emphasize or leave out.

Center-leaning sources frame OpenAI's Pentagon deal as controversial and reputationally costly: they foreground a principled resignation, use evaluative terms like "controversial" and "rushed," and highlight consumer-backlash metrics. Their editorial choices prioritize dissent and consequences, while the company's defenses are treated as source content; quoted rebuttals are included but downplayed through placement and an emphasis on reputational damage.

FAQ

Dig deeper on this story with frequently asked questions.

Who is Caitlin Kalinowski?

Caitlin Kalinowski was the hardware leader at OpenAI. She joined the company in November 2024 and resigned on March 7, 2026.

Why did she resign?

She cited a rushed Pentagon agreement and insufficient policy guardrails against surveillance of Americans without judicial oversight and lethal autonomy without human authorization.

What safeguards does the agreement include?

The agreement prohibits domestic mass surveillance, requires human responsibility for any use of force including autonomous weapons, and relies on cloud deployment with OpenAI personnel in the loop.

How does Anthropic figure into the story?

After the Pentagon's negotiations with Anthropic fell through and Anthropic was designated a supply-chain risk, OpenAI quickly announced its own deal, including ethical guardrails that Anthropic had sought but the Pentagon refused.

How has OpenAI responded?

OpenAI states that the agreement provides strong, multi-layered protections beyond usage policies, that it retains control over safety, and that it is committed to engaging employees, government, and civil society.