The internal fallout from OpenAI’s contract with the US Department of Defense continues to unfold. Caitlin Kalinowski, who oversaw hardware within the company’s robotics division, announced on X on March 8, 2026 that she was leaving OpenAI, directly tying her decision to the way the deal was made.
Kalinowski, who worked at Meta before joining OpenAI at the end of 2024, did not mince words. In her post, she wrote that “surveillance of Americans without judicial oversight and lethal autonomy without human authorization are frontiers that deserve more thought than they have received.” Replying to another comment, the former executive added that “the announcement was rushed, without defined protective measures,” framing the issue primarily as one of company governance.
OpenAI confirmed her resignation and said in a statement to Engadget that the company understands that people have “strong views” on these issues, and that it will continue to hold discussions with relevant parties. In the same statement, the company disputed the substance of Kalinowski’s objections, arguing that its agreement with the Pentagon “creates an operational pathway for the responsible use of artificial intelligence for national security purposes,” with clearly defined red lines: no domestic surveillance and no autonomous weapons. OpenAI also said it has no plans to name a replacement for Kalinowski.
The resignation extends the series of upheavals the contract has caused both inside and outside the company. The dispute began when Anthropic rejected the US government’s requests to use its Claude artificial intelligence for mass surveillance of citizens and in autonomous weapons, after which OpenAI stepped in as supplier. The public reaction was swift and harsh, followed by a wave of ChatGPT subscription cancellations. Even CEO Sam Altman was forced to state publicly that the contract with the Department of Defense would be amended to explicitly prohibit spying on Americans, which in itself speaks to how loosely the original version of the agreement was drawn.
Kalinowski’s resignation carries particular symbolic weight because it comes from the heart of the company, not from outside critics or users. It is not just an ethical objection but also a management alarm: if executives working directly on a company’s most sensitive technologies feel they have been bypassed at a crucial moment, the question becomes how well OpenAI can actually deliver on the promises it makes to regulators and the public about the responsible use of artificial intelligence.