A recent court filing reveals a sharp contradiction in the Pentagon’s justification for designating AI company Anthropic as a national security risk. Despite publicly cutting ties with Anthropic over concerns about its technology, internal communications show the Defense Department believed the two sides were “very close” to alignment even as it finalized the designation. This discrepancy raises questions about whether the move was based on genuine security concerns or political leverage.
Key Findings from Court Declarations
Anthropic has submitted sworn declarations from two key executives, Sarah Heck (Head of Policy) and Thiyagu Ramasamy (Head of Public Sector), challenging the Pentagon’s claims. The filings, submitted ahead of a court hearing on March 24, assert that the government’s case rests on misunderstandings and on accusations never raised during prior negotiations.
- No Demand for Operational Control: According to Heck, Anthropic never sought approval over military operations, a central claim in the government’s filings.
- Unraised Concerns: The Pentagon’s concern about Anthropic potentially disabling or altering its technology mid-operation was not discussed during negotiations but surfaced only in court filings, leaving Anthropic no chance to respond.
- Contradictory Signals: An email from Under Secretary Emil Michael to Anthropic CEO Dario Amodei on March 4 indicated the two sides were nearly aligned on key issues, despite public statements from Michael in the days that followed denying any active negotiations.
- Technical Limitations: Ramasamy, an expert in AI deployments for government customers, states that once Anthropic’s AI models are deployed in secure environments, the company has no remote access to or control over them, contradicting claims of a “kill switch” or backdoor.
The Timeline of Events
The dispute escalated in late February when President Trump and Defense Secretary Pete Hegseth publicly announced the end of ties with Anthropic after the company refused unrestricted military use of its AI. However, internal communications suggest a different narrative. Just one day after the Pentagon finalized its supply-chain risk designation against Anthropic, Under Secretary Michael emailed Amodei to say the two sides were “very close” on the two issues the government now cites as evidence that Anthropic is a national security threat: its positions on autonomous weapons and mass surveillance of Americans.
Legal Implications
Anthropic’s lawsuit argues that the supply-chain risk designation – the first ever applied to an American company – amounts to government retaliation for the company’s publicly stated views on AI safety, in violation of the First Amendment. The case highlights growing tensions between the government and AI developers over control and ethical considerations in the rapidly evolving field of artificial intelligence.
The Pentagon’s actions suggest a willingness to use regulatory pressure to force compliance from AI companies, raising broader questions about the balance between national security and free speech in the digital age.