Anthropic has formally challenged the Department of Defense’s assertion that the artificial intelligence company poses an "unacceptable risk to national security," submitting two sworn declarations to a California federal court. These filings, made late Friday afternoon, aim to counter the Pentagon’s narrative, which Anthropic argues is based on technical misunderstandings and claims that were never raised during extensive prior negotiations. The declarations are part of Anthropic’s ongoing lawsuit against the Department of Defense and precede a crucial hearing scheduled for Tuesday, March 24, before Judge Rita Lin in San Francisco.
The legal battle stems from a public declaration by President Trump and Defense Secretary Pete Hegseth in late February, announcing a severing of ties with Anthropic. This action followed the company’s refusal to grant unrestricted military access to its AI technology, a decision that ignited a significant dispute between the AI developer and a key government contractor.
Key Declarations and Contentions
The two individuals who provided sworn statements are Sarah Heck, Anthropic’s Head of Policy, and Thiyagu Ramasamy, the company’s Head of Public Sector. Their declarations offer detailed accounts intended to refute the government’s justifications for the supply-chain risk designation.
Sarah Heck’s Account: Challenging Misrepresentations
Sarah Heck, a former National Security Council official who served in the White House during the Obama administration before transitioning to roles at Stripe and then Anthropic, directly addresses what she identifies as a fundamental misrepresentation in the government’s legal filings. She asserts that the claim that Anthropic sought an "approval role over military operations" is entirely false. "At no time during Anthropic’s negotiations with the Department did I or any other Anthropic employee state that the company wanted that kind of role," Heck stated in her declaration.
Heck also highlights that the Pentagon’s concern regarding Anthropic’s potential to disable or alter its technology mid-operation was never a point of discussion during their months of negotiation. This particular concern, she contends, surfaced for the first time within the government’s court filings, leaving Anthropic with no prior opportunity to address it.
A particularly striking detail in Heck’s declaration concerns an email sent on March 4, exactly one day after the Pentagon formally finalized its supply-chain risk designation against Anthropic. That day, Under Secretary Emil Michael wrote to Anthropic CEO Dario Amodei that the two parties were "very close" to resolving the two issues the government now cites as evidence of Anthropic’s national security risk: the company’s stance on autonomous weapons and on mass surveillance of American citizens.
This email, attached as an exhibit to Heck’s declaration, provides a stark contrast to public statements made by government officials in the ensuing days. On March 5, Amodei issued a statement describing the company’s discussions with the Pentagon as "productive conversations." However, the following day, Under Secretary Michael posted on X (formerly Twitter) that "there is no active Department of War negotiation with Anthropic." A week later, Michael reiterated to CNBC that there was "no chance" of renewed talks.
Heck’s implication is clear: if Anthropic’s positions on autonomous weapons and mass surveillance were indeed the primary drivers of its designation as a national security threat, then the Pentagon’s own official, Under Secretary Michael, acknowledged near-alignment on these very issues immediately after the designation was finalized. While Heck stops short of accusing the government of using the designation as a bargaining chip, the timeline she presents invites that inference.
Thiyagu Ramasamy’s Technical Rebuttal
Thiyagu Ramasamy joined Anthropic in 2025 after six years at Amazon Web Services, where he worked on AI deployments for government clients, including in classified environments. At Anthropic, he built the team that has integrated the company’s Claude models into national security and defense work, notably overseeing the $200 million Pentagon contract announced last summer.
Ramasamy’s declaration directly challenges the government’s assertion that Anthropic could theoretically interfere with military operations by disabling its technology or altering its behavior. He argues that such a scenario is technically infeasible. According to Ramasamy, once the Claude model is deployed within a government-secured, "air-gapped" system managed by a third-party contractor, Anthropic relinquishes all access. There is no remote kill switch, no backdoor, and no mechanism for pushing unauthorized updates. He describes any notion of an "operational veto" as a fiction, explaining that any modification to the AI model would necessitate explicit approval and action by the Pentagon for installation.
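The deployment model Ramasamy describes — no remote update path, with the operator gating every installation — resembles a hash-allowlist check, in which nothing installs unless it matches a manifest the operator approved in advance. The sketch below is a hypothetical illustration of that pattern, not Anthropic’s or the Pentagon’s actual tooling:

```python
import hashlib
from pathlib import Path

# Hypothetical illustration of operator-gated installation in an
# air-gapped system: updates arrive offline, and the operator's own
# tooling decides whether to install them. The vendor has no channel
# to push, alter, or disable anything remotely.

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file on disk."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def operator_approves_install(artifact: Path, approved_manifest: dict[str, str]) -> bool:
    """Install only if the artifact's digest matches one the operator
    explicitly approved in advance; anything else is rejected."""
    expected = approved_manifest.get(artifact.name)
    return expected is not None and expected == sha256_of(artifact)
```

In this pattern the approval decision lives entirely on the operator’s side of the air gap, which is the crux of Ramasamy’s "no operational veto" argument.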
Furthermore, Ramasamy states that Anthropic has no visibility into the data that government users input into the system, nor does it possess any capability to extract such data. This directly counters potential concerns about data privacy and unauthorized access to sensitive military information.
Ramasamy also addresses the government’s claim that Anthropic’s hiring of foreign nationals constitutes a security risk. He points out that all Anthropic employees working on sensitive projects have undergone U.S. government security clearance vetting, the same rigorous background check process required for access to classified information. He further emphasizes in his declaration that, "to my knowledge," Anthropic is unique among AI companies in that its cleared personnel are directly involved in building the AI models intended for use in classified environments.
The Legal Framework and Background
Anthropic’s lawsuit contends that the supply-chain risk designation, which is reportedly the first of its kind applied to an American company, constitutes government retaliation for the company’s public stances on AI safety. The company argues that this action violates its First Amendment rights.
The Department of Defense, in a comprehensive 40-page filing submitted earlier in the week, has vehemently rejected this characterization. The Pentagon maintains that Anthropic’s refusal to permit all lawful military uses of its technology was a business decision, not an exercise of protected speech. The government asserts that the designation was a straightforward national security determination, devoid of any punitive intent or connection to the company’s expressed views on AI safety.
Chronology of the Dispute
To understand the current legal confrontation, a chronological overview of key events is essential:
- Late 2025 – Early 2026: Ongoing discussions and negotiations between Anthropic and the Department of Defense regarding the deployment and usage terms of Anthropic’s AI technology for military applications. These discussions reportedly covered areas such as autonomous weapons systems and data surveillance protocols.
- February 24, 2026: President Trump and Defense Secretary Pete Hegseth publicly announce that the U.S. government is cutting ties with Anthropic. The stated reason is the company’s refusal to allow unrestricted military use of its AI. This public announcement marks a significant escalation of the dispute.
- March 4, 2026: Under Secretary Emil Michael of the Department of Defense emails Anthropic CEO Dario Amodei, indicating that the two parties are "very close" to resolving key disagreements concerning autonomous weapons and mass surveillance – issues later cited by the government as national security concerns.
- March 5, 2026: Anthropic CEO Dario Amodei publishes a statement describing the company’s conversations with the Pentagon as "productive."
- March 6, 2026: Under Secretary Emil Michael posts on X, stating, "there is no active Department of War negotiation with Anthropic."
- Mid-March 2026 (approx. March 13): Under Secretary Michael tells CNBC that there is "no chance" of renewed talks with Anthropic.
- March 18, 2026: The Department of Defense files a 40-page response to Anthropic’s lawsuit, detailing its rationale for the supply-chain risk designation and rejecting the company’s First Amendment claims.
- March 20, 2026 (late Friday afternoon): Anthropic submits two sworn declarations from Sarah Heck and Thiyagu Ramasamy to the California federal court, directly contesting the Pentagon’s national security claims and providing technical and procedural counterarguments.
- March 24, 2026: A hearing is scheduled before Judge Rita Lin in San Francisco to address the ongoing dispute.
Broader Implications and Analysis
The legal battle between Anthropic and the Department of Defense carries significant implications for the burgeoning field of AI development and its integration into the national security apparatus.
National Security vs. AI Ethics: The core of the dispute highlights the inherent tension between the imperative for national security and the ethical considerations surrounding advanced AI. Anthropic, like many AI companies, has articulated principles regarding the responsible development and deployment of its technology, particularly concerning lethal autonomous weapons systems and mass surveillance. The Pentagon, on the other hand, operates under a mandate to maintain military superiority and protect national interests, which can necessitate broad access and control over deployed technologies.
The Precedent of Supply-Chain Risk Designation: The fact that the Pentagon has applied a supply-chain risk designation to an American company, rather than a foreign one, sets a new precedent. This move could signal a more stringent approach by the U.S. government to vetting AI vendors and ensuring their technologies align with national security objectives, potentially impacting future contract awards and partnerships.
Technical Feasibility and Control: Ramasamy’s declarations offer a technical counterpoint to the government’s perceived security risks. His assertions about the lack of remote access, kill switches, or unauthorized update mechanisms challenge the premise that Anthropic could unilaterally disrupt military operations. If substantiated, these technical details could undermine the government’s specific claims of operational risk.
First Amendment and Government Retaliation: Anthropic’s lawsuit, which frames the designation as First Amendment retaliation, raises crucial questions about the government’s ability to penalize companies for their public statements on AI ethics. If the court sides with Anthropic, it could establish stronger protections for companies expressing concerns about the societal impact of their technologies, even when those concerns conflict with immediate government objectives.
The Role of Negotiation and Transparency: The discrepancy between the Under Secretary’s email on March 4 and his subsequent public statements and filings is a focal point of Anthropic’s argument. It raises questions about the transparency and good faith of the negotiation process, suggesting that the government may have shifted its narrative or strategy after the designation was already in place. This aspect could be critical in the court’s assessment of the government’s motivations.
The upcoming hearing before Judge Lin is expected to be pivotal in determining the immediate future of this high-stakes legal contest. The court’s ruling could shape the landscape of AI development, government contracting, and the ongoing dialogue surrounding responsible AI deployment in critical sectors. The case underscores the complex challenges of balancing innovation, national security, and ethical considerations in the rapidly evolving domain of artificial intelligence.
