San Francisco, CA | October 13-15, 2026
The burgeoning rivalry between leading artificial intelligence companies Anthropic and OpenAI has escalated into a public dispute centered on a recent agreement between OpenAI and the U.S. Department of Defense (DoD). Dario Amodei, co-founder and CEO of Anthropic, has sharply criticized OpenAI’s partnership with the military, labeling it "safety theater" in a leaked internal memo to his staff. The memo, first reported by The Information, reveals Amodei’s strong disapproval of OpenAI’s decision to engage with the DoD under terms Anthropic had deemed ethically problematic.
"The main reason [OpenAI] accepted [the DoD’s deal] and we did not is that they cared about placating employees, and we actually cared about preventing abuses," Amodei wrote in the memo, drawing a stark contrast between the two companies’ priorities. This statement underscores a fundamental divergence in how these AI giants approach the ethical implications of their technology, particularly when it intersects with national security and defense applications.
The controversy stems from the DoD’s pursuit of access to advanced AI technologies for its operations. Last week, despite holding an existing $200 million contract with the military, Anthropic failed to reach a new agreement. The AI safety-focused company insisted on explicit assurances from the DoD that its technology would not be employed for domestic mass surveillance or the development of autonomous weaponry. Anthropic’s stance reflects its core mission to develop AI systems that are helpful, honest, and harmless, prioritizing safeguards against potential misuse.
In contrast, the DoD, reportedly referred to as the Department of War under the Trump administration, swiftly secured a deal with OpenAI. Sam Altman, CEO of OpenAI, announced the partnership, asserting that it would include safeguards addressing the very red lines Anthropic had drawn. Altman stated in a public post that his company’s new defense contract would include protections to prevent the misuse of its AI for purposes such as domestic surveillance or autonomous weapons.
However, Amodei vehemently disputed these claims in his internal communication. He referred to OpenAI’s public messaging as "straight up lies" and accused Altman of falsely presenting himself as a "peacemaker and dealmaker." This direct challenge suggests that Amodei believes OpenAI’s assurances regarding the DoD contract are disingenuous and that the company may be compromising its ethical principles for the sake of a lucrative military partnership.
The crux of the disagreement lies in the interpretation of "lawful use." Anthropic specifically took issue with the DoD’s insistence that the company’s AI be available for "any lawful use." In a statement, Anthropic elaborated on its concerns, highlighting the ambiguity and potential for expansive interpretation inherent in such a sweeping clause. OpenAI, in its own blog post detailing the agreement, stated that its contract also allows for the use of its AI systems for "all lawful purposes." However, OpenAI further clarified that, "It was clear in our interaction that the DoW considers mass domestic surveillance illegal and was not planning to use it for this purpose. We ensured that the fact that it is not covered under lawful use was made explicit in our contract."
This distinction, however, has been met with skepticism. Critics have pointed out that legal definitions and statutes are subject to change, particularly in the rapidly evolving landscape of technology and national security. What might be considered illegal today could potentially be redefined as permissible in the future, leaving a loophole for potential misuse. This raises concerns about the long-term efficacy of such contractual stipulations in preventing the weaponization or surveillance applications of advanced AI.
The public reaction to the diverging stances appears to be leaning in Anthropic’s favor. Following OpenAI’s announcement of the DoD deal, there was a significant surge in ChatGPT uninstalls, reportedly jumping by 295%. This spike suggests a public backlash against OpenAI’s decision and a potential erosion of trust among its user base. Conversely, Amodei noted that Anthropic saw an increase in its own app store rankings, reaching #2.
"I think this attempted spin/gaslighting is not working very well on the general public or the media, where people mostly see OpenAI’s deal with the DoW as sketchy or suspicious, and see us as the heroes (we’re #2 in the App Store now!)," Amodei wrote in his memo. While he believed the public and media were seeing through OpenAI’s narrative, his stated worry was whether these reassurances would hold sway with OpenAI employees themselves. This suggests a strategic effort by Anthropic to influence internal sentiment within its competitor, potentially aiming to foster dissent or ethical reevaluation among OpenAI’s workforce.
A Timeline of Escalating Tensions
The disagreement between Anthropic and OpenAI over defense contracts did not emerge overnight. It is part of a broader narrative of how AI companies are navigating the complex ethical terrain of military applications.
Recent Past (Indicative Timeline):
- Early 2024: Discussions between AI companies and the Department of Defense regarding potential collaborations and technology access likely intensified as military entities sought to leverage advanced AI capabilities.
- Mid-2024: Anthropic and the DoD engage in negotiations for continued and expanded access to Anthropic’s AI technologies, building upon an existing $200 million contract. During these discussions, Anthropic raises specific ethical red lines concerning domestic surveillance and autonomous weapons.
- Late 2024: Negotiations between Anthropic and the DoD reach an impasse. Anthropic declines to proceed with an agreement without explicit contractual guarantees against the aforementioned uses.
- Simultaneously (Late 2024): OpenAI enters into negotiations with the DoD. Reports suggest these discussions were more amenable to the DoD’s terms regarding broad access.
- Early 2025: OpenAI announces its agreement with the Department of Defense. Sam Altman publicly shares details, emphasizing the inclusion of safeguards.
- Following OpenAI’s Announcement: Dario Amodei, CEO of Anthropic, circulates an internal memo to his staff, criticizing OpenAI’s deal as "safety theater" and accusing OpenAI of prioritizing employee placation over genuine ethical concerns. The memo is subsequently leaked and reported by The Information.
- Post-Leak: Public reaction begins to surface, with reports of increased ChatGPT uninstalls. Anthropic publicly reinforces its commitment to ethical AI development and its refusal to compromise on its safety principles.
The Ethical Chasm in AI Development
The core of this dispute highlights a fundamental philosophical difference in the approach to Artificial Intelligence development, particularly concerning its dual-use potential. Anthropic, founded by former OpenAI researchers, has consistently positioned itself as a leader in AI safety and ethics. Their explicit mission is to ensure that AI systems are developed and deployed in a manner that benefits humanity and mitigates existential risks. This commitment translates into a rigorous approach to partnerships, especially with entities that have the capacity for significant destructive or intrusive applications.
OpenAI, while also expressing a commitment to AI safety, appears to operate with a more pragmatic and, some might argue, more commercially driven strategy. The company’s history includes a shift from its initial non-profit status to a capped-profit model, which has led to increased pressure to secure substantial revenue streams. The DoD contract represents a significant financial and strategic opportunity, and OpenAI’s willingness to engage, even with perceived ethical compromises, can be seen through this lens.
The DoD’s perspective is understandably focused on national security and technological superiority. In a rapidly evolving geopolitical landscape, the military seeks to harness the most advanced technologies to maintain a strategic advantage. The "any lawful use" clause, while seemingly broad, reflects a desire for flexibility and adaptability in military applications, a stance that clashes directly with Anthropic’s demand for rigid ethical boundaries.
Supporting Data and Public Sentiment
The reported 295% surge in ChatGPT uninstalls following OpenAI’s DoD deal serves as a quantifiable indicator of public disapproval. This figure suggests that a significant segment of the user base perceives OpenAI’s actions as a betrayal of its stated ethical principles, or as a move toward militarization that they find objectionable. The fact that Anthropic’s app climbed to #2 in the App Store amid the controversy further supports the notion that public sentiment is aligning with Anthropic’s ethical stance.
This phenomenon is not isolated. Public discourse surrounding AI often grapples with the potential for autonomous weapons systems, the erosion of privacy through advanced surveillance technologies, and the concentration of powerful AI capabilities in the hands of a few entities. The OpenAI-DoD deal appears to have tapped into these existing public anxieties, amplifying concerns about the direction of AI development and its potential societal impact.
Broader Implications and Future Outlook
The conflict between Anthropic and OpenAI over the DoD contract has far-reaching implications for the future of AI governance, regulation, and public trust.
- AI Safety Standards: The dispute could push for more standardized and legally binding AI safety protocols, particularly for defense applications. If companies cannot agree on ethical frameworks, external regulatory bodies may step in to define them.
- Competitive Landscape: This incident further solidifies the perception of a bifurcated AI industry – one focused on uncompromising safety and another prioritizing rapid deployment and commercialization. This could influence investment trends and talent acquisition in the AI sector.
- Public Perception and Trust: The ongoing debate shapes how the public views AI companies and their role in society. A loss of public trust can have significant consequences for user adoption, regulatory scrutiny, and the overall acceptance of AI technologies.
- Government Procurement: The DoD’s procurement process for AI technologies may come under increased scrutiny. Policymakers might need to develop clearer guidelines and oversight mechanisms to ensure that AI acquisitions align with ethical and societal values.
The statements from both Amodei and Altman, though representing opposing viewpoints, reflect a shared awareness of the significant public and media attention surrounding these developments. Amodei’s specific concern about influencing OpenAI employees suggests a recognition of the internal ethical debates that likely occur within such organizations when faced with high-stakes decisions.
As the field of artificial intelligence continues its exponential growth, the ethical considerations surrounding its development and deployment will become increasingly critical. The public disagreement between Anthropic and OpenAI serves as a salient reminder of the complex challenges involved in balancing technological innovation with fundamental human values and safety. The outcome of this debate could set precedents for how AI companies interact with governments and how society navigates the profound implications of advanced artificial intelligence.
