Sam Altman, chief executive officer of OpenAI Inc., acknowledged Monday that the company "shouldn’t have rushed" its recent contract with the U.S. Department of Defense (DoD), announcing significant revisions to the deal to incorporate explicit prohibitions against domestic surveillance and use by intelligence agencies. The move comes in the wake of a tumultuous weekend marked by widespread public backlash and a reported exodus of users from OpenAI’s flagship ChatGPT platform, underscoring the delicate balance between technological innovation, national security, and public trust.
Altman, who appeared during a media tour of the Stargate AI data center in Abilene, Texas, on September 23, 2025, a facility emblematic of the escalating scale of AI infrastructure, shared what he described as a repost of an internal memo on X. In the statement, he outlined crucial amendments to the contract, introducing new language designed to align the deal with OpenAI’s stated principles on surveillance and ethical AI deployment. The revisions are a direct response to the swift and often scathing criticism leveled at OpenAI after the initial announcement of the DoD partnership.
Defining "Red Lines": New Contractual Safeguards
Central to the revised agreement is a clear stipulation that "the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals." This clause is further buttressed by a departmental understanding that "the Department understands the limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information." This language aims to address profound concerns from civil liberties advocates and a segment of the public wary of the potential for powerful AI tools to be weaponized against citizens.
Furthermore, Altman stated that the Department of Defense had explicitly affirmed that OpenAI’s advanced tools would not be utilized by U.S. intelligence agencies, including the National Security Agency (NSA). This commitment is critical, as the NSA’s broad surveillance capabilities have long been a subject of intense debate and legal scrutiny. "There are many things the technology just isn’t ready for, and many areas we don’t yet understand the tradeoffs required for safety," Altman conceded, indicating a commitment to collaborate with the Pentagon on developing robust technical safeguards to prevent misuse and ensure responsible deployment. This acknowledgment highlights the nascent stage of AI integration into sensitive government operations and the ongoing learning curve for both developers and users.
A Weekend of Controversy: The Rushed Deal and Its Fallout
The rapid sequence of events leading to Altman’s mea culpa began on Friday, when the ChatGPT maker unveiled its new deal with the Defense Department. The announcement landed just hours after U.S. President Donald Trump issued a directive instructing federal agencies to stop using tools developed by OpenAI’s chief rival, Anthropic. Adding another layer of geopolitical tension, the deal was publicized hours before Washington initiated strikes on Iran, further fueling public speculation about the timing and motivations behind the partnership.
Altman’s admission that he "shouldn’t have rushed" the deal out on Friday underscores the intense pressure and complex geopolitical backdrop against which AI companies now operate. "We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy," he wrote in his post. The candid assessment reflects a recognition of the reputational damage the company incurred over the weekend, as well as the perception that OpenAI had exploited Anthropic’s recent troubles for its own gain. Reports indicated a substantial surge in ChatGPT uninstalls, with some analyses citing a 295% increase in users abandoning the platform for alternatives such as Anthropic’s Claude on app stores, a sign of how directly public trust now translates into commercial consequences in the rapidly evolving AI landscape.
The Anthropic Precedent: A Tale of Two AI Labs
OpenAI’s deal with the Pentagon did not unfold in a vacuum; it was cast against the dramatic backdrop of a public feud between Anthropic and Washington that culminated in the collapse of their partnership. Anthropic, founded in 2021 by former OpenAI staff and researchers, including Dario Amodei, who departed over disagreements about OpenAI’s strategic direction, has consistently marketed itself as a "safety-first" alternative. That philosophical divergence has shaped its approach to military engagement.

Following an initial agreement last year, Anthropic became the first AI lab to deploy its models across the Defense Department’s classified network, a significant milestone in government-AI collaboration. The relationship began to fray, however, after it was revealed that Anthropic’s Claude AI system had been used by the U.S. military in a January raid to capture Venezuelan President Nicolás Maduro. While Anthropic did not publicly object to that specific use, the incident seemingly hardened its resolve to establish clear ethical boundaries for its technology.
Subsequently, Anthropic sought explicit guarantees from the DoD that its tools would not be deployed for domestic surveillance within the U.S. or for the development and operation of autonomous weapons systems without direct human control. These "red lines," as they came to be known, mirrored concerns widely shared by AI ethicists and civil society organizations globally. Despite these efforts, talks between Anthropic and the Defense Department ultimately broke down. Defense Secretary Pete Hegseth announced on Friday that Anthropic would be designated a "supply-chain threat," a severe classification typically reserved for entities posing national security risks, effectively sidelining the company from future government contracts.
Inconsistencies and Criticisms: DoD’s Shifting Stance
The timing and terms of OpenAI’s agreement, especially given Altman’s prior assurance to employees in a Thursday memo that OpenAI shared the same "red lines" as Anthropic, raised immediate questions about the DoD’s seemingly inconsistent approach. While government officials had reportedly criticized Anthropic for months for allegedly being "overly concerned with AI safety," the Pentagon appeared to accommodate OpenAI’s restrictions almost immediately. This disparity has fueled speculation and calls for greater transparency regarding the DoD’s AI procurement policies and its criteria for evaluating ethical safeguards.
Observers noted the stark contrast: Anthropic, which had sought to enshrine strong ethical constraints, was cast aside as a security risk, while OpenAI, after initially rushing a deal, was granted similar terms following public pressure. This raises critical questions about whether the DoD’s stance on ethical safeguards is genuinely evolving or is applied selectively based on corporate leverage and political expediency. The Department of Defense has yet to issue a comprehensive public statement clarifying the rationale behind these differing outcomes, leaving many to infer that the competitive dynamics of the AI industry and the urgency of national security priorities played a significant role.
Broader Implications: Trust, Ethics, and the Future of AI Governance
The weekend’s events have far-reaching implications for the burgeoning AI industry, government-tech relations, and the future of AI governance. The public’s swift reaction to OpenAI’s initial deal, manifested in the reported surge of ChatGPT uninstalls, serves as a powerful reminder that public trust is a fragile commodity, particularly when advanced technologies intersect with sensitive issues like national security and individual liberties. This "trust deficit" can have tangible commercial consequences, forcing even industry leaders to re-evaluate their strategies.
Altman’s unexpected advocacy for Anthropic in his Monday statement — "In my conversations over the weekend, I reiterated that Anthropic should not be designated as a [supply chain risk], and that we hope the [Department of Defense] offers them the same terms we’ve agreed to" — suggests a recognition of the broader ethical responsibilities shared across the AI development community. This gesture, while potentially strategic, also highlights the interconnectedness of major AI players and the collective interest in maintaining a baseline of ethical conduct to prevent a race to the bottom.
From a regulatory perspective, this episode underscores the urgent need for clearer, more standardized frameworks for the ethical deployment of AI in military and intelligence contexts. The ambiguity surrounding "red lines," the definition of "responsible AI," and the enforcement mechanisms for preventing misuse demand robust policy solutions. As AI capabilities continue to advance, the potential for dual-use technologies — those that can serve both beneficial and harmful purposes — will only intensify, making the establishment of clear guardrails paramount.
Moreover, the events signal a potential shift in the power dynamics between AI developers and government agencies. While governments are eager to leverage cutting-edge AI for strategic advantage, the public, and increasingly the developers themselves, are demanding accountability and ethical oversight. This dynamic could lead to more collaborative efforts to define ethical AI standards, but also to greater friction if those standards are perceived as overly restrictive or inconsistent.
The saga between OpenAI, Anthropic, and the Pentagon is a critical case study in the complex ethical, political, and commercial challenges inherent in integrating powerful AI into the fabric of national security. It emphasizes that transparency, clear communication, and a genuine commitment to ethical principles are not merely corporate buzzwords but essential pillars for fostering public trust and ensuring the responsible advancement of artificial intelligence for the betterment of society. The revisions announced by OpenAI may represent a step towards regaining that trust, but the broader debate over AI’s role in defense and surveillance is far from over.
