The landscape of AI application usage in the United States shifted sharply over the weekend of February 27-28, 2026, with a surge in uninstalls of OpenAI’s ChatGPT mobile application and a corresponding spike in downloads of its competitor, Anthropic’s Claude. These shifts in user behavior coincided with the public announcement of OpenAI’s substantial partnership with the Department of Defense (DoD), now rebranded as the Department of War under the Trump administration, and Anthropic’s decision to decline such a collaboration.
Data gathered by market intelligence provider Sensor Tower shows a 295% day-over-day increase in U.S. app uninstalls for ChatGPT on Saturday, February 28th, a stark deviation from the app’s typical pattern: over the preceding thirty days, uninstalls had changed by an average of just 9% day-over-day. The timing of the surge suggests consumers were reacting swiftly and decisively to the news of OpenAI’s lucrative agreement with the nation’s defense apparatus.
Conversely, Anthropic’s AI chatbot, Claude, saw a substantial uptick in user interest. U.S. downloads of Claude climbed 37% on Friday, February 27th, and accelerated to a 51% day-over-day increase on Saturday, February 28th. The surge followed Anthropic’s public declaration that it would not partner with the U.S. defense department. The company said it was unable to reconcile the deal’s terms with its concerns that AI technologies could be weaponized for domestic surveillance of American citizens or deployed in fully autonomous weaponry, a domain Anthropic believes AI is not yet safely equipped to handle. The Sensor Tower data suggests that a significant segment of consumers found Anthropic’s ethical stance more aligned with their values.
The fallout for OpenAI extended beyond just uninstalls. The news of its Department of War partnership also negatively impacted ChatGPT’s download trajectory. U.S. downloads for the ChatGPT app saw a decline of 13% day-over-day on Saturday, immediately following the public disclosure of the deal. This downward trend continued into Sunday, with downloads falling an additional 5% compared to the previous day. This represents a sharp reversal from the app’s performance prior to the announcement, when it had enjoyed a healthy 14% day-over-day download growth on Friday.
Claude’s Ascent to Prominence
The user exodus from ChatGPT and the influx towards Claude were not merely abstract metrics; they were visibly reflected in the application stores. Claude’s position on the U.S. App Store experienced a meteoric rise, achieving the coveted No. 1 spot on Saturday, February 28th. As of Monday, March 2nd, it maintained this leading position. This represents a significant climb of over 20 ranks compared to its standing approximately a week prior, on February 22nd, 2026.
Public sentiment was further amplified in the apps’ rating sections. Sensor Tower reported a 775% day-over-day surge in 1-star reviews for ChatGPT on Saturday; the count then doubled, rising another 100% day-over-day on Sunday. Over the same period, the app’s 5-star reviews declined by 50%, indicating widespread user dissatisfaction.
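To see how these successive day-over-day percentages compound, here is a minimal sketch. The starting count of 100 one-star reviews is a hypothetical baseline chosen for illustration; Sensor Tower reported only percentage changes, not absolute counts.

```python
def compound(baseline: float, pct_changes: list[float]) -> float:
    """Apply a sequence of day-over-day percentage changes to a baseline count."""
    count = baseline
    for pct in pct_changes:
        count *= 1 + pct / 100  # e.g. +775% multiplies by 8.75, +100% by 2
    return count

# Hypothetical baseline of 100 one-star reviews, then the reported
# 775% Saturday surge followed by a further 100% rise on Sunday:
reviews = compound(100, [775, 100])
print(reviews)  # 1750.0, i.e. 17.5x the baseline over two days
```

The same function illustrates the download decline reported above: a 13% drop followed by a 5% drop leaves roughly 82.65% of the original volume, not 82%, because day-over-day changes multiply rather than add.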
Independent Verification of Market Shifts
The findings from Sensor Tower were corroborated by other independent market intelligence firms, providing a comprehensive view of the AI app market’s dramatic reorientation. Appfigures, another prominent data analytics company, noted that for the first time, Claude’s total daily U.S. downloads on Saturday surpassed those of ChatGPT. Appfigures’ estimates for Claude’s download growth on Saturday were even more pronounced than Sensor Tower’s, placing the day-over-day increase at a remarkable 88%.
This surge in popularity has propelled Claude to the forefront of mobile application charts globally. Beyond its dominance in the U.S., Claude has secured the No. 1 free iPhone app ranking in six additional countries, including Belgium, Canada, Germany, Luxembourg, Norway, and Switzerland.
Similarweb, a third market intelligence provider, also observed a significant increase in Claude’s U.S. downloads over the past week, estimating them at roughly 20 times January’s levels. While Similarweb cautions that this growth may be influenced by factors beyond the recent political controversy, the timing points to a strong connection.

The Genesis of the Controversy: OpenAI’s Department of War Deal
The catalyst for these user reactions was the announcement of OpenAI’s collaboration with the U.S. Department of Defense. While the specific details of the agreement remain under wraps, reports indicate a significant financial commitment from the DoD to OpenAI. This partnership, framed by the administration as a critical step in modernizing national defense capabilities through advanced artificial intelligence, has ignited a fierce debate about the ethical implications of AI in warfare and surveillance.
The rebranding of the Department of Defense to the Department of War by the Trump administration itself has added a layer of symbolic weight to these developments. Critics argue that this nomenclature change signals a more aggressive and potentially less transparent approach to military technology acquisition and deployment, raising alarm bells among privacy advocates and those concerned about the future of AI ethics.
Anthropic’s Principled Stand
In stark contrast to OpenAI’s decision, Anthropic, a leading AI research company with a strong emphasis on AI safety and ethics, publicly declared its refusal to partner with the U.S. defense department. This decision, articulated by Anthropic’s leadership, stemmed from a principled commitment to ensuring that AI technologies are developed and deployed responsibly.
The company’s statement highlighted specific concerns regarding the potential misuse of AI for mass surveillance and the development of autonomous weapons systems. Anthropic’s stance, which prioritizes human oversight and ethical considerations over potentially lucrative defense contracts, has resonated deeply with a segment of the public concerned about the unchecked advancement of AI capabilities. This principled position has not only garnered positive public opinion but has also translated into tangible gains in market share for their AI product.
Broader Implications and Future Outlook
The events of late February 2026 underscore a growing public awareness and sensitivity regarding the ethical dimensions of artificial intelligence. As AI technologies become more integrated into everyday life and increasingly find applications in critical sectors like national security, consumer choices are becoming more informed and values-driven.
The dramatic user shift suggests that a significant portion of the public is actively seeking AI solutions that align with their ethical frameworks, particularly concerning privacy, autonomy, and the potential for misuse. This presents a critical juncture for AI development companies. Those that can demonstrably prioritize safety, transparency, and ethical considerations are likely to gain favor and market share, while those perceived as compromising these principles may face significant backlash.
The long-term implications of this trend are substantial. It signals a potential shift in the power dynamics between AI developers, government entities, and the public. Consumer sentiment, amplified by accessible data and rapid information dissemination, is proving to be a potent force in shaping the trajectory of AI adoption. Companies like OpenAI may need to re-evaluate their strategies to address public concerns and demonstrate a commitment to responsible AI development, especially when engaging with sensitive government contracts.
Furthermore, the success of Anthropic’s Claude serves as a compelling case study for the industry. By articulating a clear ethical stance and aligning product development with those principles, the company has not only navigated a complex geopolitical landscape but has also emerged as a significant player in the AI market. This suggests that a focus on ethical AI is not merely a matter of corporate social responsibility but can also be a significant competitive advantage.
The performance of these AI applications on the global stage also indicates that these ethical considerations are not confined to the U.S. market. As AI continues its global proliferation, the debate surrounding its responsible development and deployment will undoubtedly intensify. The user reactions observed over the past weekend may be an early indicator of a broader, global trend towards demanding greater accountability and ethical clarity from the companies shaping the future of artificial intelligence. The coming months will likely reveal whether this user-driven shift will lead to lasting changes in how AI is developed, regulated, and adopted across various sectors.
