In a dramatic and swift move that reverberated through the technology sector and Washington, D.C., the Trump administration announced Friday afternoon its decision to sever ties with Anthropic, a prominent San Francisco-based artificial intelligence company. Defense Secretary Pete Hegseth invoked a national security law to blacklist the company, a decision reportedly stemming from Anthropic’s refusal to allow its advanced AI technologies to be deployed for mass surveillance of U.S. citizens or for autonomous armed drones capable of lethal targeting without human oversight.
The abrupt action carries significant financial and strategic implications for Anthropic, a company founded in 2021 by Dario Amodei. The severing of ties could result in the forfeiture of a contract valued at up to $200 million and potentially bar the company from engaging with other defense contractors. This move was further amplified when President Trump issued a directive via Truth Social, ordering every federal agency to "immediately cease all use of Anthropic technology." In response, Anthropic has publicly stated its intention to challenge the Pentagon’s decision in court.
A Decade of Warnings Culminate in a Crisis
This confrontation arrives at a critical juncture in the global discourse surrounding artificial intelligence, a field where rapid advancement has increasingly outpaced regulatory frameworks. Max Tegmark, a physicist at MIT and founder of the Future of Life Institute (FLI), established in 2014, has been a vocal advocate for cautious AI development. Tegmark was instrumental in organizing the widely publicized open letter calling for a pause in advanced AI experiments, an appeal that garnered over 33,000 signatories, including notable figures like Elon Musk.
Tegmark views the current crisis with Anthropic as a predictable outcome, arguing that the company, like its industry peers, has contributed to its own predicament by resisting external regulation. For years, leading AI firms such as Anthropic, OpenAI, and Google DeepMind have espoused a commitment to self-governance and responsible AI development. Yet Anthropic itself recently weakened a core tenet of its own safety pledge: the promise to withhold increasingly powerful AI systems until it could establish confidence in their safety.
The Erosion of Safety Commitments and the Regulatory Void
Tegmark’s perspective is that the absence of robust, legally binding regulations has left these AI giants vulnerable. "The road to hell is paved with good intentions," Tegmark remarked in an interview conducted shortly after the news broke. "It’s so interesting to think back a decade ago, when people were so excited about how we were going to make artificial intelligence to cure cancer, to grow the prosperity in America and make America strong. And here we are now where the U.S. government is pissed off at this company for not wanting AI to be used for domestic mass surveillance of Americans, and also not wanting to have killer robots that can autonomously – without any human input at all – decide who gets killed."
The perceived contradiction between Anthropic’s self-proclaimed safety-first identity and its past collaborations with defense and intelligence agencies, dating back to at least 2024, has drawn scrutiny. Tegmark offered a critical assessment: "Yes, Anthropic has been very good at marketing themselves as all about safety. But if you actually look at the facts rather than the claims, what you see is that Anthropic, OpenAI, Google DeepMind and xAI have all talked a lot about how they care about safety. None of them has come out supporting binding safety regulation the way we have in other industries."
This pattern of abandoned commitments is concerning. Google, once known for its "Don’t be evil" slogan, has reportedly dropped longer-term commitments to prevent harm with AI, ostensibly to facilitate sales for surveillance and weapons. OpenAI has removed the word "safety" from its mission statement, and xAI recently disbanded its entire safety team. Anthropic’s own recent weakening of its safety pledge represents a significant departure from its foundational principles.
The Perils of Corporate Self-Regulation
The persistent lobbying efforts by major AI companies against regulation, often couched in arguments of self-governance, have resulted in a significant regulatory vacuum in the United States. Tegmark drew a stark analogy: "We right now have less regulation on AI systems in America than on sandwiches. You know, if you want to open a sandwich shop and the health inspector finds 15 rats in the kitchen, he won’t let you sell any sandwiches until you fix it. But if you say, ‘Don’t worry, I’m not going to sell sandwiches, I’m going to sell AI girlfriends for 11-year-olds, and they’ve been linked to suicides in the past, and then I’m going to release something called superintelligence which might overthrow the U.S. government, but I have a good feeling about mine’ – the inspector has to say, ‘Fine, go ahead, just don’t sell sandwiches.’"
This lack of oversight, Tegmark argues, is a shared responsibility among the industry’s leading players. "If they had taken all these promises that they made back in the day for how they were going to be so safe and goody-goody, and gotten together, and then gone to the government and said, ‘Please take our voluntary commitments and turn them into U.S. law that binds even our most sloppy competitors’ – this would have happened. Instead, we’re in a complete regulatory vacuum." He warned that such corporate amnesty echoes historical instances of unchecked industries leading to catastrophic outcomes, citing the thalidomide tragedy, tobacco companies targeting minors, and asbestos-related lung cancer.
The absence of a legal framework explicitly prohibiting the development of AI for harmful purposes, such as targeting American citizens, leaves these companies susceptible to government demands. "There is no law right now against building AI to kill Americans, so the government can just suddenly ask for it," Tegmark stated. "If the companies themselves had earlier come out and said, ‘We want this law,’ they wouldn’t be in this pickle. They really shot themselves in the foot."
Countering the "China Card" Argument
A recurring justification for rapid AI development and a reluctance to impose stringent regulations is the perceived race against China. Proponents of this view argue that if American companies hesitate, Beijing will seize the technological advantage. However, Tegmark challenges this narrative. "Let’s analyze that," he urged. "The most common talking point from the lobbyists for the AI companies – they’re now better funded and more numerous than the lobbyists from the fossil fuel industry, the pharma industry and the military-industrial complex combined – is that whenever anyone proposes any kind of regulation, they say, ‘But China.’"
Tegmark points to China’s own regulatory actions as evidence against this argument. "China is in the process of banning AI girlfriends outright. Not just age limits – they’re looking at banning all anthropomorphic AI. Why? Not because they want to please America but because they feel this is screwing up Chinese youth and making China weak. Obviously, it’s making American youth weak, too."
Furthermore, the notion that developing superintelligence is a means to outcompete China is fundamentally flawed, according to Tegmark. "When we don’t actually know how to control superintelligence, so that the default outcome is that humanity loses control of Earth to alien machines – guess what? The Chinese Communist Party really likes control. Who in their right mind thinks that Xi Jinping is going to tolerate some Chinese AI company building something that overthrows the Chinese government? No way. It’s clearly really bad for the American government too if it gets overthrown in a coup by the first American company to build superintelligence. This is a national security threat."
Superintelligence as a National Security Threat
Tegmark frames the development of uncontrollable superintelligence not as a strategic asset but as a profound national security threat. He posits that if national security officials truly understood the implications of AI leaders like Dario Amodei describing visions of "a country of geniuses in a data center," they would recognize the inherent dangers to U.S. governance. This realization, he believes, is dawning within the national security community.
He draws a parallel to the Cold War, where the race for economic and military dominance against the Soviet Union did not necessitate a race to detonate nuclear weapons. "People realized that was just suicide. No one wins. The same logic applies here." The unchecked pursuit of superintelligence, without robust control mechanisms, carries the risk of existential catastrophe for all nations, regardless of who develops it first.
The Accelerating Pace of AI Advancement
The timeline for achieving advanced AI capabilities has dramatically compressed. Tegmark notes that just six years ago, most AI experts predicted that human-level language and knowledge mastery in AI was decades away. Today, that benchmark has been surpassed. "We’ve seen AI progress quite rapidly from high school level to college level to PhD level to university professor level in some areas," he observed. He cited the International Mathematics Olympiad, where AI secured a gold medal, as an example of AI’s encroaching capabilities in highly complex human tasks.
In a recent paper, Tegmark and co-authors including prominent AI researchers Yoshua Bengio and Dan Hendrycks proposed a rigorous definition of Artificial General Intelligence (AGI). By their analysis, GPT-4 represented 27% of the way to AGI, and GPT-5 reached 57%. This rapid progression, from 27% to 57% in a short period, suggests that the advent of AGI may be nearer than previously anticipated. Tegmark warned his MIT students that even a four-year timeline could mean significant job displacement by the time they graduate, emphasizing the urgency of preparing for these transformative changes.
The Industry’s Response and a Path Forward
The blacklisting of Anthropic presents a critical moment of reckoning for the AI industry. The question remains whether other major AI players will align with Anthropic’s stance or if competitors will seek to capitalize on the void. Hours after the interview, OpenAI announced its own deal with the Pentagon, albeit with stated technical safeguards.
Tegmark expressed admiration for Sam Altman’s initial declaration of solidarity with Anthropic and his adherence to similar "red lines." However, the silence from companies like Google and xAI has been notable. Tegmark suggested that their inaction would be "incredibly embarrassing" and a potential indicator of their true priorities. This situation, he concluded, is a moment of truth, forcing every major AI entity to "show their true colors."
Despite the current turmoil, Tegmark sees a potential for a positive outcome. "Yes, and this is why I’m actually optimistic in a strange way," he stated. "There’s such an obvious alternative here. If we just start treating AI companies like any other companies – drop the corporate amnesty – they would clearly have to do something like a clinical trial before they released something this powerful, and demonstrate to independent experts that they know how to control it. Then we get a golden age with all the good stuff from AI, without the existential angst. That’s not the path we’re on right now. But it could be." This path, he suggests, involves treating AI development with the same rigor and regulatory oversight applied to industries with a tangible impact on public health and safety, ensuring that the benefits of AI are realized without compromising human well-being or global security.
