A newly released podcast interview with Emil Michael, a senior technology official at the Department of Defense, is casting a spotlight on both his lingering grievances from his time at Uber and his current role in the government’s contentious relationship with AI firm Anthropic. The interview, recorded last month by Joubin Mirzadegan, a partner at venture capital firm Kleiner Perkins, offers a rare and candid glimpse into Michael’s perspectives on past professional upheavals and the complex challenges of integrating advanced artificial intelligence within national security frameworks.
Michael, who transitioned from the high-octane world of Silicon Valley to public service, appears to be navigating two significant arenas where his strategic thinking and past experiences are coming to the fore. While the podcast touches on a broad spectrum of topics, including his personal history, his remarks about his departure from Uber and his strong feelings about the trajectory of autonomous vehicle technology have particularly resonated, alongside his detailed articulation of the Department of Defense’s dispute with Anthropic.
Lingering Bitterness: The Uber Ouster and the Ghost of Autonomous Driving
The most striking revelations from the interview concern Michael’s exit from Uber. When directly questioned about whether he was dismissed alongside former CEO Travis Kalanick, Michael responded with a succinct, "Effectively." The understated admission sits atop a deep-seated resentment that he went on to express openly. He described his resignation, which came just eight days before Kalanick’s departure in 2017, as a consequence of a workplace investigation. That inquiry, led by former U.S. Attorney General Eric Holder, was initiated in response to allegations of sexual harassment and gender discrimination within the company. While Michael was not personally named in the allegations, the investigation’s findings recommended his removal. Kalanick subsequently resigned under pressure from prominent investors, including Benchmark, in what was widely described as a shareholder revolt.
Michael’s declaration, "I’ll never forget that, nor forgive," leaves no room for ambiguity regarding his feelings about the circumstances of his departure. This sentiment appears to be shared by Kalanick, with both men reportedly believing that the investors who forced them out ultimately stifled Uber’s potential in the crucial field of autonomous driving. They viewed autonomous technology not merely as a future product, but as the very core of Uber’s long-term viability and a pathway to becoming a trillion-dollar enterprise.
During the podcast, Michael articulated this perspective, arguing that the investor-driven decision was motivated by a desire to secure short-term financial gains rather than fostering sustained, long-term growth and innovation. "They wanted to preserve their embedded gains, rather than try to make this a trillion dollar company," he stated. This viewpoint echoes Kalanick’s own public sentiments. At the Abundance Summit in Los Angeles the previous year, Kalanick highlighted the significant progress Uber had made in autonomous driving, asserting that its program was second only to Waymo at the time of its discontinuation. He expressed regret, noting, "You could say, ‘Wish we had an autonomous ride-sharing product right now. That would be great.’"
The eventual sale of Uber’s self-driving unit, Uber ATG, to Aurora in 2020, three years after Michael and Kalanick’s departures, was widely characterized as a "fire sale." At the time, the decision seemed commercially rational. The autonomous driving sector was a significant drain on capital, and the technology itself appeared distant from widespread practical application. However, with Waymo now operating robotaxi services in ten U.S. cities and actively expanding its reach, the question of whether Uber missed a pivotal opportunity due to internal dynamics continues to resonate.
A New Frontier: The DoD’s AI Battleground with Anthropic
Beyond the echoes of his Uber past, Michael’s current role at the Department of Defense (DoD) has placed him at the center of a high-stakes dispute with Anthropic, a leading artificial intelligence company. The interview, recorded shortly before the DoD’s negotiations with Anthropic publicly unraveled, provides an in-depth look at Michael’s strategic thinking on this critical national security issue.
Michael described Anthropic as one of a select group of approved vendors for large language models (LLMs) within the DoD, a designation partially secured through its collaborations with Palantir. He emphasized the stringent regulatory environment within which the DoD operates, characterized by an overwhelming density of laws, regulations, and internal policies that he metaphorically described as nearly "choking" the department. It is within this complex ecosystem that Anthropic, according to Michael, sought to introduce its own set of operational guidelines.
"What I can’t do is have any one company impose their own policy preferences on top of the laws and on top of my internal policies," Michael asserted. He employed an analogy to illustrate his point: "If you buy the Microsoft Office Suite, they don’t tell you what you could write in a Word document, or what email you can send." This analogy highlights his core concern: that a private entity’s self-imposed restrictions should not supersede governmental directives and legal frameworks.
Michael further elaborated on the potential national security risks, referencing a recent finding published by Anthropic itself concerning "distillation attacks." He explained that Chinese technology companies have been repeatedly targeting Anthropic’s models using this technique, effectively reverse-engineering them to replicate their capabilities. Michael argued that, due to China’s civil-military fusion laws, this would grant the People’s Liberation Army access to a functional equivalent of Anthropic’s fully unrestricted model. Meanwhile, the DoD would be compelled to operate with a restricted version, bound by Anthropic’s own ethical guidelines. "I’d be one-armed, tied behind my back against an Anthropic model that’s fully capable – by an adversary," Michael stated, describing the situation as "totally Orwellian."
He posed a rhetorical question to American technology firms, particularly those he considers "champions" and vital to the nation’s technological advancement: "If you’re an American champion – and I believe they are, they’re one of the most important companies in the country – don’t you want to help your Department of War succeed with the best tools available?" This underscores his belief that national security imperatives should align with the capabilities offered by leading domestic AI developers, rather than being constrained by their internal policies.
Escalation to Litigation: The Legal Battle Unfolds
The dispute between the DoD and Anthropic has since moved from the negotiation table to the courtroom, a significant escalation of the conflict. In late February, Defense Secretary Pete Hegseth officially designated Anthropic as a "supply-chain risk." That designation was elaborated last week in a 40-page brief filed in the U.S. District Court for the Northern District of California. The government’s filing argued that granting Anthropic access to the DoD’s war-fighting infrastructure would introduce "unacceptable risk" into its supply chains. A key concern articulated in the brief is the theoretical possibility that Anthropic could disable or alter its technology to serve its own interests rather than those of the nation, particularly during times of conflict.
Anthropic has vigorously contested these claims, filing sworn declarations alongside its own brief on Friday. The company argues that the government’s case is predicated on technical misunderstandings and assertions that were never raised during months of prior negotiations. Thiyagu Ramasamy, Anthropic’s head of public sector, submitted a declaration directly challenging the government’s assertion that Anthropic could interfere with military operations by disabling or altering its technology’s behavior, stating that such an action is technically infeasible.
A crucial hearing is scheduled for Tuesday in San Francisco, where the court will hear arguments from both sides. This legal battle represents a pivotal moment in the government’s efforts to procure and deploy advanced AI technologies while simultaneously mitigating potential security vulnerabilities and ensuring alignment with national interests. The outcome is likely to have significant implications for the broader landscape of AI procurement within the U.S. defense sector and set precedents for future collaborations between government agencies and private AI developers.
Broader Implications: The Future of AI in Defense and Industry
The dual narratives emerging from Emil Michael’s recent interview highlight a critical juncture for both the technology industry and national security. His reflections on the autonomous driving fallout at Uber underscore a recurring tension between aggressive innovation and financial prudence, a dilemma that continues to shape the development and deployment of cutting-edge technologies. The persistent belief held by Michael and Kalanick that Uber prematurely abandoned a potentially transformative technology due to short-sighted investor interests serves as a cautionary tale for ambitious ventures.
Simultaneously, Kalanick’s continued engagement in the robotics sector, including the launch of his new company, Atoms, and his significant investment in and impending acquisition of the autonomous vehicle startup Pronto, demonstrates an unwavering commitment to the future of autonomous systems. This suggests that the drive for innovation in this space, though perhaps redirected, remains a potent force among key industry figures.
Michael’s current role at the DoD positions him at the forefront of another complex technological challenge: integrating AI into critical national defense operations. The standoff with Anthropic is not merely a contractual dispute; it represents a fundamental disagreement about control, risk, and the appropriate boundaries for AI development and deployment within a national security context. The DoD’s concerns about supply-chain risks and the potential for adversarial exploitation of AI technologies are legitimate and growing. As LLMs and other advanced AI capabilities become increasingly sophisticated, governments worldwide are grappling with how to harness their power while safeguarding against potential misuse and ensuring technological sovereignty.
The legal proceedings between the DoD and Anthropic will undoubtedly be closely watched by industry stakeholders, policymakers, and international observers. The case will likely illuminate the intricate balance required to foster innovation in AI while upholding national security imperatives. The resolution could influence the development of regulatory frameworks, procurement policies, and the very nature of public-private partnerships in the critical field of artificial intelligence, particularly as it pertains to defense and national security applications. The stakes are high, not only for the involved parties but for the broader trajectory of AI’s integration into the fabric of modern governance and warfare.
