The intersection of artificial intelligence and national security has entered a new phase of divergence as specialized startups move to fill the void left by established AI labs. While industry giants like Anthropic express growing reservations regarding the unfettered use of their models by the United States military, a new generation of defense-focused firms is emerging with the explicit goal of building AI for the battlefield. At the forefront of this shift is Smack Technologies, a startup that recently announced a $32 million funding round aimed at developing advanced models specifically designed for military planning and execution. This development highlights a widening schism in Silicon Valley between "frontier" AI labs focused on safety and general-purpose utility and specialized firms that view military integration as both a strategic necessity and a primary business objective.
The funding for Smack Technologies arrives at a critical juncture for the Department of Defense (DoD). As the Pentagon seeks to integrate generative AI into its "kill chain"—the sequence of steps required to identify, track, and engage a target—it has encountered resistance from companies like Anthropic. These disagreements culminated in the breakdown of a proposed $200 million contract, leading to Defense Secretary Pete Hegseth labeling Anthropic a "supply chain risk." In contrast, Smack Technologies, led by veterans of the U.S. Marine Forces Special Operations Command (MARSOC), is positioning itself as a more compliant and mission-aligned partner for the U.S. government.
The Strategic Pivot: Specialized vs. General-Purpose AI
The primary tension in the current military AI landscape stems from the inherent limitations of large language models (LLMs) like Claude or GPT-4. While these systems excel at synthesizing vast amounts of text and generating human-like reports, they were not designed for the rigors of tactical environments. Smack Technologies CEO Andy Markoff, a former commander who served in Iraq and Afghanistan, argues that general-purpose models lack a fundamental understanding of the physical world and the nuances of military doctrine.
"I can tell you they are absolutely not capable of target identification," Markoff stated, referring to current frontier models. He notes that while Claude is effective for administrative tasks and report summarization, it cannot reliably control physical hardware or navigate the complexities of a kinetic environment. Smack Technologies aims to surpass these capabilities by training models on specialized datasets that general-purpose labs do not have access to or interest in utilizing.
The startup’s leadership reflects a blend of elite military experience and high-level Silicon Valley engineering. Markoff cofounded the company with Clint Alanis, another former Marine, and Dan Gould, a computer scientist who previously served as the Vice President of Technology at Tinder. This combination of "boots on the ground" expertise and consumer-scale software engineering is intended to bridge the gap between abstract AI development and the practical realities of mission planning.
Chronology of the Military-AI Schism
The rise of companies like Smack Technologies can be traced back to several key events over the past decade that have defined the relationship between the tech sector and the Pentagon:
- 2018: Project Maven and the Google Employee Revolt: The initial flashpoint occurred when Google employees protested the company’s involvement in Project Maven, a DoD initiative to use AI for analyzing drone footage. The backlash led Google to withdraw from the project and establish a set of AI Principles that restricted the development of AI for weapons.
- 2023: The Rise of Generative AI in the Navy: As LLMs gained prominence, the U.S. Navy began testing autonomous systems in the Persian Gulf. These systems were primarily used for identifying drones operated by Iranian-backed insurgents, signaling a shift toward more integrated AI usage in active theaters.
- Early 2024: The Anthropic Contract Breakdown: Tensions peaked when Anthropic sought to include specific limitations on how its models could be used in autonomous weapons systems. The DoD, prioritizing flexibility and "decision dominance," viewed these restrictions as a liability.
- Late 2024: The Designation of Supply Chain Risks: Following the collapse of the Anthropic deal, the U.S. government signaled a preference for "defense-first" AI companies, leading to a surge in venture capital for startups like Smack Technologies and Anduril.
Technical Methodology: The AlphaGo Approach to Warfare
Smack Technologies is not merely building a chatbot for soldiers; it is developing a planning engine. The company utilizes a training methodology similar to the one Google’s DeepMind used to train AlphaGo, the program that defeated the world champion in the game of Go in 2017. This involves a process of reinforcement learning through trial and error.
In Smack’s case, the "game" is a series of complex war game scenarios. The model is run through thousands of simulated missions, where it must choose strategies, allocate resources, and react to enemy movements. Expert analysts—often veterans with decades of tactical experience—provide a "reward signal," informing the model whether its chosen strategy would likely result in success or failure in a real-world context.
This approach requires significant capital. While Smack does not have the multi-billion-dollar budgets of OpenAI or Google, Markoff confirmed that the startup is spending millions of dollars on compute power to train its initial models. The goal is to automate the "drudgery" of mission planning, which Markoff says is still largely performed using antiquated methods such as whiteboards, notepads, and manual map overlays.
The Ethical Framework: Accountability in Uniform
One of the most significant points of contention between traditional AI labs and defense startups is the question of who is responsible for the ethical use of technology. Anthropic and its peers often attempt to build "guardrails" into the software itself to prevent misuse. Markoff, however, argues that ethics in warfare cannot be coded into an algorithm; they must reside with the human operators.
"When you serve in the military, you take an oath you’re going to serve honorably, lawfully, in accordance with the rules of war," Markoff said. "To me, the people who deploy the technology and make sure it is used ethically need to be in a uniform."
This philosophy suggests a shift in accountability. Rather than the developer restricting the tool, the military institution is expected to govern the soldier’s use of the tool. This stance is welcomed by the DoD, which has long argued that overly restrictive software could put American lives at risk by slowing down decision-making in high-speed combat scenarios.
Official Responses and Counterarguments
The move toward autonomous mission planning has drawn sharp criticism from non-governmental organizations and academic researchers. Anna Hehir, head of military AI governance at the Future of Life Institute, warns that the current trajectory is fraught with danger. Her organization, which opposes the development of AI-controlled autonomous weapons, argues that AI is fundamentally too unpredictable for high-stakes lethal environments.
"AI is too unreliable, unpredictable, and unexplainable to be used in such high-stakes scenarios," Hehir stated. She noted that current systems struggle to distinguish between combatants and non-combatants, or to recognize the act of surrender—nuances that are critical to adhering to international law.
Legal experts also point out that the definition of "autonomy" is already shifting. Rebecca Crootof, a professor at the University of Richmond School of Law and an authority on the legal issues surrounding autonomous weapons, notes that the U.S. and 30 other nations are already deploying systems with varying degrees of autonomy. This includes missile defense systems like the Aegis Combat System, which can identify and engage threats at speeds that exceed human reaction times.
Broader Implications and Decision Dominance
The ultimate goal for the Pentagon in adopting specialized AI is "decision dominance." In a potential conflict with a "near-peer" adversary such as Russia or China, the speed at which a military can process information and issue orders could be the deciding factor. Automated decision-making tools could allow commanders to cycle through the OODA loop (Observe, Orient, Decide, Act) faster than an opponent relying on traditional human-centric planning.
However, the risks of escalation remain a primary concern for the global community. A recent study conducted by researchers at King’s College London utilized LLMs in simulated war games and found an alarming tendency for the models to escalate conflicts, in some cases even resorting to nuclear strikes. The study suggested that AI models might interpret aggressive postures as the most "efficient" way to end a conflict, failing to account for the catastrophic human cost or the unpredictability of human emotion.
Markoff acknowledges these risks but maintains that the chaos of war necessitates better tools, not fewer. "I have never executed an operation in the real world that even went 50 percent according to plan, and that’s not going to change," he remarked. The objective of Smack Technologies is not to eliminate the fog of war, but to provide a digital framework that allows commanders to navigate it more effectively than their adversaries.
As Smack Technologies begins to deploy its $32 million in capital, the results will likely serve as a litmus test for the future of the American defense industry. If specialized, mission-aligned AI proves more effective and reliable than general-purpose models, it could trigger a permanent migration of federal funding away from Silicon Valley’s traditional tech giants and toward a new breed of "defense-first" software companies. The battle lines are no longer just being drawn in the sand; they are being coded in the cloud.
