Carl von Clausewitz vs the Precautionary Principle: Why You Cannot Regulate Away Uncertainty
All views in this newsletter are my own and do not represent the views of the R Street Institute, the US Navy, or any other organization I am affiliated with.
Carl von Clausewitz understood something that today's AI policymakers seem to have forgotten: you cannot regulate away uncertainty. In On War, perhaps the most rewarding and frustrating book a strategist will ever read, he warned against the fatal mistake of preparing for conflict "as you imagine it" rather than adapting to war "as it develops." This is precisely the error driving many of today's attempts at artificial intelligence regulation.
The Prussian strategist emphasized that war's defining characteristic is friction: the ever-present gap between theoretical plans and the chaos of reality. "Everything in war is very simple," he wrote, "but the simplest thing is difficult" (On War, Book 1, Chapter 7). Modern AI policy ignores this insight, suffering from the delusion that complex technological development can be predicted and controlled through precautionary regulation.
Consider the European Union's AI Act, a 400-page behemoth that attempts to categorize and regulate "high-risk AI systems" before most of these systems even exist. The legislation reads like a response to science fiction scenarios rather than current technological capabilities. Brussels is essentially fighting HAL 9000 while current-generation LLMs struggle to reliably solve basic math problems.
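To see the categorization problem concretely, consider what legislating by enumerated category looks like as code. The sketch below is purely illustrative: the tier names are loosely modeled on the Act's risk pyramid, and the category list is hypothetical, not quoted from the Act's annexes. The point it demonstrates is structural: a static mapping fixed at drafting time simply has no answer for a system type its drafters never imagined.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "conformity assessment required"
    LIMITED = "transparency obligations"
    MINIMAL = "no obligations"

# Static mapping fixed at drafting time. Hypothetical categories,
# loosely inspired by (not quoted from) the AI Act's risk tiers.
STATUTORY_CATEGORIES = {
    "biometric_identification": RiskTier.HIGH,
    "hiring_tool": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(system_type: str) -> RiskTier:
    try:
        return STATUTORY_CATEGORIES[system_type]
    except KeyError:
        # The failure mode: anything the drafters did not foresee
        # has no tier and falls outside the framework entirely.
        raise LookupError(f"no statutory category for {system_type!r}")

print(classify("hiring_tool"))  # RiskTier.HIGH

try:
    classify("general_purpose_model")  # unanticipated at drafting time
except LookupError as err:
    print(err)  # no statutory category for 'general_purpose_model'
```

Every system type the drafters never enumerated forces either an amendment cycle measured in years or a regulator improvising beyond the text, which is exactly the rigidity problem described above.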
This represents exactly what Clausewitz called "war on paper" (On War, Book 1, Chapter 7): elaborate plans that collapse on contact with reality. The concept hasn't remained locked away in a 19th-century Prussian treatise on war; it has been restated throughout history by Helmuth von Moltke, Dwight Eisenhower, and, most memorably, by Mike Tyson: "everybody has plans until they get hit for the first time." The AI Act assumes technological development follows predictable pathways that can be regulated in advance. Unfortunately, innovation, like warfare, is fundamentally interactive and adaptive. Developers respond to regulatory constraints by finding new approaches, creating capabilities regulators never anticipated, and rendering yesterday's carefully worded rules obsolete.
The precautionary principle persists because it sounds responsible and reasonable. Who wouldn't want to prevent harm before it occurs? Yet when applied to rapidly evolving technology, it becomes an enormous strategic liability. Clausewitz understood that excessive caution often creates the very vulnerabilities it seeks to prevent. Armies that spend too much energy preparing for imaginary threats lose the flexibility needed to respond to actual ones.
This isn't a problem exclusive to the EU; American policymakers are making similar mistakes. They often discuss AI risks in terms lifted from science fiction rather than grounded in current technological realities. Senators whose understanding of the internet is "a series of tubes" are drafting legislation to prevent artificial general intelligence from enslaving mankind.
Meanwhile, the AI systems actually being deployed (recommendation algorithms, automated hiring tools, facial recognition systems) operate with far more mundane capabilities and limitations, and many have been in use for years. Current AI seems intelligent, but it cannot reason, plan, or adapt the way humans do. It processes patterns in training data and generates responses based on statistical correlations. This is neither artificial general intelligence nor the pathway to digital consciousness that precautionary regulations attempt to address.
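That "statistical correlations" claim can be made concrete with a deliberately tiny sketch. The bigram model below is my illustration, not how any production LLM is built (real systems learn neural representations over vast corpora rather than literal lookup tables), but it shows the core mechanic: fluent-looking output produced purely by sampling from observed co-occurrence statistics, with no reasoning or planning anywhere in the loop.

```python
import random
from collections import Counter, defaultdict

# A toy bigram language model: count which word follows which
# in the training text, then sample from those statistics.
training_text = (
    "war is politics by other means and war is friction "
    "in practice plans meet friction and plans fail"
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    words = [start]
    for _ in range(length):
        counts = follows.get(words[-1])
        if not counts:
            break  # no observed continuation: the model has nothing to say
        nxt_words, weights = zip(*counts.items())
        words.append(random.choices(nxt_words, weights=weights)[0])
    return " ".join(words)

print(generate("war"))  # fluent-looking output, no reasoning involved
```

Scale that mechanic up by many orders of magnitude and you get impressive fluency; you do not automatically get the scheming general intelligence the precautionary rules presuppose.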
The disconnect between regulatory imagination and technological reality creates what Clausewitz would recognize as a classic strategic blunder: concentrating forces against the wrong enemy. While policymakers obsess over speculative AGI scenarios, they pay insufficient attention to how AI systems actually function and fail in practice.
This misallocation of attention has real consequences. Premature regulation based on fictional scenarios can stifle beneficial innovation while failing to address actual problems. When regulators focus on preventing AI from becoming too powerful, they may inadvertently prevent it from becoming useful. The result is policy that soothes science fiction anxieties while creating genuine obstacles that hold back progress.
Clausewitz argued that successful strategy requires understanding your opponent as they are, not as you fear they might become. AI systems are not adversaries plotting against humanity; they are tools used by humans, with specific capabilities and limitations. Effective regulation requires understanding these realities rather than legislating against Hollywood narratives.
The Prussian strategist also emphasized the importance of maintaining strategic flexibility. "The military machine—the army and everything belonging to it—is basically very simple and therefore easy to manage," he wrote. "But we should bear in mind that none of its components is of one piece: each part is composed of individuals" (On War, Book 1, Chapter 7). The same principle applies to AI development: complex systems emerge from countless individual decisions by programmers, researchers, and users. No regulatory framework can predict or control this emergence in advance.
This doesn't mean ignoring AI's potential risks or abandoning oversight entirely. Instead, it suggests adopting Clausewitz's approach: prepare for uncertainty, maintain adaptability, and respond to actual developments rather than imaginary scenarios. Effective AI governance requires regulatory systems that can evolve with the technology rather than attempting to constrain innovation based on speculative fears.
The greatest strategic mistake, Clausewitz warned, is fighting the last war. In AI policy, we're making an even worse error: fighting tomorrow's war informed by yesterday's science fiction. Reality, as always, will prove more complex, surprising, and manageable than our fears suggest.
"What This Means For..."
Policymakers: Wait for actual harms to emerge before creating new regulatory frameworks. Current AI systems don't justify the precautionary regulations being proposed. When problems do arise, address them through existing authorities rather than inventing new bureaucratic structures. Stop proposing hundreds of state-level AI bills per month!
U.S. strategic competition: While the US policy debate centers on AGI scenarios and policymakers implement precautionary restrictions, global competitors can gain advantages in practical AI deployment and integration. Strategic flexibility matters more than regulatory perfection.
Tech companies: Prepare for adaptive regulation rather than compliance with static rules that may not match technological reality. Inform lawmakers about the technological realities of your products. Understand that innovation will continue to outpace regulatory frameworks designed for imaginary capabilities.
Aspiring strategic thinkers: Clausewitz's principles about friction, uncertainty, and adaptive strategy apply beyond military affairs to any complex, evolving system. The gap between theoretical plans and chaotic reality exists in technology policy just as much as in warfare. That dusty book students dread reading is as applicable as ever in the modern era.