The Strategic Case for AI Winter: What Clausewitz and Moravec Tell Us About Preparing for Another Change of Season
All views in this newsletter are my own and do not represent the views of The R Street Institute, the US Navy, or any other organization I am affiliated with.
As AI valuations soar and investment reaches fever pitch, concerns about an inevitable bubble burst grow louder. Market corrections feel destructive when they happen, but history suggests that technology bubbles often serve a crucial strategic purpose. They create an environment that separates hype from sustainable innovation.
In the aftermath of the Manhattan Project's success, America was intoxicated by the possibility of an atomic revolution. Speculation about what peaceful nuclear research could mean for the world was abundant. During this frenzy, Caltech physicist R.M. Langer promoted uranium-235-powered cars in Popular Mechanics while other visionaries promised atomic energy "too cheap to meter" and radioisotopes that would revolutionize agriculture. The federal government caught nuclear fever: the Atomic Energy Commission funded scores of civilian nuclear efforts, from Project Plowshare's atomic earthmoving schemes to the nuclear-powered aircraft program. Then reality intervened, and the gap between nuclear theory and nuclear practice became painfully clear. Focus shifted from the fantastic promise of the atom to sober strategic doctrine built on deterrence theory, arms control frameworks, and realistic assessments of nuclear capabilities. Over the long term, the bursting of the nuclear bubble served America far better than the initial euphoria ever could have.
Today's AI environment mirrors mid-century atomic fever in many ways, complete with promises and prognostications that outrun practical reality. Unlike nuclear technology, AI faces a fundamental paradox that guarantees friction between promise and performance.
Moravec's Paradox Meets Clausewitzian Friction
Roboticist Hans Moravec identified a crucial insight: "It is comparatively easy to make computers exhibit adult-level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility." AI excels at what humans find cognitively difficult: chess, mathematical proofs, pattern recognition in vast datasets. It struggles with what humans consider trivial: physical manipulation, common-sense reasoning, contextual judgment.
Strategic thinkers will surely draw parallels between Moravec's paradox and the friction Carl von Clausewitz warned about in On War: "Everything in war is very simple, but the simplest thing is difficult." AI demonstrations showcase impressive but narrow capabilities while hiding the mundane integration challenges that determine real-world utility. The gap between laboratory performance and operational deployment echoes Moltke's famous corollary to Clausewitzian friction: no plan survives first contact with the enemy.
The Strategic Cost of Bubble Thinking
Current AI investment patterns reveal the consequences of this friction. Venture capital flows toward those capable of impressive narrow demonstrations: large language models generating human-like text, computer vision systems beating radiologists at image recognition, game-playing algorithms achieving superhuman performance. Meanwhile, the unglamorous work of integrating these capabilities into reliable operational systems receives far less attention and funding.
The result is the potential for strategic misallocation. We could be building AI systems that excel at cognitive tasks humans find difficult while ignoring the "simple" problems that present significant challenges to practical deployment. For instance, AI medical diagnosis systems achieve remarkable accuracy on diagnostic quizzes but struggle with basic clinical reasoning. A 2025 NIH study found that while AI scored well on medical diagnostic tests, it "often made mistakes when describing the medical image and explaining its reasoning behind the diagnosis — even in cases where it made the correct final choice."
This mirrors classic American technological overconfidence. Time and again we have collectively been fooled into believing that superior performance on measurable metrics translates automatically to strategic advantage. We're optimizing for the wrong variables while our competitors focus on practical integration and deployment.
Why AI Winter Would Restore Strategic Discipline
An AI bubble collapse would force a recalibration with reality that American strategy could turn to its advantage. When venture funding dries up and investor expectations reset, resources naturally redirect from flashy demonstrations toward solving actual deployment challenges.
Historical precedent supports this optimism. The "AI Winter" of the 1980s and 1990s, when AI hype collapsed amid failed promises, ultimately strengthened the field. Researchers abandoned grand theories in favor of practical algorithms. The machine learning revolution of the 2000s emerged from this disciplined foundation, not from the expert systems that dominated AI's previous bubble.
A new AI winter would accelerate three strategic benefits America needs: resource consolidation toward strategic applications rather than consumer novelties, realistic capability assessment for policy planning, and time to develop thoughtful AI doctrine rather than reactive regulation.
While competitors fixate on the short-term disruption, America has an opportunity to exercise strategic patience through a post-bubble winter and convert it into long-term positioning advantage. China's current AI investments heavily favor surveillance and control applications with clear integration pathways. A winter that forces similar strategic discipline in American AI development would strengthen rather than weaken our competitive position.
Embracing Friction as Strategic Advantage
Clausewitz's insight applies directly. Understanding friction creates strategic advantage over those who ignore it. Moravec's paradox reveals where AI friction will most likely emerge. Rather than fighting this reality, strategic thinking suggests embracing it.
An AI winter would force American technology development toward our comparative advantages: complex system integration, operational reliability, human-machine collaboration. These capabilities matter more for sustained strategic advantage than impressive demonstration metrics.
What This Means For...
Policymakers: Stop regulating science fiction scenarios and start planning for long-term competition. Heed the lessons of Moravec's paradox: AI policy should focus on integration challenges and operational friction, not theoretical capabilities that may never materialize at scale.
U.S. strategic competition: An AI winter would benefit those who employ disciplined strategic thinking. While others focus on demonstration metrics, the U.S. could lean in on practical deployment capabilities that create sustainable competitive advantages.
Tech companies: Moravec's paradox reveals where sustainable business models exist. Companies solving integration challenges and operational friction will survive the winter better than those optimizing for impressive but impractical capabilities.
Aspiring strategic thinkers: Once again, a long-dead Prussian strategist's principle proves as relevant for technology strategy as for military strategy. Understanding the gap between theoretical performance and practical reality provides an analytical advantage over those seduced by impressive demonstrations.