Why Arms Control Approaches to AI Guarantee the Strategic Surprise We're Trying to Prevent

All views in this newsletter are my own and do not represent the views of The R Street Institute, the US Navy, or any other organization I am affiliated with.

In 1922, the world's great powers did something unprecedented: they agreed to stop building the weapons they saw as critical to naval warfare. The Washington Naval Treaty sought to end an arms race by limiting battleship construction by the United States, Britain, Japan, France, and Italy. It was hailed as a triumph of high-minded rationality over base militarism.

For fourteen years, it seemed to have worked. Then, in 1936, Japan withdrew and began secretly building the Yamato and Musashi, battleships so massive they made treaty-compliant vessels look like patrol boats. By the time American and British intelligence discovered what Japan had built, it was too late. It's exceedingly difficult to rapidly reconstitute naval shipbuilding capability that has lain dormant for over a decade, let alone catch up to a competitor with a head start.

Today, policymakers are poised to repeat the same error with artificial intelligence.

Arms Control Comes to AI

The EU AI Act prohibits certain AI applications outright, such as real-time biometric identification in public spaces, social scoring systems, and AI that manipulates human behavior. These bans apply regardless of any potentially beneficial future use cases. In the United States, California's recently vetoed SB 1047 would have required AI developers to implement kill switches and face liability for model misuse. Internationally, discussions of banning autonomous weapons explicitly invoke arms control precedents.

The pattern is clear: policymakers see dangerous AI capabilities and reach for the policy tool that worked for physical weapons. In essence, they're proposing arms control for digital technologies.

Why Digital Technology Differs from Battleships

The Washington Naval Treaty worked initially because battleships had certain verifiable characteristics. While Japan managed to hide the Yamato-class's specifications during construction, it could not hide the industrial infrastructure the ships required: massive shipyards, steel production, specialized facilities. Intelligence services could at least know that something was being built, even if exact capabilities remained unclear until too late.

AI development is fundamentally different and far less observable. There's no equivalent industrial infrastructure to monitor. A capable research team with access to compute can develop advanced AI systems without the massive, visible industrial footprint that even secret battleship construction required. This was exemplified by the development of DeepSeek R1, which has been widely characterized as a "Sputnik moment."

More importantly, AI capabilities aren't countable in any meaningful sense. Even when Japan hid Yamato's exact tonnage, "battleship tonnage" remained a coherent metric that could theoretically be verified. What's the equivalent for AI? Model parameters? Compute used for training? These metrics are gameable, unverifiable, and often irrelevant to actual capability.

Most critically, AI research is inherently dual-use. The same research enabling medical diagnosis also enables autonomous weapons targeting. The same natural language processing powering customer service could power social manipulation systems. You can't ban dangerous applications without constraining beneficial ones. More precisely, you can try to ban them, but your competitors won't make the same choice.

The Asymmetry Problem

The Washington Naval Treaty failed because it required symmetric participation. When Japan withdrew and built the Yamato-class battleships, the treaty's other signatories discovered too late that while unilateral restraint may demonstrate moral leadership, it also demonstrates strategic naivety.

Current AI governance proposals replicate this error with worse consequences. Democratic nations constrain their AI development through regulation while authoritarian competitors face no equivalent limits. The EU bans certain AI applications; China deploys them throughout Xinjiang. California considers AI safety requirements; Beijing integrates AI into state surveillance without constraint.

This asymmetry compounds over time. Every capability we ban domestically, while competitors develop it without constraint, widens a gap that becomes progressively harder to close. Unlike with physical weapons, you can't simply build more AI systems to close capability gaps: expertise takes years to develop. If we allow that knowledge base to atrophy through regulatory constraint while competitors advance, reconstituting capability becomes extraordinarily difficult.

To be clear: this isn't an argument for developing AI surveillance capabilities or autonomous weapons without constraints. It's an argument that unilateral bans create strategic vulnerabilities without improving global safety. If China develops facial recognition AI while we prohibit it, we haven't prevented a surveillance dystopia; we've just ensured that only authoritarian regimes possess those capabilities. It's far better to develop these technologies under democratic oversight with accountability mechanisms than to cede the field entirely.

Verification: The Impossible Problem

Arms control requires verification. The Washington Naval Treaty included inspection provisions. Nuclear arms control developed elaborate verification regimes: satellite imagery, seismic monitoring, on-site inspections.

How do you verify AI development compliance? You can't. Many labs are private. Many algorithms are black boxes. Research is dual-use. Code is easily hidden. There's no AI equivalent to counting missile silos.

What prevents a nation-state from maintaining parallel research programs: one compliant and visible, one advanced and classified? The moment verification becomes impossible, arms control becomes unilateral restraint. We're not securing a mutual agreement on limitation; we're limiting ourselves while hoping our competitors do the same.

That's not strategy. That's wishful thinking backed by historical ignorance.

The Pattern We're Missing

The Washington Naval Treaty. The London Naval Treaty. The Kellogg-Briand Pact's attempt to outlaw war itself. Each represented sophisticated rationalist thinking that collapsed when confronted with reality. Nations facing no constraints exploited the limitations accepted by those who did.

We're now applying this failed framework to AI, a domain where the conditions for successful arms control are even weaker than they were for naval vessels. The gap between what democracies permit themselves to develop and what authoritarian competitors are actually developing grows daily, and we won't know its full extent until strategic surprise arrives.

Advocates for AI bans argue that some capabilities are simply too dangerous to exist. Perhaps. But "too dangerous for democracies to develop while authoritarian regimes develop them anyway" is not a policy; it's a recipe for defeat dressed up as moral leadership.

What This Means For...

Policymakers: Arms control frameworks assume symmetric constraints and verification mechanisms. AI possesses neither. Banning specific AI applications in democracies while authoritarian competitors face no limits is strategic self-handicapping dressed as responsible governance. Evidence-based risk management beats aspirational prohibition.

U.S. strategic competition: Every AI capability we ban through domestic regulation is a capability China develops without constraint. The Washington Naval Treaty should remind us that unilateral restraint in competitive domains doesn't demonstrate moral leadership; it demonstrates strategic naivety. We're building treaty-compliant battleships while competitors build Yamatos.

Tech companies: Autonomous systems development in the US faces growing regulatory pressure to self-limit certain applications. Meanwhile, Chinese competitors operate under no equivalent constraints. The competitive disadvantage compounds over time.

Aspiring strategic thinkers: Arms control works for observable, countable weapons systems when all major powers participate and face equivalent constraints. It fails catastrophically when applied to invisible, dual-use capabilities where only some parties self-restrict. The Washington Naval Treaty's failure wasn't anomalous, it was the predictable outcome of assuming adversaries would accept the same constraints we imposed on ourselves.