AI Grand Strategy Option 1: Preserve Democratic Technological Autonomy

All views in this newsletter are my own and do not represent the views of the R Street Institute, the U.S. Navy, or any other organization I am affiliated with.


On Monday I promised to examine eight grand strategy alternatives for AI competition. That's a huge lift: at least nine weeks of newsletters analyzing historical precedents, evaluating frameworks, and trying to figure out what actually makes sense for long-term technological competition between democracies and authoritarian regimes.

I figured I'd get started by doubling up this week. We're starting with the safest option: a defensive strategy focused on preserving what we have rather than aggressively expanding what we want.

In December 1823, President James Monroe delivered his seventh annual message to Congress. Most of it was forgettable diplomatic updates, the kind of routine reporting that filled nineteenth-century presidential addresses. Then, buried in the middle, came a sentence that would define American foreign policy for a century: "The American continents...are henceforth not to be considered as subjects for future colonization by any European powers."

Monroe was doing something strategically subtle: declaring that the Western Hemisphere would preserve the freedom to develop different political systems, free from European interference. He was preventing the European model of governance from becoming the only model.

The distinction matters. Monroe could have framed American strategy as defeating European power or imposing republican government on monarchies. Instead, he focused on preservation, maintaining space for different systems to coexist and compete. This defensive posture lasted because it aligned with what the United States could actually sustain. We couldn't defeat European powers militarily in 1823, but we could make expansion into our hemisphere costly enough that they pursued their interests elsewhere.

The first grand strategy option for AI competition follows this pattern. The formal version: "Preserve Democratic Technological Autonomy." Or just containment for the digital age. The political objective is ensuring democracies retain sovereign capability to develop, deploy, and govern AI systems according to our own values, while preventing authoritarian AI governance models from becoming the global default.

This is the obvious starting place for exploring AI grand strategy because the approach descends directly from the last American grand strategy that actually worked. The question is whether defensive preservation makes sense for AI competition, or whether defense guarantees we're always reacting to Chinese initiatives rather than shaping the competition ourselves.


Monroe's doctrine shaped European behavior in the Americas for decades. It didn't prevent all European involvement; it meant European powers could no longer simply extend their colonial systems wholesale. They had to work around American resistance, which made expansion expensive and complicated. Meanwhile, American advantages in the hemisphere compounded: proximity, knowledge of terrain, growing economic ties, expanding population.

The strategic insight was recognizing you don't need to defeat an adversary to preserve freedom of action. You need to make their expansion costly while your own position strengthens. Monroe understood something that current AI policy seems to have forgotten: defensive strategies work when they create conditions where your advantages compound over time, not when they try to freeze competition in place.

Consider what Monroe was doing. He was ensuring that the European model wouldn't expand unchallenged into spaces where alternatives could develop. The U.S. Navy had 42 ships in 1823; the British Royal Navy had over 600. Monroe wasn't trying to match European military power ship-for-ship or attempting to prevent European powers from being European powers.

The parallel to AI competition is direct. We can't prevent China from developing AI capabilities. That prospect is so infeasible that we probably shouldn't even try. What we can do is ensure that democratic societies aren't forced to adopt authoritarian governance models for AI deployment. We can make our approaches attractive enough that countries choose democratic frameworks when they have real alternatives, not just Chinese systems or technological isolation.

This matters because the real threat is lock-in to authoritarian governance models as the default framework for deploying AI globally. If the only available systems come with surveillance-by-default, centralized control, and limited individual rights, democracies face expensive divergence or forced adoption of frameworks that contradict our values.

Preservation means preventing that scenario by maintaining viable democratic alternatives that countries actually want to adopt. The goal isn't destroying Chinese AI; it's keeping options open.


So what does this mean practically? Translating "preserve democratic technological autonomy" from abstract principle to actual policy requires defining what autonomy means and what preserves it.

Autonomy means something specific: democracies collectively retain capability to develop, deploy, and govern AI systems according to our values, without dependencies that give authoritarian regimes veto power over our choices. We can collaborate with each other, compete with China, and participate in international frameworks, but we maintain the sovereign capability to make independent choices when our values require it. Autarky would mean complete self-sufficiency where every democracy develops every AI capability domestically. That's neither achievable nor desirable. Isolation would mean cutting off research collaboration and talent flows, abandoning advantages democracies actually have. Comprehensive denial would mean preventing China from accessing any advanced AI capabilities, which is probably impossible anyway.

We need to ensure that critical AI chokepoints remain under democratic control or at minimum don't fall under authoritarian monopoly: foundational model architectures, key talent pipelines, governance frameworks, semiconductor supply chains.

Take semiconductor supply chains. The CHIPS Act is an attempt to resurrect domestic manufacturing capacity for advanced chips. That's one approach to autonomy by ensuring we can produce critical components domestically if needed. But pure autarky would mean building entire supply chains in every democratic country, which is economically insane and strategically unnecessary. Better approach: coordinate across democracies so collectively we maintain multiple pathways to advanced semiconductors, with no single point of failure controlled by authoritarian regimes.

Or consider AI talent pipelines. The United States attracts global talent through universities and immigration policy. That's an autonomy advantage because we get access to the world's best researchers regardless of where they were born. Preserving this advantage means maintaining the conditions that make democratic countries attractive destinations for talent: academic freedom, research funding, immigration pathways, and governance systems that don't require researchers to enable surveillance states.

Standards and governance frameworks matter differently. Here the threat is lock-in to authoritarian models through international standards bodies. If ITU standards or other frameworks assume centralized control, surveillance-by-default, and limited individual rights, democracies again face that choice between expensive divergence and adopting frameworks that contradict their values. Preserving autonomy means ensuring democratic governance models remain viable within international standards, preventing authoritarian monopoly.

The common thread: autonomy requires capability and options. We need the ability to develop critical AI capabilities through democratic pathways, and we need enough redundancy that no single point of failure gives authoritarian regimes veto power over democratic choices.


Here's where the strategy faces its hardest test: coordination mechanisms. Containment had NATO, the Marshall Plan, clear institutional frameworks for coordinating democratic responses. What are the equivalents for AI?

The EU-U.S. Trade and Technology Council exists but lacks both authority and resources. We have no institutional equivalent of NATO for coordinating democratic approaches to AI development and governance. Export controls aren't synchronized. U.S. restrictions don't prevent access through European or Asian democracies, which means we're just pushing Chinese procurement through other channels. Research collaboration happens but without strategic coordination toward preserving collective autonomy.

This is the implementation gap. The strategy makes sense conceptually, but executing it requires institutional mechanisms we haven't built yet. NATO didn't exist in 1946 either. Truman and Marshall had to create it. The question is whether we have equivalent political will to build coordination mechanisms for AI competition.

What would such coordination actually look like? Some combination of governance approaches that work across different democratic systems, research collaboration frameworks that share advances while maintaining security, synchronized standards participation to ensure democratic voice in international bodies, and export control coordination so unilateral measures don't just shift Chinese procurement to other democracies.

None of this exists today. Building it would require sustained political investment across multiple administrations and genuine compromise between democracies with different regulatory approaches, economic priorities, and threat perceptions. That's hard. But so was building NATO. The question is whether we think preserving democratic autonomy in AI matters enough to invest in coordination mechanisms.

If the answer is no, if we're not willing to build institutional frameworks for democratic coordination, then this entire strategy is theoretical. Preservation without coordination is just each democracy pursuing independent policies that China can exploit through divide-and-conquer approaches.


Applying the five criteria we derived from containment's success, how does "Preserve Democratic Technological Autonomy" hold up?

On clear political objectives, this strategy succeeds. The objective is precise: ensure democracies retain sovereign capability to develop and govern AI according to their values, preventing authoritarian models from becoming the only viable option. You can explain it in one sentence. Success is measurable. Do democracies maintain autonomous capability to make independent choices about AI development and governance? That's vastly clearer than "win the AI race" while being more actionable than vague goals about promoting democratic values.

Institutional alignment is strong. Defensive strategies suit democracies better than aggressive expansion. This approach leverages what democracies do well: alliance coordination through voluntary cooperation, making our approaches attractive rather than coerced, patient resource allocation over decades, pluralistic governance allowing variation within democratic frameworks. The strategy doesn't require matching China's centralized direction or state-led mobilization, institutional characteristics democracies struggle to replicate and probably shouldn't try.

Sustainability looks good. Defensive objectives are easier to maintain politically across administrations than aggressive competition. "Preserving our freedom to develop AI our way" sustains support better than "achieving AI dominance over China." The risk is defensive passivity. If preservation becomes mere reaction to Chinese initiatives rather than active defense, political will exhausts itself. Containment worked as active defense: Marshall Plan, NATO, cultural diplomacy. Passive defense that just reacts would fail the same way.

Coordination mechanisms present the biggest challenge. This strategy requires institutional frameworks that don't exist yet. The EU-U.S. TTC provides a starting point. Democratic AI coordination needs something approaching NATO's institutional strength with common standards, shared resources, and coordinated responses to threats. Building these mechanisms is achievable but requires political investment we haven't made.

Adaptive resilience depends on maintaining strategic coherence while adjusting tactics. Containment evolved from Kennan's original conception but kept its core objective constant. Can this AI strategy do the same? The test comes when China makes advances that challenge defensive positions. Do we maintain focus on preserving autonomy, or do we lurch between complacency when they're behind and panic when they're ahead? The risk with defensive strategies is they become reactive, always responding to adversary moves rather than shaping competition according to our own logic.


The case for this strategy is straightforward. The approach is the safest option, the most direct descendant of containment, the strategy most aligned with democratic institutional strengths. It offers clear political objectives where current policy has only slogans. It accepts the reality that we can't prevent Chinese AI development and focuses instead on preserving our own freedom of action.

The case against is equally clear. Safety has costs. This strategy accepts a divided AI world with democratic and authoritarian spheres that don't fully integrate. It risks reactive posture where we're always responding to Chinese moves rather than shaping competition ourselves. It requires building coordination mechanisms across democracies that would demand sustained political investment across multiple administrations.

Most fundamentally, the strategy assumes preservation is enough. That maintaining autonomous capability while China pursues aggressive expansion will produce acceptable outcomes over decades. The Monroe Doctrine worked because American advantages in the Western Hemisphere compounded over time. Containment worked because democratic advantages compounded during the Cold War.

Will democratic advantages in AI compound if we focus on preservation? Or does AI competition reward aggressive expansion in ways defensive strategies can't counter? Does maintaining the ability to develop democratic AI systems matter if authoritarian systems become the global default through infrastructure dependencies, standards capture, and ecosystem lock-in?

I don't know the answer yet. That's why we're examining eight alternatives rather than declaring one correct. Next week we'll look at the opposite approach: "Innovation Ecosystem Dominance," making democratic AI development so attractive that restricting access becomes China's strategic vulnerability rather than ours.


What This Means For...

Policymakers: "Preserve Democratic Technological Autonomy" provides the clear political objective that current AI policy lacks, but executing it requires institutional mechanisms you haven't built yet. The EU-U.S. TTC provides a starting point. Real autonomy preservation requires something approaching NATO for AI with frameworks for coordinating democratic AI development, governance approaches, and standards engagement. If you're not willing to invest in building those mechanisms, don't pretend you're pursuing a defensive autonomy strategy. You're just improvising with a nicer-sounding name.

U.S. Strategic Competition: Defensive strategies succeed when they preserve genuine advantages while creating conditions for those advantages to compound over time. They fail when defense becomes passivity, when preserving the status quo means losing ground to more dynamic competitors. If you choose this path, you need discipline to avoid lurching between complacency and panic when China makes advances. More critically, you need to build the alliance coordination that makes collective autonomy possible. Individual democracies preserving individual autonomy just creates targets for Chinese divide-and-conquer approaches.

Tech Companies: A defensive autonomy strategy means more coordinated democratic approaches to AI governance, clearer frameworks for what capabilities democracies need to maintain, and likely a more predictable regulatory environment within the democratic sphere. But it also means accepting fragmentation, with democratic and authoritarian AI ecosystems that don't fully interoperate. Companies focused on global markets face harder choices about which ecosystem to prioritize, and straddling both becomes more expensive as the spheres diverge.

Aspiring Strategic Thinkers: Defensive strategies work when they leverage institutional strengths and create conditions where your advantages compound over time. They fail when defense becomes passivity or when preserving your current position means losing ground to competitors with higher growth rates. The test for any defensive strategy is whether your defensive posture creates conditions for long-term advantage or just delays eventual disadvantage. The Monroe Doctrine and containment both passed that test. Does this strategy?
