The White House AI Framework: Writing Rules About Ourselves Amidst Global Competition

All views in this newsletter are my own and do not represent the views of The R Street Institute, the US Navy, or any other organization I am affiliated with.


Between 1935 and 1937, Congress passed three Neutrality Acts, each more detailed than the last. They contained arms embargoes, loan prohibitions, cash-and-carry provisions, and travel warnings for Americans in war zones. The legislative craftsmanship was genuine, but the strategic logic fell short.

While Congress was focused on regulating American behavior with increasing precision, Germany remilitarized the Rhineland, absorbed Austria, and formalized its alliances with Italy and Japan. The competition was reorganizing the world while America was writing rules about itself.

The strategic failure of the Neutrality Acts doesn't depend on whether isolationism was a rational or irrational policy. The strategic failure came from focusing on addressing a domestic question while the strategic competition was happening internationally. Every Senate debate about cash-and-carry provisions was a debate America was having with itself. The Axis was not bound by American neutrality law. It was building the military-industrial capacity, the alliance architecture, and the territorial position that would determine the outcome regardless of what Congress passed. By the time the strategic situation clarified in December 1941, the competition had been running for six years without American participation.

Today, the White House released its seven-section National Policy Framework for Artificial Intelligence. The first six sections are entirely domestic: children's protections, community safeguards, intellectual property, free speech, innovation policy, and workforce development. The seventh addresses federal preemption of state laws. International competition appears once, in a subordinate clause, as a justification for why states should not regulate AI development.

Across seven sections, international competition receives just one subordinate clause. My hope that this would somehow be a grand strategy for AI competition, arriving the day after I finished an eight-part series on exactly that subject, didn't survive first contact with the document. The need remains unaddressed: the United States still requires a Grand Strategy for AI competition, especially if politicians continue to frame AI development in terms of global competition.


While it doesn't address international competition, the domestic provisions surrounding federal preemption are genuinely good policy. Preventing the emergence of a fragmented patchwork of state AI laws addresses a real problem: inconsistent state regulation imposes fixed compliance costs that favor large incumbents over smaller competitors. The innovation provisions (regulatory sandboxes, federal datasets in AI-ready formats, and sector-specific regulation through existing agencies rather than a new AI rulemaking body) reflect a market-friendly approach consistent with evidence-based governance.

The real issue with this document is what it omits entirely.

China is not competing with America by writing better domestic AI governance. Beijing is flooding international standards bodies with proposals, with 145 new ITU submissions in 2021 alone, six times the Western rate. Chinese firms are building telecom infrastructure across Africa, data centers across Southeast Asia, and digital dependencies across the developing world through the Digital Silk Road. DeepSeek, Llama, and Mistral are eroding the ecosystem lock-in that made American platform dominance durable. The Commerce Department recently announced that it will begin accepting proposals for full-stack AI export packages to allied nations, the closest the administration has come to an affirmative international strategy, yet the framework does not mention it.

The Neutrality Acts weren't stupid legislation. They reflected genuine public concern about entanglement in European conflicts, real economic interests, and legitimate constitutional debates. They failed strategically because they addressed the wrong problem. America's challenge in 1937 was not regulating its own behavior more carefully; it was the consolidation of hostile power abroad. The Acts drew American attention inward at the exact moment the competition required outward engagement.

The National Policy Framework reflects the same error. American AI governance matters. A minimally burdensome national standard, strong innovation incentives, and sensible preemption of destructive state regulation all contribute to competitive position. But domestic governance is a supporting pillar, not a center of gravity. The center of gravity in AI competition is who writes the standards others adopt, whose infrastructure becomes the default in developing nations, and whose platforms allies build their ecosystems on.

The framework is silent on all three.

Congress passed the last Neutrality Act in 1937. By 1941, the strategic situation had clarified considerably. The question for American AI strategy is whether clarity arrives before or after the competition has been decided elsewhere.


This piece serves as a coda to my Grand Strategy for AI Competition series. The final installment, examining Networked Technological Leadership and the Marshall Plan analogy, is here.