
The Regulatory Trilemma: How Three Superpowers Are Building Incompatible AI Governance Systems

The US prioritizes innovation, the EU prioritizes rights, China prioritizes state control. These three AI governance frameworks are diverging, not converging — forcing global companies to navigate incompatible regulatory logics simultaneously.

By OrdoResearch
This blog summarizes research trends based on published paper abstracts. Specific numbers or findings may contain inaccuracies. For scholarly rigor, always consult the original papers cited in each post.

The United States, European Union, and China are constructing three distinct frameworks for governing artificial intelligence, and these frameworks are diverging rather than converging. The US prioritizes innovation through light-touch regulation and voluntary commitments. The EU prioritizes rights through comprehensive legislation and risk classification. China prioritizes state control through sector-specific rules and strategic deployment mandates. Each approach reflects deep institutional values that cannot be easily reconciled, and companies operating globally must now navigate three regulatory logics simultaneously.

The Transatlantic Divergence

Birchfield (2024), in the Journal of European Integration, traces the evolution from transatlantic roadmaps for AI governance toward fundamentally different regulatory architectures. The early optimism that the US and EU would converge on shared AI governance principles has given way to recognition of structural divergence. The EU AI Act creates legally binding obligations organized around risk categories, with prohibited practices, high-risk requirements, and transparency obligations. The US approach relies on executive orders, voluntary industry commitments, and sector-specific guidance from existing agencies, avoiding comprehensive legislation.

The divergence is not merely procedural but philosophical. The EU treats AI governance as a rights issue — the AI Act is anchored in fundamental rights protections and extends the European tradition of precautionary regulation to algorithmic systems. The US treats AI governance as an innovation issue — regulations are evaluated primarily by their impact on competitiveness and technological leadership. These framings produce different answers to the same questions: should facial recognition be banned (EU: in most public spaces, yes) or regulated (US: case by case)? Should foundation model providers bear compliance obligations (EU: yes, under the GPAI provisions) or self-regulate (US: through voluntary safety commitments)?

Comparative Governance Architectures

Kulothungan and Gupta (2025), presenting at IEEE BigDataSecurity, provide a systematic comparison across the US, EU, and Asian governance frameworks. Their analysis reveals that the differences extend beyond regulatory philosophy to institutional design. The EU creates new institutions — the European AI Office, national competent authorities, standardization bodies — with dedicated mandates for AI governance. The US distributes AI oversight across existing agencies — the FTC for consumer protection, the FDA for medical devices, the SEC for financial applications — without creating a central AI authority. Asian approaches vary widely, from Singapore's pragmatic Model AI Governance Framework to South Korea's comprehensive AI Framework Act.

The practical consequence for multinational organizations is regulatory fragmentation. A facial recognition system that is legal in the US, prohibited in the EU, and encouraged in China cannot be governed by a single compliance framework. Organizations must develop jurisdiction-specific governance architectures, increasing costs and complexity while reducing the speed at which AI products can be deployed globally.

Digital Geopolitics

Barac and Lopez Rodriguez (2026), in the International Review of Economic Policy, frame AI governance divergence as digital geopolitics — a competition for regulatory influence that parallels the competition for technological capability. The EU's strategy is to export its regulatory model through the "Brussels Effect," making the AI Act a de facto global standard because companies serving the European market must comply regardless of their home jurisdiction. China's strategy embeds AI governance within its broader technology sovereignty agenda, using regulation to channel AI development toward state priorities. The US strategy relies on market leadership — maintaining the position that American companies set global standards through commercial dominance rather than regulatory mandate.

The geopolitical dimension introduces a competitive dynamic that complicates cooperation. Each jurisdiction has incentives to calibrate its regulations to advantage domestic industry: the EU's strict rules may protect European citizens but also create barriers for non-European competitors; US light-touch regulation may foster innovation but also create accountability gaps; China's state-directed approach may accelerate strategic applications but also limit market access for foreign companies.

The Convergence Question

Whether these three regulatory traditions will eventually converge or continue diverging is the central question for the next decade of AI governance. The forces favoring convergence include the global nature of AI supply chains, the desire of multinational companies for regulatory consistency, and emerging multilateral forums like the G7 Hiroshima Process and the Global Partnership on AI. The forces favoring divergence include deep institutional differences, geopolitical competition, and the political difficulty of harmonizing regulations that reflect fundamentally different values about the relationship between technology, rights, and state power.

The most likely outcome is neither convergence nor divergence but layered governance — a system where international agreements set minimum standards while jurisdictions maintain distinctive regulatory approaches above those baselines. This is already the pattern in data protection (GDPR as the high-water mark, with other jurisdictions adopting compatible but not identical frameworks) and may be the template for AI governance as well.


References

  • Birchfield, V. L. (2024). From Roadmap to Regulation: Transatlantic Approach to Governing AI? J European Integration. DOI:10.1080/07036337.2024.2407571
  • Kulothungan, V. & Gupta, D. (2025). Towards Adaptive AI Governance: Comparative Insights from US, EU, Asia. IEEE BigDataSecurity. DOI:10.1109/BigDataSecurity66063.2025.00018
  • Barac, M. & Lopez Rodriguez, M. I. (2026). Digital Geopolitics: Regulatory Policy in AI in US, China and EU. IREP. DOI:10.7203/irep.7.2.32668
