Governments write AI regulations, fund AI research, and promote AI adoption — and then struggle to adopt AI themselves. This irony is documented across multiple countries: the public sector that governs AI is among the slowest to implement it. The reasons are structural, not accidental, and they reveal fundamental tensions between how governments operate and what AI adoption requires.
The Adoption Paradox
Aninakwah (2026), in the Journal of Information Technology, examines digital transformation and AI adoption in government. The study finds that public sector AI adoption faces barriers that do not exist — or exist in much weaker forms — in the private sector. Procurement regulations designed to ensure fairness and accountability create lengthy, rigid acquisition processes ill-suited to rapidly evolving AI technologies. By the time a government agency completes procurement, the technology it specified may be obsolete.
Civil service structures create additional barriers. AI implementation requires cross-functional teams combining technical, domain, and policy expertise. Government organizations are typically structured in functional silos with limited mobility between them. Building the interdisciplinary teams that AI projects require means working against established organizational structures, career paths, and incentive systems.
The trust deficit compounds these structural barriers. Citizens hold government AI to higher accountability standards than private sector AI — understandably, since government decisions affect rights, benefits, and legal status. This higher standard creates a risk-averse culture where the potential costs of AI failure (public outcry, legal challenges, political consequences) loom larger than the potential benefits of AI adoption.
Expanding Adoption
Alamaki (2025), in Transforming Government, examines strategies for expanding AI adoption in public sector organizations. The most effective approaches do not attempt to transform government AI capability in a single initiative; instead, they build it incrementally through pilot projects that demonstrate value, develop expertise, and establish organizational confidence.
The study identifies a critical success factor: executive sponsorship that is sustained rather than episodic. AI adoption in government fails most often not because pilot projects perform poorly but because leadership attention moves to other priorities before pilots can be scaled. The gap between pilot success and organizational adoption is where most public sector AI initiatives die — a valley of death that requires persistent executive commitment to cross.
China's Public Sector Challenges
Li and Segumpan (2025) examine AI adoption challenges in China's public sector, revealing that even in a country with strong state-directed technology policy, adoption encounters significant obstacles. Local government capacity varies enormously across China's diverse regions. Officials in coastal technology hubs have access to AI expertise and infrastructure; officials in inland regions may lack both. National policy that mandates AI adoption does not automatically create the local capacity to implement it.
The Chinese experience illustrates a general principle: AI adoption in government is not primarily a technology problem but a capacity problem. The technology exists. The policies exist. What is often missing is the organizational capacity — the skilled people, flexible processes, and institutional culture — to translate technology and policy into functioning systems that serve citizens effectively.
The common pattern across all three contexts is that government AI adoption requires institutional reform, not just technology procurement. Governments that succeed in AI adoption will be those that reform their procurement, workforce, and organizational structures to accommodate the iterative, cross-functional, and continuously evolving nature of AI implementation. Those that attempt to insert AI into unreformed institutional structures will continue to lag, regardless of how much they invest in the technology itself.
Cultural and Political Barriers
Beyond structural obstacles, public sector AI adoption faces cultural and political barriers that technical solutions cannot address. The political risk of AI failure in government is asymmetric: a successful AI deployment generates modest positive attention, while a failed deployment generates intense negative coverage. This asymmetry creates a rational incentive for risk aversion — the career consequences of being associated with an AI failure far outweigh the rewards of a successful implementation.
The cultural barrier is related: government organizations value consistency, predictability, and equal treatment. AI systems, by their nature, produce probabilistic outputs that may vary across cases. A benefits determination system that uses AI to assess eligibility must explain why seemingly similar cases receive different outcomes — a requirement that is trivially satisfied by rule-based systems but genuinely challenging for machine learning models.
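The explainability gap described above can be made concrete with a small sketch. This is a hypothetical illustration, not any agency's actual system: the thresholds, features, and weights are invented. The point is structural — a rule-based check yields the rule that fired as its explanation, while a model's weighted score can push two near-identical cases to opposite sides of a cutoff with no rule-shaped reason to offer.

```python
# Hypothetical eligibility sketch. All thresholds, features, and weights
# are illustrative assumptions, not real policy parameters.

def rule_based_eligible(income: float, dependents: int) -> tuple[bool, str]:
    """Deterministic check: the explanation IS the rule that fired."""
    if income > 30_000:
        return False, "income exceeds 30,000 threshold"
    if dependents == 0:
        return False, "no dependents"
    return True, "income under threshold and at least one dependent"

def model_based_eligible(features: list[float], weights: list[float]) -> bool:
    """Probabilistic-style scoring: a weighted sum crossing a cutoff.
    Nothing in the decision maps back to a citable rule."""
    score = sum(f * w for f, w in zip(features, weights))
    return score > 0.5

weights = [0.4, 0.3]
# Two nearly identical applicants diverge at the cutoff...
print(model_based_eligible([0.9, 0.5], weights))  # True  (score 0.51)
print(model_based_eligible([0.8, 0.5], weights))  # False (score 0.47)
# ...while the rule-based check always returns an auditable reason.
print(rule_based_eligible(25_000, 2))
```

The contrast is why explanation requirements that are trivial for rule-based systems become a genuine engineering and governance problem once eligibility turns on a learned score.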
The path forward likely involves changing not just technology but institutional culture — developing organizational capacities for experimentation, learning from failure, and managing the productive tension between consistency requirements and the probabilistic nature of AI outputs. This cultural change is slower and harder than technology adoption, and it requires sustained leadership commitment that outlasts individual political cycles.