The EU AI Act entered into force in August 2024, with prohibitions on certain AI practices applying from February 2025 and general-purpose AI obligations following in August 2025. The law is comprehensive, ambitious, and, according to mounting evidence, not being implemented as designed. The gap between the regulation as written and compliance as practiced is the defining challenge of European AI governance in 2025-2026.
Enforcement Design Patterns
Soderlund and Larsson (2024), in Digital Society, analyze the enforcement design patterns embedded in the EU AI Act and find structural tensions that complicate implementation. The Act creates a multi-level enforcement architecture — the European AI Office for cross-border and GPAI issues, national competent authorities for domestic oversight, and market surveillance authorities for product compliance. But the capacities, mandates, and institutional cultures of these bodies vary enormously across member states.
Some member states have designated existing data protection authorities as AI regulators, leveraging GDPR enforcement experience. Others have created new institutions with dedicated AI mandates. Still others have not yet designated a competent authority at all, leaving enforcement vacuums in jurisdictions where AI development is active. The result is a regulatory patchwork within a supposedly harmonized framework: the very outcome the AI Act was designed to prevent.
Interdisciplinary Governance
Zhong (2024), in AI Magazine, argues that effective implementation of the EU AI Act requires interdisciplinary governance capacities that most regulatory bodies currently lack. The Act's risk-based approach depends on assessors who understand both the technical properties of AI systems and their social impacts, a combination of expertise that is scarce. A high-risk classification decision, for instance, demands an understanding of how a machine learning model processes data (technical) and of how that processing might affect fundamental rights in a specific deployment context (social, legal, and ethical).
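The deployment-context half of this judgment is written into the Act itself: under Article 6(2), a system is high-risk not because of its architecture but because its intended use falls within an area listed in Annex III, such as employment, education, or law enforcement. The sketch below illustrates how context-driven that test is; the area labels are shorthand for the Annex III headings, and the function is illustrative rather than any official tooling.

```python
# Illustrative sketch: Annex III classification keys on intended use,
# not on model internals. Area labels abbreviate the Annex III headings;
# the function and its inputs are hypothetical.

ANNEX_III_AREAS = {
    "biometrics",
    "critical_infrastructure",
    "education_and_vocational_training",
    "employment_and_worker_management",
    "access_to_essential_services",
    "law_enforcement",
    "migration_asylum_border_control",
    "justice_and_democratic_processes",
}

def is_high_risk(intended_use_area: str) -> bool:
    """The same model can land on either side of the line
    depending on where it is deployed."""
    return intended_use_area in ANNEX_III_AREAS

print(is_high_risk("employment_and_worker_management"))  # True: e.g. CV screening
print(is_high_risk("video_game_content"))                # False: not a listed area
```

What the sketch makes concrete is Zhong's point: nothing in the technical artifact answers the classification question. The assessor has to reason about deployment context, which is precisely where legal and social-science expertise enters.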
The competence gap is particularly acute for general-purpose AI models, where the AI Office must assess systemic risks. What constitutes a systemic risk from a foundation model? How should capability evaluations be conducted? What thresholds trigger additional obligations? These questions require a form of expertise — at the intersection of ML research, safety engineering, and regulatory science — that is genuinely novel and in extremely short supply.
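On thresholds, the Act does supply one concrete anchor: Article 51 presumes that a general-purpose model has high-impact capabilities, and therefore systemic risk, when the cumulative compute used for its training exceeds 10^25 floating-point operations, and the Commission can also designate models directly. A minimal sketch of that presumption follows; the class, field, and function names are illustrative assumptions, not anything defined in the regulation.

```python
# Sketch of the Art. 51 systemic-risk presumption for GPAI models.
# The 1e25 FLOP threshold and the Commission-designation route are in
# the Act; the class, field, and function names are hypothetical.

from dataclasses import dataclass

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # cumulative training compute, Art. 51(2)

@dataclass
class GPAIModel:
    name: str
    training_flops: float                # cumulative compute used in training
    commission_designated: bool = False  # Art. 51(1)(b) designation decision

def presumed_systemic_risk(model: GPAIModel) -> bool:
    """Presumption triggers on the compute threshold or on designation."""
    return (model.training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD
            or model.commission_designated)

print(presumed_systemic_risk(GPAIModel("frontier", 3e25)))  # True: above 1e25
print(presumed_systemic_risk(GPAIModel("mid-size", 5e23)))  # False: below
```

The bright-line threshold only defers the expertise problem, of course: deciding whether a below-threshold model nonetheless warrants designation is exactly the novel evaluative judgment described above.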
National Competent Authorities
Parisini and Dervishaj (2025) examine the emerging models of national competent authorities under the AI Act. Their analysis reveals substantial variation in institutional design: some countries are creating independent AI authorities (modeled on data protection authorities), others are embedding AI oversight within existing digital regulators, and still others are distributing AI governance responsibilities across multiple agencies. Each model has trade-offs between independence, expertise, efficiency, and coordination capacity.
The implementation gap is not primarily a failure of political will but a capacity problem. The AI Act asks regulatory institutions to do things they have never done before — classify AI systems by risk, audit algorithmic decision-making, evaluate foundation model capabilities, and enforce compliance across a rapidly evolving technology landscape. Building the institutional capacity to do this well will take years, and the pace of AI development is not waiting.
The SME Burden
The implementation gap falls hardest on small and medium-sized enterprises, which account for some 99% of EU businesses. Large technology companies can absorb compliance costs through dedicated legal and compliance teams; SMEs lack those resources and bear a disproportionate burden from complex regulatory requirements.
The AI Act includes provisions for SME support — including priority access to regulatory sandboxes, simplified compliance templates, and proportionate requirements for certain categories of AI systems. But these provisions must be operationalized by member states, and the quality and availability of SME support varies significantly across the EU. An Italian startup developing a medical AI has access to different support resources than a Finnish startup developing the same product, despite both operating under the same legislative framework.
The risk is that the AI Act inadvertently concentrates AI development in large companies that can afford compliance while excluding smaller innovators from high-risk categories where the compliance burden is greatest. This outcome would contradict the EU's stated goal of maintaining a competitive and diverse AI ecosystem. Preventing it requires not just regulatory support but fundamental choices about how compliance obligations are calibrated to organizational capacity.
The implementation gap is, in the end, a measure of the distance between legislative ambition and institutional reality. The AI Act sets standards that represent best practice in AI governance. Meeting those standards requires institutional capacities (technical expertise, organizational flexibility, cross-functional coordination) that most regulatory bodies and most regulated organizations are still developing. Closing this gap is the work of years, not months.