
The Innovation-Regulation Paradox: Why Withdrawing AI Rules Made Things Worse, Not Better


By OrdoResearch
This blog summarizes research trends based on published paper abstracts. Specific numbers or findings may contain inaccuracies. For scholarly rigor, always consult the original papers cited in each post.

In early 2025, the European Commission withdrew the proposed AI Liability Directive — the companion legislation to the AI Act that would have established clear rules for liability when AI systems cause harm. The withdrawal was framed as deregulation, a response to industry concerns that excessive regulation was hampering European AI competitiveness. But the withdrawal did not eliminate liability questions — it merely left them to be resolved through 27 different national legal systems, each applying its own interpretation of existing product liability and tort law to AI-related harm. The result is not less regulation but more legal uncertainty.

The Fragmentation Problem

The withdrawal of the AI Liability Directive created precisely the legal fragmentation that harmonized EU legislation is designed to prevent. A company deploying an AI system across multiple EU member states now faces different liability regimes in each jurisdiction. In some countries, strict liability may apply to AI-caused harm. In others, fault-based liability requires proving negligence. In still others, the legal status of AI-caused harm has not been tested in court at all.

This fragmentation imposes costs that arguably exceed those of the proposed directive. Companies must seek legal advice in every jurisdiction they operate in. Victims of AI-caused harm face different prospects for compensation depending on which country they live in. And the legal uncertainty discourages precisely the cross-border AI deployment that the EU's Digital Single Market strategy is designed to promote.

The Competitiveness Argument

The competitiveness argument for deregulation assumes that less regulation means more innovation. But the evidence from other technology domains suggests a more complex relationship. Clear, predictable regulation can promote innovation by creating stable expectations, establishing consumer trust, and preventing races to the bottom on safety. Unclear or fragmented regulation can hinder innovation by increasing legal risk, raising compliance costs, and deterring investment in markets where the rules are unpredictable.

The EU's own experience with GDPR is instructive. When GDPR was proposed, industry argued it would cripple European technology competitiveness. After implementation, GDPR became a global standard that provided European companies with a competitive advantage in trust-sensitive markets. The AI Act may follow a similar trajectory — initially perceived as a burden, eventually recognized as a competitive asset.

The Missing Piece

The innovation-regulation paradox reveals a deeper issue: the distinction between good regulation and bad regulation matters more than the quantity of regulation. Regulation that is clear, predictable, proportionate, and adaptable supports innovation by providing a stable framework for investment and development. Regulation that is vague, burdensome, rigid, or unpredictable hinders innovation by creating uncertainty and compliance costs that disproportionately affect smaller companies and newer entrants.

The EU's challenge is not to regulate less or more but to regulate well — creating a framework that provides legal certainty for developers, meaningful protection for users, and sufficient flexibility to accommodate technological change. The withdrawal of the AI Liability Directive moved European AI governance away from this goal, not toward it. The regulation that was withdrawn may have been imperfect, but its absence is worse.

Lessons from Adjacent Domains

The pharmaceutical industry offers a useful parallel. Drug regulation is among the most stringent in any sector — extensive clinical trials, mandatory safety reporting, post-market surveillance. Yet pharmaceutical innovation has not been destroyed by regulation; it has been shaped by it. Companies invest in the areas where the regulatory pathway is clearest, and they design products to meet regulatory requirements from the outset. The cost of compliance is substantial but predictable, and predictability enables long-term investment.

AI regulation could follow a similar trajectory if it achieves the same combination of stringency and predictability. The current problem is not that AI regulation is too strict but that it is too uncertain — companies do not know which rules will apply, how they will be interpreted, or how enforcement will work in practice. Resolving this uncertainty, even through strict regulation, would provide a more favorable environment for innovation than the current fragmented landscape.

The lesson is counterintuitive but well-supported by evidence from other sectors: the relationship between regulation and innovation is not linear (more regulation = less innovation) but follows an inverted U, with innovation peaking at an intermediate level of regulation. Too little regulation creates uncertainty and races to the bottom. Too much regulation creates rigidity and compliance burdens. The optimal point — clear, proportionate, adaptive regulation — promotes innovation by creating stable expectations and consumer trust. The EU's challenge is to find this optimal point for AI.

The withdrawal also created a signaling problem. By retreating from AI liability legislation under industry pressure, the Commission signaled that regulatory commitments in AI governance are negotiable. This undermines the credibility of remaining regulations and may encourage further lobbying for deregulation. The precedent — that sufficiently organized industry opposition can roll back proposed AI governance measures — has implications beyond liability for the entire EU AI governance architecture.


References

  • EU AI Liability Directive withdrawal analysis — multiple European legal scholars, 2025. [Google Scholar](https://scholar.google.com/scholar?q=EU%20AI%20Liability%20Directive%20withdrawal%20analysis%20%E2%80%94%20multiple%20European%20legal%20scholars)
  • Comparative analysis of national AI liability regimes across EU member states. [Google Scholar](https://scholar.google.com/scholar?q=Comparative%20analysis%20of%20national%20AI%20liability%20regimes%20across%20EU%20member%20states.)
  • Historical precedent: GDPR's evolution from perceived burden to competitive advantage. [Google Scholar](https://scholar.google.com/scholar?q=Historical%20precedent%3A%20GDPR%27s%20evolution%20from%20perceived%20burden%20to%20competitive%20adva)
