
The Classification Battle: Who Decides Which AI Systems Are Too Dangerous to Deploy?


By OrdoResearch
This blog summarizes research trends based on published paper abstracts. Specific numbers or findings may contain inaccuracies. For scholarly rigor, always consult the original papers cited in each post.

The EU AI Act's risk classification system determines which AI applications are prohibited, which require extensive compliance, and which face no specific regulation. This classification — the boundary between "high-risk" and "limited-risk," between "prohibited" and "permitted" — is where the Act's abstract principles meet concrete economic interests. Every company with a product near the boundary has a stake in where the line falls, and the battle over classification is becoming the central arena of EU AI governance.

Fundamental Rights Impact Assessment

Pusztahelyi (2025), in Curentul Juridic, reflects on the fundamental rights impact assessment requirements for high-risk AI under the Act. The assessment framework requires deployers of high-risk AI to evaluate how their system affects fundamental rights — non-discrimination, privacy, freedom of expression, human dignity — before deployment. But the methodology for conducting such assessments is not specified in the Act, creating interpretive space that different member states and organizations are filling differently.

The challenge is methodological: how do you assess, before deployment, whether an AI system will discriminate against protected groups, violate privacy expectations, or undermine human dignity? The assessment requires predicting social impacts from technical properties — a translation that involves substantial uncertainty and judgment. Two assessors evaluating the same system may reach different conclusions, not because one is wrong but because the assessment inherently involves value judgments that technical analysis cannot resolve.

Constitutional Analysis

Baek (2025), in the European Constitutional Law Association journal, conducts a constitutional analysis comparing the EU AI Act's approach with Korea's AI Framework Act. Both adopt risk-based classification, but they differ in how they define risk categories, what rights they prioritize, and how they balance innovation promotion with rights protection. The EU Act emphasizes negative rights (protection from harm) while the Korean Act emphasizes both negative rights and positive rights (access to AI benefits).

The comparison reveals that risk classification is not a neutral technical exercise but a political one — it reflects choices about which risks matter most, whose interests are prioritized, and what level of residual risk society is willing to accept. These choices are embedded in constitutional traditions, legal cultures, and political economies that differ across jurisdictions.

Operationalizing Assessment

Ceravolo et al. (2025) propose HH4AI, a methodological framework for AI human rights impact assessment. Their contribution is to operationalize what the AI Act requires but does not specify: a structured methodology for evaluating AI systems against human rights standards. The framework includes stakeholder identification (who might be affected?), rights mapping (which rights are relevant?), impact analysis (how might the system affect those rights?), and mitigation planning (what safeguards can reduce identified risks?).
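The structure of such an assessment is easier to see in concrete form. The sketch below is purely illustrative: the class names, fields, and severity scale are our own shorthand for the four steps, not HH4AI's actual schema or terminology.

```python
from dataclasses import dataclass, field

# Illustrative record types for a rights impact assessment; names and
# fields are hypothetical shorthand for the four steps described above,
# not the schema used in the HH4AI paper.

@dataclass
class RightsImpact:
    right: str                  # rights mapping: which right is implicated
    affected_groups: list[str]  # stakeholder identification: who is affected
    severity: str               # impact analysis: "low" | "medium" | "high"
    rationale: str              # the assessor's judgment, stated explicitly

@dataclass
class ImpactAssessment:
    system: str
    stakeholders: list[str] = field(default_factory=list)
    impacts: list[RightsImpact] = field(default_factory=list)
    mitigations: dict[str, str] = field(default_factory=dict)  # mitigation planning

    def unmitigated_high_risks(self) -> list[RightsImpact]:
        """High-severity impacts with no recorded mitigation."""
        return [i for i in self.impacts
                if i.severity == "high" and i.right not in self.mitigations]
```

Note the `rationale` field: because two assessors can legitimately reach different conclusions about the same system, a structured record forces each value judgment to be stated rather than left implicit.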

The framework also addresses a practical gap in current assessment practice: the difference between intended use and foreseeable misuse. A facial recognition system intended for building access control might foreseeably be repurposed for protest surveillance. A content moderation AI intended to remove illegal content might foreseeably suppress legitimate speech. Impact assessment must evaluate not just the designed function but the plausible scope of deployment — and the boundary between foreseeable and unforeseeable use is itself a contested judgment.
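Continuing the hypothetical sketch above, one way to make foreseeable misuse visible is to assess the same system once per plausible deployment scenario, so that a repurposed use produces its own record instead of disappearing into the intended-use analysis. All values here are illustrative.

```python
# The facial recognition example from the text, assessed under both
# its intended use and a foreseeable misuse (values are illustrative).
intended = ImpactAssessment(system="face recognition: building access control")
intended.stakeholders = ["employees", "visitors"]
intended.impacts.append(RightsImpact(
    right="privacy", affected_groups=["employees", "visitors"],
    severity="medium", rationale="biometric templates retained at entry"))
intended.mitigations["privacy"] = "on-device matching, short retention limit"

foreseeable = ImpactAssessment(system="face recognition: protest surveillance")
foreseeable.stakeholders = ["protesters", "bystanders"]
foreseeable.impacts.append(RightsImpact(
    right="freedom of expression", affected_groups=["protesters"],
    severity="high", rationale="chilling effect on lawful assembly"))

# No mitigation is recorded for the misuse scenario, so it is flagged.
print(foreseeable.unmitigated_high_risks())
```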

The classification battle will intensify as the AI Act moves from legislation to enforcement. Every classification decision creates winners and losers — companies whose products fall on the favorable side of the boundary and those whose products face additional compliance costs. The methodology for making these decisions, the institutions that make them, and the appeals processes available to those who disagree will determine whether the AI Act achieves its goal of risk-proportionate regulation or becomes a source of regulatory uncertainty that disadvantages European AI development.

The Lobbying Dimension

Behind the technical debates about classification methodology lies an intense lobbying effort. AI companies, industry associations, civil society organizations, and member state governments all seek to influence where classification boundaries fall. Companies with products near the high-risk threshold lobby for narrower definitions of high-risk categories. Civil society organizations lobby for broader definitions that capture more AI applications under enhanced oversight.

This lobbying dynamic is not inherently problematic — it is how democratic governance incorporates diverse interests. But it becomes problematic when lobbying operates through opaque channels, when industry expertise is confused with independent analysis, and when the technical complexity of AI classification allows interested parties to frame self-serving arguments as neutral technical judgments. The credibility of the classification system depends on transparent governance processes that acknowledge and manage these competing interests.

The classification battle is ultimately about values, not technology. Whether a particular AI system is "high-risk" depends not just on its technical capabilities but on the social context in which it operates, the population it affects, and the rights it implicates. These are value judgments that should be made through democratic processes with appropriate expertise, not delegated to technical committees operating without public accountability.


References

  • Pusztahelyi, R. (2025). Reflections on FRIA for High-Risk AI. Curentul Juridic. DOI: 10.62838/cjjc-2024-0059
  • Baek, S. (2025). Constitutional Analysis: EU AI Act and Korea's AI Framework Act. ECLA. DOI: 10.21592/eucj.2025.48.189
  • Ceravolo, P. et al. (2025). HH4AI: Methodological Framework for AI Human Rights Impact Assessment. arXiv:2503.18994. https://arxiv.org/abs/2503.18994