Critical Review · Philosophy & Ethics

Epistemic Injustice in AI Ethics: Who Gets to Define What's Fair?

AI ethics is shaped by who produces the research. A quantitative analysis of 5,755 publications reveals that Global South perspectives are systematically underrepresented—raising the question of whether current AI ethics frameworks encode the values of a narrow slice of humanity.

By Sean K.S. Shin
This blog summarizes research trends based on published paper abstracts. Specific numbers or findings may contain inaccuracies. For scholarly rigor, always consult the original papers cited in each post.

AI ethics has become a global concern, but it is not a global conversation. The principles, frameworks, and standards that govern AI development are produced overwhelmingly by researchers and institutions in the Global North—primarily the United States, United Kingdom, and European Union. When these frameworks are exported as universal standards (through regulation, international agreements, or corporate policy), they carry with them the philosophical assumptions, cultural values, and policy priorities of their creators. The question is whether this matters—and if so, what can be done about it.

The Research Landscape

Quantifying the Knowledge Gap

Safir, McInerney, and Blackwell (2025) provide the most rigorous quantitative evidence to date for epistemic inequality in AI ethics. Their study analyzes a comprehensive database of 5,755 scientific publications in AI ethics (1960 to June 2024) drawn from Web of Science, examining the geographic, institutional, and linguistic distribution of knowledge production.

Key findings:

  • Geographic concentration: Over 70% of AI ethics publications originate from institutions in North America and Western Europe. The entire African continent produces fewer AI ethics papers than any single Ivy League university.
  • Citation asymmetry: Papers from Global North institutions receive substantially more citations than comparable papers from Global South institutions, a gap that recommendation and ranking systems then amplify by surfacing already-visible work.
  • Conceptual dominance: The conceptual vocabulary of AI ethics (fairness, accountability, transparency, explainability) was developed primarily in Anglo-American philosophical traditions. Alternative framings from other traditions (Ubuntu's relational ethics, Islamic karamah, Confucian ren) appear in less than 5% of the literature.
The authors frame this as distributive epistemic injustice—a situation where the capacity to produce and disseminate knowledge is unequally distributed in ways that systematically disadvantage certain groups. This is not just a representation problem (too few Global South voices); it is a content problem (the conceptual frameworks themselves reflect particular cultural assumptions).
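The concentration and citation metrics Safir et al. report can be computed from simple publication metadata. The sketch below uses invented toy records (the regions and citation counts are illustrative, not the actual Web of Science data) to show the two measures at issue: regional publication share and per-region mean citations.

```python
from collections import Counter

# Hypothetical publication records: (region, citation count).
# Illustrative only -- not the dataset analyzed by Safir et al.
papers = [
    ("North America", 120), ("North America", 45), ("Western Europe", 80),
    ("Western Europe", 60), ("East Asia", 30), ("Sub-Saharan Africa", 5),
    ("Latin America", 8), ("North America", 200), ("Western Europe", 15),
    ("South Asia", 10),
]

def region_shares(records):
    """Fraction of total publications contributed by each region."""
    counts = Counter(region for region, _ in records)
    total = sum(counts.values())
    return {region: n / total for region, n in counts.items()}

def mean_citations(records):
    """Average citations per paper, grouped by region."""
    sums, counts = Counter(), Counter()
    for region, cites in records:
        sums[region] += cites
        counts[region] += 1
    return {region: sums[region] / counts[region] for region in counts}

shares = region_shares(papers)
north = shares["North America"] + shares["Western Europe"]
print(f"Global North share: {north:.0%}")  # 60% in this toy sample
```

On real bibliometric data the same two functions would run over tens of thousands of records; the point is only that both the geographic-concentration and citation-asymmetry claims are directly measurable.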

Decolonial AI Ethics in Global Health

De Matas (2025) examines how epistemic injustice plays out in a specific domain: AI applications in global health. The paper argues that contemporary AI health tools are shaped by "narrative logics"—assumptions about whose experiences count, whose perspectives define the problem, and whose futures are imagined in algorithmic solutions.

For example, an AI diagnostic tool trained on imaging data from US hospitals may perform poorly on patients from sub-Saharan Africa—not because of technical limitations but because the training data reflects the disease prevalence, equipment quality, and clinical presentation patterns of one population. When such tools are deployed globally (as "AI for global health"), they carry embedded assumptions about what "normal" looks like that may not apply.
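The mechanism described above, a tool calibrated on one population misfiring on another, can be illustrated with a deliberately simplified simulation. Nothing here models any real diagnostic; the biomarker, baselines, and thresholding rule are all invented to show how a shifted population baseline alone inflates the false-positive rate.

```python
import random

random.seed(0)

def simulate(baseline, n=1000, disease_rate=0.1):
    """Toy biomarker readings: healthy ~ N(baseline, 10), diseased ~ N(baseline + 30, 10)."""
    data = []
    for _ in range(n):
        sick = random.random() < disease_rate
        mean = baseline + (30 if sick else 0)
        data.append((random.gauss(mean, 10), sick))
    return data

def fit_threshold(data):
    """Diagnostic cutoff: midpoint between mean healthy and mean diseased readings."""
    healthy = [x for x, s in data if not s]
    sick = [x for x, s in data if s]
    return (sum(healthy) / len(healthy) + sum(sick) / len(sick)) / 2

def false_positive_rate(data, threshold):
    """Fraction of healthy individuals flagged as diseased."""
    healthy = [x for x, s in data if not s]
    return sum(x > threshold for x in healthy) / len(healthy)

pop_a = simulate(baseline=100)   # population the tool was "trained" on
pop_b = simulate(baseline=130)   # population with a different physiological baseline
t = fit_threshold(pop_a)
print(f"FPR on population A: {false_positive_rate(pop_a, t):.1%}")
print(f"FPR on population B: {false_positive_rate(pop_b, t):.1%}")
```

The classifier is not "broken" in any technical sense; it simply encodes population A's baseline as "normal", which is exactly the embedded assumption De Matas is pointing at.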

De Matas proposes a decolonial framework that asks three questions of any AI health tool:

  • Whose data? Was the training data collected from the populations the tool will serve?
  • Whose definitions? Do the diagnostic categories reflect universal biomedical standards or culturally specific definitions of disease?
  • Whose benefit? Who profits from the tool's deployment, and who bears the risks of its errors?
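As an illustration only (De Matas proposes questions, not software), the three-question audit could be encoded as a simple checklist; every name below is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class HealthToolAudit:
    """Hypothetical checklist encoding the three decolonial questions."""
    data_from_served_population: bool   # Whose data?
    locally_valid_definitions: bool     # Whose definitions?
    benefits_and_risks_aligned: bool    # Whose benefit?

    def flags(self):
        """Return a description of each question the tool fails."""
        issues = []
        if not self.data_from_served_population:
            issues.append("training data not drawn from the served population")
        if not self.locally_valid_definitions:
            issues.append("diagnostic categories may not transfer to this context")
        if not self.benefits_and_risks_aligned:
            issues.append("benefits accrue elsewhere than where the risks fall")
        return issues

audit = HealthToolAudit(False, True, False)
for issue in audit.flags():
    print("FLAG:", issue)
```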
Bibliometric Evidence of Bias

Candra, Wadu, and Piter (2025) extend the analysis with a bibliometric study of AI use across the social sciences, confirming that the epistemic inequality identified by Safir et al. is not confined to AI ethics: AI applications throughout the social sciences show the same pattern. Their analysis finds that AI research in social science contexts concentrates in a small number of high-income countries, and that ethical challenges (algorithmic bias, social injustice, epistemic inequality) are acknowledged but rarely addressed substantively.

Dignity-Based Alignment

Zhao and Ren (2025) provide a constructive complement by proposing that human dignity, rather than utilitarian optimization, should anchor algorithmic bias governance. Their framework, which embeds dignity constraints at the data, model, and deployment levels, is designed to be culturally adaptable: different societies can define dignity according to their own philosophical traditions while maintaining a shared commitment to equal moral status.

Critical Analysis: Claims and Evidence

| Claim | Evidence | Verdict |
| --- | --- | --- |
| AI ethics knowledge production is concentrated in the Global North | Safir et al.'s analysis of 5,755 publications | ✅ Supported: over 70% from North America/Western Europe |
| Citation patterns amplify the visibility gap | Safir et al.'s citation analysis | ✅ Supported |
| AI health tools carry embedded cultural assumptions | De Matas's analysis of training data and diagnostic categories | ✅ Supported: specific mechanisms identified |
| Alternative ethical traditions are underrepresented in AI ethics literature | Safir et al.'s conceptual analysis | ✅ Supported: less than 5% of the literature uses non-Western frameworks |

Open Questions

  • Structural remedies: What institutional changes would reduce epistemic injustice in AI ethics? Possibilities include funding mandates for Global South research, journal policies requiring diverse authorship, and conference structures that prioritize underrepresented perspectives.
  • Translation vs. transformation: Should the goal be to translate existing AI ethics frameworks into different cultural contexts, or to build fundamentally different frameworks from different philosophical traditions?
  • Technology and access: Can digital tools (open access publishing, online conferences, multilingual platforms) reduce the structural barriers that Global South researchers face?
  • The "universal" question: Is there a universal core of AI ethics principles that transcends cultural differences, or are all ethical frameworks culturally situated? The answer shapes whether harmonization or pluralism should be the governance strategy.
What This Means for Your Research

For AI ethics researchers, Safir et al.'s data is a mirror: if your field's knowledge base is drawn from institutions representing roughly 30% of the world's population, your conclusions may not apply to the other 70%.

For policymakers, the implication is that AI governance frameworks developed in Brussels or Washington may face legitimacy challenges when applied in contexts for which they were not designed.

Explore related work through ORAA ResearchBrain.

References (4)

[1] Safir, A.H., McInerney, K., & Blackwell, A.F. (2025). Distributive Epistemic Injustice in AI Ethics: A Co-productionist Account of Global North-South Politics in Knowledge Production. Proc. ACM FAccT 2025.
[2] De Matas, J. (2025). Reprograming the Narrative Machine: Toward a Decolonial Ethics of Artificial Intelligence in Global Health. Global Health Action.
[3] Candra, P.H., Wadu, L.B., & Piter, R. (2025). Trends, Issues, and Ethical Challenges in the Use of Artificial Intelligence in Social Sciences: A Bibliometric Analysis. Proc. ICAIDES 2025, IEEE.
[4] Zhao, Y. & Ren, Z. (2025). The Alignment of Values: Embedding Human Dignity in Algorithmic Bias Governance for the AGI Era. International Journal of Digital Law and Governance.
