Trend Analysis: Philosophy & Ethics

Philosophy of Information and Epistemic Justice in the AI Era


By Sean K.S. Shin
This blog summarizes research trends based on published paper abstracts. Specific numbers or findings may contain inaccuracies. For scholarly rigor, always consult the original papers cited in each post.

Why It Matters

Philosophy of information examines the nature, dynamics, and utilization of information as a fundamental feature of reality. When combined with the concept of epistemic justice (the capacity of individuals and groups to participate as full knowers in knowledge-producing practices), the field reveals profound structural inequalities in how AI systems generate, validate, and distribute knowledge.

Kay, Kasirzadeh, and Mohamed (2024) demonstrate that generative AI can systematically undermine collective knowledge and the processes we rely on to assess and trust information. This is not merely a technical bias problem; it is an epistemic injustice problem. When AI systems trained on predominantly English-language, Western-centric corpora become the primary interfaces through which billions access information, they silently reshape what counts as knowledge, whose experiences are represented, and which questions are considered worth asking.

The philosophical urgency intensified in 2024-2025 as Safir et al. (2025) published a comprehensive analysis of over 5,700 AI ethics publications, revealing stark North-South disparities in who produces knowledge about AI ethics itself. The field that is supposed to address power asymmetries in AI reproduces those very asymmetries in its own knowledge production practices. This recursive injustice demands philosophical attention.

The Debate

Miranda Fricker's Framework Extended to AI

The foundational work on epistemic injustice distinguishes between testimonial injustice (when a speaker's credibility is unfairly diminished) and hermeneutical injustice (when gaps in collective interpretive resources prevent people from making sense of their experiences). Mollema (2025) extends this taxonomy to AI contexts, identifying a new category: generative hermeneutical erasure, where AI systems actively eliminate or distort the interpretive frameworks of marginalized groups by generating dominant-culture explanations as the default.

Situated Knowledge and the Bias-Proof Illusion

Kraft and Soulier (2024) challenge the assumption that knowledge-enhanced language models are more objective than their standard counterparts. Drawing on feminist epistemology and the concept of situated knowledge, they show that retrieval-augmented generation systems simply relocate bias from model weights to knowledge bases. The sources deemed authoritative enough to include in a knowledge base reflect the same power structures that shaped the training data. Objectivity is not achieved by adding a knowledge layer; it is merely disguised.

The Pipeline of Epistemic Harm

Kay, Kasirzadeh, and Mohamed (2024) map epistemic injustice across the entire generative AI pipeline: data collection (whose experiences are recorded), curation (whose perspectives are filtered out), training (which patterns are amplified), fine-tuning (whose preferences shape outputs), and deployment (who has access and on what terms). Each stage introduces distinct forms of epistemic harm, requiring stage-matched governance rather than one-size-fits-all solutions.

Global South and Knowledge Production Asymmetries

Safir et al. (2025) find that AI ethics research is overwhelmingly produced in the Global North, yet its normative frameworks are applied globally. This creates a form of epistemic colonialism in which the ethical standards governing AI in Africa, Southeast Asia, and Latin America are set by researchers who may not understand local epistemic traditions, values, or power structures.

Taxonomy of AI Epistemic Injustice

Type | Definition | AI Manifestation | Example
Testimonial | Credibility deficit due to identity | Training data underrepresents marginalized voices | Medical AI dismisses symptom descriptions from minority patients
Hermeneutical | Gaps in shared interpretive resources | AI lacks concepts for non-Western experiences | Sentiment analysis fails on culturally specific expressions
Generative erasure | AI actively overwrites marginalized frameworks | LLMs replace indigenous knowledge with Western equivalents | Chatbot "corrects" traditional medicine inquiries to biomedical framing
Distributive | Unequal participation in knowledge production | Global South excluded from AI ethics research | Governance frameworks designed without affected community input

What To Watch

The emerging field of "epistemic AI auditing" aims to evaluate AI systems not just for statistical bias but for deeper epistemic justice concerns. Watch for the development of evaluation benchmarks that measure hermeneutical adequacy across cultures, participatory design methodologies that include Global South stakeholders as co-designers rather than subjects, and philosophical frameworks that move beyond Western analytic epistemology to incorporate diverse knowledge traditions.

References

Kay, J., Kasirzadeh, A., & Mohamed, S. (2024). Epistemic Injustice in Generative AI. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society.
Kraft, A., & Soulier, E. (2024). Knowledge-Enhanced Language Models Are Not Bias-Proof: Situated Knowledge and Epistemic Injustice in AI. The 2024 ACM Conference on Fairness, Accountability, and Transparency, 1433-1445.
Mollema, W. J. T. (2025). A taxonomy of epistemic injustice in the context of AI and the case for generative hermeneutical erasure. AI and Ethics, 5(5), 5535-5555.
Safir, A. H., McInerney, K., Blackwell, A. F., & Debnath, R. (2025). Distributive Epistemic Injustice in AI Ethics: A Co-productionist Account of Global North-South Politics in Knowledge Production. Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency, 2009-2024.
