Algorithmic Bias and Posthuman Ethics: When Data Shapes Identity
Algorithms do not just process data—they construct categories, constrain choices, and reshape what it means to be a person in a data-saturated world. Recent philosophical work explores how algorithmic bias intersects with identity, autonomy, and governance.
By Sean K.S. Shin
This blog summarizes research trends based on published paper abstracts. Specific numbers or findings may contain inaccuracies. For scholarly rigor, always consult the original papers cited in each post.
When a hiring algorithm filters résumés, a credit scoring system assigns risk levels, or a social media platform curates a news feed, the algorithm is not merely processing data—it is making decisions that affect real human lives. The technical response to the problems this creates is debiasing: adjust the training data, add fairness constraints, audit the outputs. But a growing body of philosophical work argues that the technical response, while necessary, is insufficient. The deeper questions are about what it means for human identity and autonomy when algorithmic systems increasingly determine the categories people are sorted into.
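To make "audit the outputs" concrete, here is a minimal sketch of one standard audit: computing the disparate impact ratio between two groups' positive-outcome rates and checking it against the conventional four-fifths threshold. The data, group labels, and threshold are invented for illustration; none of this comes from the papers discussed below.

```python
# Minimal output-audit sketch: disparate impact ratio on hypothetical
# hiring decisions. The data, group labels, and the 0.8 threshold
# (the "four-fifths rule") are illustrative assumptions only.

def disparate_impact_ratio(decisions: list[tuple[str, bool]],
                           protected: str, reference: str) -> float:
    """Ratio of positive-outcome rates: protected group / reference group."""
    def rate(group: str) -> float:
        outcomes = [hired for g, hired in decisions if g == group]
        return sum(outcomes) / len(outcomes)
    return rate(protected) / rate(reference)

# (group, hired?) pairs produced by some hypothetical screening algorithm.
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]

ratio = disparate_impact_ratio(decisions, protected="B", reference="A")
print(f"disparate impact ratio: {ratio:.2f}")   # 0.33 for this toy data
if ratio < 0.8:  # conventional four-fifths red flag
    print("audit flag: selection-rate gap exceeds the four-fifths rule")
```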
The Research Landscape: From Bias to Identity
Alyousef and Omari (2025), with 2 citations, provide an interdisciplinary examination of algorithmic bias that moves beyond the technical framing. Their paper brings together data science, philosophy, and cultural studies to argue that algorithmic bias is not simply a technical error to be corrected but a philosophical problem about the relationship between data and identity.
Their core argument: when algorithms classify people (as creditworthy or not, employable or not, healthy or at-risk), these classifications do not merely reflect pre-existing categories—they create new ones. A credit score is not a measurement of some objective property called "creditworthiness"; it is a construction that brings a social category into existence. Once the category exists, it constrains real possibilities: people denied credit cannot build credit history, reinforcing their classification.
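The self-reinforcing dynamic is easy to see in a toy simulation. The sketch below is my own illustration, not the authors' model: an applicant's score rises only through repayment history, but history can accumulate only after approval, so whoever starts below the threshold stays there.

```python
# Toy feedback-loop simulation (my construction, not from Alyousef & Omari):
# credit history accumulates only after approval, so an applicant who starts
# below the threshold is locked into the "not creditworthy" category.

THRESHOLD = 600
HISTORY_BONUS = 20  # score gained per round of repayment history

def simulate(start_score: int, rounds: int = 5) -> None:
    score = start_score
    for t in range(rounds):
        approved = score >= THRESHOLD
        # Repayment history exists only if credit was granted this round.
        score += HISTORY_BONUS if approved else 0
        print(f"round {t}: score={score}, approved={approved}")

simulate(start_score=610)  # climbs: the classification reinforces itself
simulate(start_score=590)  # frozen: denial blocks the history that would lift it
```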
This constructivist insight connects algorithmic bias to broader philosophical traditions. In posthuman philosophy (drawing on Braidotti, Haraway, and Barad), the boundaries between human and technological are understood as fluid and mutually constitutive. Alyousef and Omari extend this framework to argue that algorithmic classification represents a new form of this human-technology entanglement—one where the technology does not augment human capabilities but defines human categories.
Human-Centered AI: From Principle to Practice
Kriuk (2026) addresses the gap between AI ethics principles and their practical implementation through the lens of Human-Computer Interaction (HCI). The paper synthesizes contemporary research to propose an integrated framework where ethical principles are operationalized through design mechanisms.
The key insight is that ethical principles (fairness, transparency, accountability) are meaningless unless they are translated into specific design features: interfaces that explain decisions, feedback mechanisms that allow users to contest classifications, and audit systems that detect disparate impact. Kriuk argues that HCI provides the necessary bridge between abstract ethics and concrete design.
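As a hedged illustration of what that bridge might look like in code, the sketch below models a decision record that carries a plain-language explanation and an appeal hook, so contestability is a property of the data structure rather than an afterthought. The field names and mechanism are my assumptions, not a specification from Kriuk's paper.

```python
# Hypothetical sketch of an explainable, contestable decision record.
# Field names and the appeal mechanism are illustrative assumptions,
# not a specification from Kriuk (2026).
from dataclasses import dataclass, field

@dataclass
class Decision:
    subject_id: str
    outcome: str                      # e.g. "denied"
    explanation: str                  # plain-language reason shown to the user
    appeals: list[str] = field(default_factory=list)

    def contest(self, reason: str) -> None:
        """In-use ethics: let the affected person dispute the classification."""
        self.appeals.append(reason)

d = Decision("user-42", "denied",
             explanation="Reported income is below the threshold for this limit.")
d.contest("My income information is out of date.")
print(d.appeals)  # the appeal is now part of the decision's record
```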
The framework distinguishes between:
- Pre-deployment ethics: Design choices that embed ethical constraints before the system is used (fairness constraints in training, privacy-by-design).
- In-use ethics: Interface features that maintain ethical standards during operation (explanations, opt-out mechanisms, appeal processes).
- Post-deployment ethics: Monitoring and audit systems that detect ethical violations after deployment (bias drift detection, impact assessment).
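As one possible shape for the post-deployment tier, here is a minimal bias drift monitor: it tracks approval rates per group over a rolling window and raises a flag when the gap exceeds a tolerance. The window size, group labels, and tolerance are illustrative assumptions, not part of Kriuk's framework.

```python
# Post-deployment sketch: track per-group approval rates over a rolling
# window and flag when the gap drifts past a tolerance. The window size,
# group labels, and tolerance are illustrative assumptions.
from collections import deque

class DriftMonitor:
    def __init__(self, window: int = 500, tolerance: float = 0.10):
        self.log: deque[tuple[str, bool]] = deque(maxlen=window)
        self.tolerance = tolerance

    def record(self, group: str, approved: bool) -> None:
        self.log.append((group, approved))

    def gap(self) -> float:
        """Absolute difference in approval rates between groups 'A' and 'B'."""
        def rate(g: str) -> float:
            xs = [ok for grp, ok in self.log if grp == g]
            return sum(xs) / len(xs) if xs else 0.0
        return abs(rate("A") - rate("B"))

    def drifted(self) -> bool:
        return self.gap() > self.tolerance

monitor = DriftMonitor(window=4)
for group, approved in [("A", True), ("A", True), ("B", False), ("B", True)]:
    monitor.record(group, approved)
print(monitor.gap(), monitor.drifted())  # 0.5 True for this toy stream
```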
Ethics vs. Regulation
Obafemi-Ajayi, Bright, and Wong (2025) focus specifically on biomedical AI, where the stakes of algorithmic decision-making include patient safety and health equity. Their analysis compares ethical frameworks (which are typically voluntary and principle-based) with regulatory frameworks (which are mandatory and rule-based) and argues that neither alone is sufficient.
Ethical frameworks provide flexibility and can adapt to novel situations, but they lack enforcement mechanisms. Regulatory frameworks provide enforcement but are slow to adapt and may not anticipate the specific challenges of AI in biomedicine. The authors propose a convergence model: regulation should set minimum standards, while ethics should guide behavior beyond the regulatory floor.
The paper is particularly relevant for international contexts: the EU AI Act, the US FDA's framework for AI-enabled medical devices, and China's algorithmic governance regulations take different approaches. Obafemi-Ajayi et al. argue that harmonization around shared safety principles, with jurisdiction-specific implementation details, is the pragmatic path forward.
Digital Citizenship and Youth
Shouli, Barthwal, and Campbell (2025), with 5 citations, shift the focus to a particularly vulnerable population: young people navigating AI-driven digital platforms. Their analysis documents how AI-driven personalization in children's platforms operates without clear ethical boundaries, creating privacy risks that young users cannot meaningfully consent to.
The paper frames the issue in terms of digital citizenship: if citizenship implies rights (including privacy and autonomy), then the current design of AI-driven platforms effectively denies these rights to young users. Children's data is collected, profiled, and used for personalization without genuine informed consent—not because designers are malicious, but because the business models of digital platforms depend on data collection, and existing privacy governance frameworks do not adequately address children's specific needs.
Critical Analysis: Claims and Evidence
| Claim | Evidence | Verdict |
|---|---|---|
| Algorithmic classification constructs social categories, not just reflects them | Alyousef & Omari's philosophical analysis with case studies | ✅ Supported — consistent with sociology of classification |
| HCI provides the bridge between AI ethics and implementation | Kriuk's integrated framework | ⚠️ Uncertain — conceptually sound, empirically untested |
| Neither ethics nor regulation alone is sufficient for trustworthy AI | Obafemi-Ajayi et al.'s comparative analysis | ✅ Supported — both have complementary strengths and weaknesses |
| Young digital citizens lack adequate privacy governance | Shouli et al.'s policy analysis | ✅ Supported — regulatory gaps are well-documented |
Open Questions and Future Directions
- Non-Western philosophical perspectives: The current literature draws primarily on Western philosophical traditions. How do Confucian, Ubuntu, Buddhist, or Islamic ethical frameworks approach algorithmic identity?
- The formalizability of ethics: Kriuk's framework assumes that ethical principles can be translated into design features. But can concepts like dignity, autonomy, or meaningful consent be formalized without distortion?
- Collective vs. individual harm: Algorithmic bias often affects groups rather than individuals. How do we conceptualize collective harm in ethical and legal frameworks designed for individual rights?
- The speed mismatch: AI systems evolve rapidly; philosophical analysis and regulatory processes move slowly. How do we build governance systems that can keep pace with technological change?
- Empirical grounding: Much of this philosophical literature makes claims about the human impact of algorithmic systems that remain empirically undertested. What psychological and social mechanisms mediate the relationship between algorithmic classification and identity?
What This Means for Your Research
For AI ethicists, the move from bias correction to identity construction represents an expansion of the field's scope. The technical problem of debiasing is necessary but philosophically shallow; the deeper questions require engagement with traditions of thought about identity, autonomy, and power.
For designers and engineers, Kriuk's framework offers a practical pathway: translate ethical principles into pre-deployment, in-use, and post-deployment design features, rather than treating ethics as a standalone review process.
Explore related work through ORAA ResearchBrain.
References (4)
[1] Alyousef, A. & Omari, A. (2025). Ethically Aligned Artificial Intelligence: Investigating Algorithmic Bias, Human Identity, and Posthuman Ethics through a Data-Driven Philosophical Lens. Journal of Philosophy, 5(5).
[2] Kriuk, B. (2026). Integrating AI Ethics and Human–Computer Interaction: Toward Responsible and Human-Centered Intelligent Systems. Journal of Interdisciplinary Social, Digital, Creative & Engineering Studies.
[3] Obafemi-Ajayi, T., Bright, T. J., Wong, E. F., Wunsch, D., Peckham, J., & Moore, J. H. (2025). Ethics vs. Regulation: Converging Frameworks for Trustworthy Human-Centered AI in Biomedical Research. 2025 International Joint Conference on Neural Networks (IJCNN), 1-7. IEEE.
[4] Shouli, A., Barthwal, A., & Campbell, M. (2025). Ethical AI for Young Digital Citizens: A Call to Action on Privacy Governance. Security and Privacy, 8, e70202.