Critical Review · Philosophy & Ethics
Dignity as Foundation: A Dignitarian Approach to AI Ethics
Current AI ethics frameworks offer principles but lack philosophical grounding. Recent work proposes human dignity as the foundational value—from Western dignitarian philosophy to African Ubuntu—arguing that without a clear normative anchor, ethical AI governance becomes a collection of aspirations without force.
By Sean K.S. Shin
This blog summarizes research trends based on published paper abstracts. Specific numbers or findings may contain inaccuracies. For scholarly rigor, always consult the original papers cited in each post.
The proliferation of AI ethics guidelines over the past five years has been impressive in quantity and disappointing in coherence. Over 170 published frameworks enumerate principles—fairness, transparency, accountability, beneficence—but few offer a unified philosophical foundation for why these principles matter or how to adjudicate when they conflict. A hiring algorithm can be transparent (you know how it works) but unfair (it discriminates). An autonomous vehicle can be safe (it minimizes total casualties) but unjust (it systematically protects occupants over pedestrians). Without a foundational value to anchor the principles, ethics becomes a menu of aspirations.
Recent work proposes human dignity as that anchor. The argument is not new in philosophy—Kant placed dignity at the center of moral theory—but its application to AI governance is gaining fresh attention from scholars working across Western, Islamic, and African philosophical traditions.
The Research Landscape
Grounding Normative Principles
Hassen (2025) provides the most systematic articulation of the dignitarian approach to AI ethics. His paper confronts the "foundation deficit" directly: current AI ethics documents, he argues, present principles without justifying their normative force. Why should an AI system be fair? Because fairness is instrumentally valuable (it avoids lawsuits)? Because it is intrinsically valuable? The dignitarian answer is that fairness matters because every person has inherent worth that cannot be reduced to a data point, a probability, or a utility calculation.
The framework derives specific implications from the dignity principle:
Non-instrumentalization. AI systems must not treat persons merely as means to an end. An algorithm that profiles users to maximize advertising revenue treats persons as means. This does not necessarily prohibit profiling, but it requires that the person's interests are also served—not just the platform's.
Equal moral status. Every person counts equally in the moral calculus. An AI system that systematically produces worse outcomes for one demographic group violates equal moral status, regardless of whether the discrimination is intentional.
Autonomy preservation. AI systems that make decisions affecting individuals must preserve the individual's capacity for self-determination. This means, at minimum, the right to understand the decision, contest it, and opt out.
Dignity Against AI Domination
Cruz (2025), with 1 citation, extends the analysis to international law, examining how the principle of human dignity—traditionally the cornerstone of human rights law—applies to challenges posed by autonomous weapons, algorithmic discrimination, and mass surveillance.
The legal analysis reveals a gap: human rights instruments (the UDHR, ICCPR, ECHR) protect dignity against state action, but many AI-related threats to dignity come from private actors (technology companies) operating across jurisdictions. The existing legal framework was not designed for this configuration, and Cruz argues that new instruments are needed—not to replace human rights law but to extend its reach to private algorithmic governance.
Algorithmic Bias and Value Alignment
Zhao and Ren (2025), with 3 citations, bring the dignity framework into direct contact with the technical AI alignment literature. Their paper argues that the "alignment problem"—ensuring AI systems act in accordance with human values—should be grounded in dignity rather than in preference satisfaction or utility maximization.
The distinction matters practically. A utility-maximizing alignment approach might accept outcomes that benefit the majority at the expense of a minority (the classic utilitarian trade-off). A dignity-based alignment approach would constrain this: no outcome that violates any individual's dignity is acceptable, regardless of aggregate benefits.
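The contrast can be sketched as a hard constraint on otherwise utility-maximizing selection: an outcome is admissible only if no individual falls below a welfare floor, and aggregate utility is maximized among admissible outcomes only. This is an illustrative sketch, not Zhao and Ren's actual formalism; the function name and the scalar `dignity_floor` threshold are assumptions made for the example.

```python
# Illustrative sketch (not the authors' formalism): dignity modeled as a
# hard per-individual constraint on utility-maximizing outcome selection.

def select_outcome(outcomes, dignity_floor=0.0):
    """Pick the outcome with the highest total utility among those in
    which no individual's welfare falls below the dignity floor.

    `outcomes` is a list of per-individual welfare vectors. Returns None
    if every candidate violates the constraint for someone.
    """
    admissible = [o for o in outcomes if min(o) >= dignity_floor]
    if not admissible:
        return None  # no dignity-respecting option exists
    return max(admissible, key=sum)

# A pure utility maximizer would pick [9, 9, -5] (total 13); the
# constrained version rejects it because one person is left below the floor.
candidates = [[9, 9, -5], [4, 4, 4], [6, 3, 1]]
print(select_outcome(candidates))  # → [4, 4, 4]
```

The key behavioral difference: the constrained selector can return no option at all, which is exactly the dignitarian claim that some aggregate gains are simply off the table.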
The authors propose a governance framework that embeds dignity constraints at multiple levels:
- Data level: Training data must be collected and used in ways that respect the dignity of data subjects.
- Model level: Algorithmic decisions must be auditable for dignity violations (systematic discrimination, dehumanizing classifications).
- Deployment level: Systems must include mechanisms for individuals to understand, challenge, and override algorithmic decisions that affect them.
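A minimal version of the model-level audit, checking predictions for systematic disparity across demographic groups, might look like the following. The demographic-parity metric is a standard choice from the fairness literature, not something the paper specifies; the function name and data are illustrative.

```python
# Illustrative model-level audit: measure the gap in positive-prediction
# rates across demographic groups (demographic parity difference).
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Difference between the highest and lowest positive-prediction
    rates across groups. A large gap is one auditable signal of the
    'systematic discrimination' the framework flags at the model level.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

preds  = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(f"demographic parity gap: {demographic_parity_gap(preds, groups):.2f}")
# group a: 3/4 positive, group b: 1/4 positive → gap 0.50
```

Whether a statistical gap like this captures a dignity violation, as opposed to merely correlating with one, is precisely the operationalization question raised below.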
Ubuntu as Alternative Foundation
Akpah (2026) challenges the Western-centrism of dignity-based frameworks by proposing Ubuntu—the Southern African philosophy emphasizing community, interconnectedness, and mutual care—as an alternative foundation for AI governance.
The Ubuntu perspective reframes several issues:
Individual vs. relational autonomy. Western dignity frameworks emphasize individual autonomy. Ubuntu emphasizes relational autonomy—the idea that persons realize their dignity through relationships with others, not in isolation. For AI governance, this means that the impact of algorithmic decisions should be assessed not just on individuals but on communities and relationships.
Rights vs. responsibilities. Western frameworks are rights-based: individuals have rights that AI systems must respect. Ubuntu is more responsibility-oriented: developers, deployers, and users all have responsibilities to the community. The emphasis shifts from "what can I claim?" to "what do I owe?"
Competition vs. cooperation. The Western approach to AI governance often frames it as a competition (companies competing to be the most "ethical," nations competing to have the best regulation). Ubuntu frames governance as a cooperative endeavor in which all stakeholders share responsibility for outcomes.
Critical Analysis: Claims and Evidence
| Claim | Evidence | Verdict |
|---|---|---|
| Current AI ethics frameworks lack philosophical grounding | Hassen's survey of 170+ frameworks | ✅ Supported — principle lists without unified justification |
| Human dignity can serve as a unified foundation for AI ethics | Hassen's dignitarian derivation of specific principles | ⚠️ Uncertain — philosophically coherent but practically untested |
| International human rights law is inadequately equipped for private AI governance | Cruz's legal analysis | ✅ Supported — gap between state-focused law and private-sector AI |
| Dignity-based alignment constrains utilitarianism | Zhao & Ren's governance framework | ✅ Supported — clear theoretical distinction with practical implications |
| Ubuntu provides a viable non-Western foundation for AI governance | Akpah's philosophical analysis | ⚠️ Uncertain — compelling alternative but institutional mechanisms unclear |
Open Questions and Future Directions
- Operationalization: How do you translate "dignity" into code? Fairness constraints can be formalized (demographic parity, equalized odds). Can dignity constraints?
- Cross-cultural convergence: Do Western dignity, Ubuntu, Islamic karamah, and Confucian ren converge on similar AI governance principles? If so, this suggests a stronger foundation than any single tradition.
- Institutional design: Dignity-based governance requires institutions that can detect and remedy dignity violations. What do these institutions look like? Algorithmic ombudsmen? Digital rights courts?
- AGI implications: As AI systems become more capable, the dignity question intensifies. If an AI system can simulate conversation indistinguishably from a human, does it acquire dignity? Or does it merely simulate it?
- Power asymmetries: Dignity frameworks assume moral equality, but AI governance occurs in contexts of radical power asymmetry between technology companies and individuals. Can dignity survive this asymmetry?

What This Means for Your Research
For AI ethicists, the dignitarian approach offers a way to move from principle lists to principled reasoning—grounding specific ethical requirements in a unified normative foundation.
For policymakers, the Ubuntu perspective is a reminder that AI governance frameworks developed in Western contexts may not translate directly to other cultural settings. Inclusive governance requires inclusive philosophy.
Explore related work through ORAA ResearchBrain.
References (4)
[1] Hassen, M.Z. (2025). A dignitarian approach to AI ethics: grounding normative principles in human value. Artificial Intelligence, 2025, 434.
[2] Cruz, V. (2025). Human Dignity Against AI Domination: In Search of a Legal and Ethical Framework in the Age of Digitalization, Autonomous Warfare, and Algorithmic Discrimination. ICL Congress Proceedings.
[3] Zhao, Y. & Ren, Z. (2025). The Alignment of Values: Embedding Human Dignity in Algorithmic Bias Governance for the AGI Era. International Journal of Digital Law and Governance.
[4] Akpah, G.K. (2026). Ubuntu in Artificial Intelligence (AI) Governance: Towards an Inclusive and Democratic Technological Future.