
Engineering Ethics in the AI Age: Distributing Moral Responsibility

When an autonomous system causes harm, who bears moral responsibility? Recent work in engineering ethics argues the question itself is misframed—responsibility in AI systems must be distributed across roles, not assigned to individuals.

By Sean K.S. Shin
This blog summarizes research trends based on published paper abstracts. Specific numbers or findings may contain inaccuracies. For scholarly rigor, always consult the original papers cited in each post.

When an autonomous vehicle strikes a pedestrian, a medical AI misdiagnoses a patient, or an algorithmic trading system crashes a market, the question of who bears moral responsibility becomes urgent and difficult. Traditional engineering ethics assumes that responsibility can be traced to identifiable individuals—the designer, the operator, the manufacturer. But AI systems complicate this assumption: the causal chain from design decision to harmful outcome passes through so many hands, and through a learning process that no individual fully controls, that individual attribution often fails.

The Research Landscape

Hwang (2024) provides a structured analysis of the moral accountability challenge. The paper examines autonomous systems across healthcare, finance, transportation, and military domains, identifying a common pattern: as autonomy increases, the causal distance between human decisions and system outcomes grows, making traditional accountability frameworks increasingly inadequate.

The paper identifies three specific gaps:

The knowledge gap. Designers cannot fully predict how a trained model will behave in novel situations. The model's behavior is an emergent property of its training data and architecture—not something any individual decided.

The control gap. Once deployed, autonomous systems make decisions that no individual authorized. The driver of a conventional car decides to brake; an autonomous vehicle's braking decision is made by software that no human directly controls in real time.

The temporal gap. The design decisions that determine a system's behavior (training data selection, architecture choices, deployment conditions) may be separated from the harmful outcome by months or years, making the causal connection difficult to establish.

Distributed Responsibility

Kumar, Suthar, and Rodriguez (2025) propose a conceptual framework for distributing ethical responsibility across roles in hybrid human-AI systems. Rather than asking "who is responsible?" (which implies a single answer), they ask "how is responsibility distributed?" (which allows for multiple, overlapping responsibilities).

Their framework identifies several responsibility dimensions:

  • Design responsibility: Falls on those who made architectural choices, selected training data, and defined system objectives.
  • Deployment responsibility: Falls on those who decided where, when, and how to deploy the system.
  • Oversight responsibility: Falls on those tasked with monitoring the system's behavior and intervening when necessary.
  • Governance responsibility: Falls on institutional leaders who established (or failed to establish) the policies governing the system.

The framework is evaluated through a diagnostic model that assesses how well responsibility is distributed in specific high-stakes domains. The key finding is that in most current AI deployments, design and deployment responsibilities are well-defined but oversight and governance responsibilities are vague or absent—creating accountability gaps precisely where they matter most.
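
To make this concrete, one could encode such a responsibility map as a simple data structure and check it for unassigned dimensions. The sketch below is illustrative, not from the paper; the role names and the `ResponsibilityMap` helper are assumptions invented for this post:

```python
from dataclasses import dataclass, field

# The four responsibility dimensions from Kumar et al.'s framework.
DIMENSIONS = ("design", "deployment", "oversight", "governance")

@dataclass
class ResponsibilityMap:
    """Maps each responsibility dimension to the roles that hold it."""
    assignments: dict[str, list[str]] = field(default_factory=dict)

    def assign(self, dimension: str, role: str) -> None:
        if dimension not in DIMENSIONS:
            raise ValueError(f"unknown dimension: {dimension}")
        self.assignments.setdefault(dimension, []).append(role)

    def gaps(self) -> list[str]:
        """Dimensions with no assigned role: the accountability gaps."""
        return [d for d in DIMENSIONS if not self.assignments.get(d)]

# Hypothetical deployment where only design and deployment are specified,
# mirroring the paper's finding about missing oversight and governance.
rmap = ResponsibilityMap()
rmap.assign("design", "ML engineering team")
rmap.assign("deployment", "product owner")
print(rmap.gaps())  # ['oversight', 'governance']
```

Run against a real deployment inventory, the same check would surface exactly the oversight and governance gaps the diagnostic model is designed to expose.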

Responsible AI by Design

Dignum (2025) argues that ethical governance must be embedded in the entire lifecycle of AI development, not applied as an afterthought. Her work on responsible AI and autonomous agents distinguishes between:

  • Ethics in AI: Ethical constraints built into the system (fairness algorithms, safety bounds).
  • Ethics of AI: Broader societal questions about whether and how AI should be used.
  • Ethics by AI: The possibility that AI agents might themselves make ethical judgments.

For engineering practice, the most actionable contribution is the emphasis on proactive rather than reactive responsibility. Rather than asking who is to blame after something goes wrong, the framework asks what structures must be in place to prevent harm—and who is responsible for establishing those structures.
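
The "ethics in AI" category lends itself to a small illustration: a hard safety bound wrapped around a learned component's output. The sketch below is a hypothetical example, not Dignum's method; the speed limit and function names are invented for illustration:

```python
# Illustrative "ethics in AI" constraint: a hard safety bound enforced around
# a learned component's output. Names and the bound are invented for this post.

MAX_SPEED_MPS = 13.9  # hypothetical hard limit (~50 km/h) for an urban vehicle

def bounded_speed(model_output: float) -> float:
    """Clamp the policy's proposed speed to the engineered safety bound.

    The learned component may propose anything; the bound is a design-time
    constraint that no runtime decision can override.
    """
    return max(0.0, min(model_output, MAX_SPEED_MPS))

print(bounded_speed(22.0))  # 13.9: the bound, not the model, decides
```

The design choice matters for responsibility attribution: the bound is something an identifiable designer decided, even when the model's behavior is emergent.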

Future Directions in Responsibility Attribution

Tariq and Ahmed (2025) survey the landscape of responsibility reasoning across legal, ethical, and technical frameworks for autonomous systems. Their contribution is to map the gaps: where existing frameworks provide clear guidance and where they do not.

The clearest guidance exists for systems with a well-defined human operator (driver assistance systems, clinical decision support tools). Here, the human retains ultimate responsibility for the system's outputs. The least clear guidance exists for fully autonomous systems operating in open environments (autonomous drones, algorithmic trading systems), where no human is "in the loop" at the time of the decision.

The authors argue that the field needs to develop new concepts—not just refine existing ones. "Responsibility" as traditionally understood may not be the right framework for fully autonomous systems. Alternative concepts such as "answerability" (requiring explanation rather than blame) and "traceability" (maintaining audit trails without assigning individual fault) may be more productive.
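
Traceability in particular is straightforward to prototype. Below is a minimal sketch of a decision audit trail, assuming a simple append-only JSON-lines log; the `log_decision` helper and all field names are illustrative, not drawn from the survey:

```python
import json
import time
import uuid

def log_decision(log_path: str, system_id: str, inputs: dict, action: str,
                 model_version: str) -> str:
    """Append one decision record to a JSON-lines audit trail."""
    record = {
        "record_id": str(uuid.uuid4()),
        "timestamp": time.time(),        # when the decision was made
        "system_id": system_id,          # which deployed system acted
        "model_version": model_version,  # ties the outcome back to specific
                                         # design artifacts (the temporal gap)
        "inputs": inputs,                # what the system observed
        "action": action,                # what it did
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["record_id"]

log_decision("decisions.jsonl", "av-fleet-17",
             {"pedestrian_detected": True, "speed_mps": 9.2},
             "brake", "v4.2.1")
```

Recording the model version alongside each decision is what makes the trail useful months later, when the temporal gap would otherwise sever the link between design choices and outcomes.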

Critical Analysis: Claims and Evidence

| Claim | Evidence | Verdict |
| --- | --- | --- |
| Traditional accountability frameworks are inadequate for AI | Hwang's analysis of knowledge/control/temporal gaps | ✅ Supported — the gaps are well-documented across domains |
| Responsibility should be distributed across roles, not assigned to individuals | Kumar et al.'s multi-dimensional framework | ✅ Supported — conceptually sound; practical implementation unclear |
| Oversight and governance responsibilities are systematically underspecified | Kumar et al.'s diagnostic model | ✅ Supported |
| New concepts beyond "responsibility" may be needed | Tariq & Ahmed's survey of framework gaps | ⚠️ Uncertain — promising direction but not yet developed |

Open Questions and Future Directions

  • Legal implementation: How do distributed responsibility frameworks translate into legal liability? Current tort law typically requires identifying a responsible party.
  • The automation level question: Responsibility distribution likely varies with the degree of autonomy. How should frameworks adapt as systems move from human-in-the-loop to human-on-the-loop to human-out-of-the-loop?
  • Organizational incentives: If responsibility is distributed, is there a risk that it becomes diffuse—that everyone is partially responsible and therefore no one acts? How do we prevent this?
  • Cross-cultural variation: Concepts of responsibility vary across cultures. A framework developed in a Western individualist tradition may not transfer to contexts where collective responsibility is the norm.
  • Engineering education: Current engineering curricula treat ethics as a separate course. How should ethics be integrated into technical training so that responsibility considerations become part of the design process itself?

What This Means for Your Research

For engineers, Kumar et al.'s framework offers a practical tool: map the responsibility distribution in your system's lifecycle and identify where gaps exist—particularly in oversight and governance.

For policymakers, the gap between well-specified design responsibility and poorly specified governance responsibility suggests where regulatory attention is needed.


References

[1] Hwang, J.Y. (2024). Ethics of artificial intelligence: Examining moral accountability in autonomous decision-making systems. World Journal of Advanced Research and Reviews, 23(3).
[2] Dignum, V. (2025). Responsible AI and Autonomous Agents: Governance, Ethics, and Sustainable Innovation. Proc. AAMAS 2025.
[3] Kumar, D., Suthar, N., & Rodriguez, R.V. (2025). Distributing ethical responsibility in hybrid human–AI systems: a conceptual framework and evaluation model. Journal of Information, Communication and Ethics in Society.
[4] Tariq, U., & Ahmed, I. (2025). Reasoning About Responsibility in Autonomous Systems: Navigating the Challenges and Charting Future Directions. Universal Theory Journal, 1(2).
