Law & Policy

When the Car Decides: Autonomous Vehicle Liability and the Crisis of Tort Law

When an autonomous vehicle causes an accident, who is liable: the manufacturer, the software developer, the owner, or the AI itself? Five papers reveal that existing tort law cannot answer this question, and that the emerging regulatory frameworks (Germany's Autonomous Driving Act, the EU AI Act) are only partial solutions.

By Sean K.S. Shin
This blog summarizes research trends based on published paper abstracts. Specific numbers or findings may contain inaccuracies. For scholarly rigor, always consult the original papers cited in each post.

Tort law is built on a simple principle: the person who caused the harm should compensate the victim. For automobile accidents, this has meant the driver, the human being who made the decision that led to the collision. Negligence doctrine, product liability, and insurance systems all assume that a human decision-maker sits at the causal chain's origin.

Autonomous vehicles displace this assumption. When a Level 4 vehicle (one that drives itself without human supervision in defined conditions) causes an accident, there is no human driver to hold negligent. The vehicle's behavior was determined by algorithms developed by software engineers, trained on data collected by sensor manufacturers, executing decisions within parameters set by the vehicle manufacturer, operating under conditions approved by regulators. The causal chain is distributed across multiple actors, none of whom individually "decided" to cause the accident.

This is not a hypothetical problem. Germany's 2021 Autonomous Driving Act permits Level 4 vehicles on public roads. Waymo operates driverless robotaxis in multiple US cities. The EU's 2024 AI Act classifies high-risk AI systems, including autonomous driving, under enhanced regulatory requirements. The accidents will come. The question is whether the law is ready.

The Hand Formula Problem

Tian (2025) examines one of tort law's foundational tools, the Hand Formula, and its applicability to autonomous vehicle accidents. The Hand Formula, developed by Judge Learned Hand in United States v. Carroll Towing Co. (1947), determines negligence by comparing the cost of prevention (B) with the probability of harm (P) multiplied by the magnitude of harm (L). If B < PL, the defendant should have taken preventive action and is negligent for failing to do so.
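The Hand Formula reduces to a single comparison, which can be sketched in a few lines. The dollar figures below are purely illustrative assumptions, not taken from the cited paper:

```python
def is_negligent(burden: float, probability: float, loss: float) -> bool:
    """Hand Formula: negligent if the cost of prevention (B) is less
    than the expected harm (P * L)."""
    return burden < probability * loss

# Hypothetical numbers: a $1,000 precaution (B) against a 0.1% chance (P)
# of a $2,000,000 accident (L). Expected harm is $2,000 > $1,000, so the
# formula says the precaution should have been taken.
print(is_negligent(1_000, 0.001, 2_000_000))  # True
```

The simplicity of the comparison is exactly what Tian's critique targets: for an opaque AI system, none of the three inputs may be knowable after the fact.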

The paper argues that in human-machine collaborative driving, AI's opaque decision-making processes make the traditional Hand Formula calculation problematic. The "black box" nature of AI decisions and their complexity make accident cost-benefit analysis, which the Hand Formula requires, difficult to apply. When the AI made a decision that led to an accident, we may not be able to determine what preventive action was available, what its cost would have been, or even what the AI "decided" in any meaningful sense.

Ethics to Law: The Philosophical Challenge

Huang (2025) addresses the complex issue of liability allocation from both legal and ethical perspectives. With the rapid development of autonomous driving technology, traditional legal frameworks struggle to cope with the new challenges posed by driverless vehicles.

The paper identifies several liability allocation models:

  • Driver/operator liability: The traditional model, which assumes a human being is monitoring the vehicle and can intervene. At Level 4 and above, this model becomes inapplicable because there is no driver.
  • Manufacturer/developer liability: The vehicle is a product, and the manufacturer is liable for defective products under strict liability doctrine. But autonomous driving software is not a traditional "product": it is an adaptive system that may behave differently from its tested version due to learning, updates, and environmental interactions.
  • Shared liability: Distributing liability among manufacturer, software developer, data provider, and owner based on each party's contribution to the accident scenario. This model is flexible but administratively complex.
  • No-fault insurance: Abandoning the liability question entirely and compensating victims through mandatory insurance pools funded by all AV stakeholders. This model prioritizes victim compensation over fault attribution.

Germany's Regulatory Response

Petrauskaitė (2025) examines manufacturer liability for AV accidents in Germany through the Volkswagen case. The study is positioned at the intersection of product liability, contractual responsibility, and AI ethics, examining how Germany's legal framework handles accidents involving Volkswagen's "Travel Assist" AI-assisted driving system.

Kowalski (2025) complements this with a broader analysis of Germany's 2021 Autonomous Driving Act (Autonomes-Fahren-Gesetz), which is among the world's first comprehensive AV liability frameworks. The Act establishes a "technical supervisor" conceptโ€”a remote human operator who monitors the AV and can interveneโ€”and places primary liability on the vehicle manufacturer when the AV operates within its approved conditions.

The German model represents one resolution of the liability question: treating the AV manufacturer as analogous to a common carrier (like a train operator or airline), with strict liability for accidents during autonomous operation. This model provides clear rules for victim compensation but may create risk-aversion among manufacturers, potentially slowing deployment.

Identifying Liability Subjects in AI Systems

Hu (2025) examines the identification of tort liability subjects in AI systems, using autonomous vehicles as the primary case. As a significant achievement in AI development, autonomous vehicles promise substantial convenience and societal transformation. However, their integration introduces legal risks regarding the determination and allocation of liability.

The analysis identifies a conceptual problem: tort law requires identifying a "subject", a legally recognized entity capable of bearing rights and obligations. The AI system is not a legal subject; it is a tool. But it is a tool whose behavior is not fully determined by any single human actor. The liability gap exists in the space between the AI's autonomous behavior and the legal framework's requirement for human agency.

Several proposed solutions emerge from the literature:

  • Electronic personhood: Granting AI systems limited legal personality, analogous to corporate personhood. This has been proposed by the European Parliament but remains controversial.
  • Vicarious liability: Holding the manufacturer vicariously liable for the AI's "actions," analogous to employer liability for employee conduct. This stretches the analogy but provides a familiar framework.
  • Mandatory insurance: Requiring all AV stakeholders to contribute to an insurance pool that compensates victims regardless of fault. This is administratively efficient but does not address the deterrence function of tort law.

Claims and Evidence

| Claim | Evidence | Verdict |
| --- | --- | --- |
| The Hand Formula can determine AV negligence | Tian (2025): AI opacity makes cost-benefit analysis infeasible | ❌ Refuted |
| Traditional tort law can handle AV accidents | All papers: existing frameworks are inadequate for distributed AI causation | ❌ Refuted |
| Germany's AV Act provides a viable liability model | Kowalski (2025), Petrauskaitė (2025): establishes clear rules but may chill innovation | ⚠️ Uncertain |
| Strict manufacturer liability is appropriate for AV accidents | Theoretical support in multiple papers; may overburden manufacturers who cannot control all variables | ⚠️ Uncertain |
| Victims can be adequately compensated under current law | Hu (2025): liability gaps may leave victims without clear recourse | ❌ Refuted (without reform) |

Open Questions

  • Should AV software updates change the liability calculus? If an AV accident was preventable with a software update that the manufacturer had developed but not yet deployed, is the manufacturer negligent for the delay?
  • How should courts handle the "trolley problem" in practice? If an AV's algorithm was designed to prioritize passenger safety over pedestrian safety (or vice versa), does this design choice create liability for the programmer who made it?
  • Can blockchain-based "black box" records provide the transparency that tort law requires? Immutable records of AV decision-making could enable after-the-fact analysis of what the AI "decided" and why, but privacy and proprietary concerns may limit access.
  • Will different jurisdictions converge on a liability model? Currently, Germany, the US, China, and Japan take different approaches. Will market pressure and international trade create convergence?
Implications

The autonomous vehicle liability question is a test case for a broader legal challenge: how do legal systems designed for human agency adapt to AI agency? The solutions developed for AV liability (whether manufacturer liability, no-fault insurance, or electronic personhood) will provide templates for liability frameworks in healthcare AI, judicial AI, financial AI, and other domains where AI decisions have consequential impacts on human lives.

References (5)

[1] Tian, X. (2025). How Should Autonomous Vehicles Allocate Accident Liability? Rethinking the Applicability of the Hand Formula. Asian Journal of Law and Economics.
[2] Huang, H. (2025). Liability Allocation in Autonomous Vehicles: From Ethics to Law.
[3] Petrauskaitė, G. (2025). Manufacturer Liability for Autonomous Vehicle Accidents in Germany: Legal and Ethical Dimensions of the Volkswagen Case. ICL 2025 Congress.
[4] Kowalski, M. (2025). Legal Liability for Autonomous Vehicle Accidents in Germany. ISL 2025 Symposium.
[5] Hu, X. (2025). Identification of Tort Liability Subjects in Artificial Intelligence: The Case of Autonomous Vehicles. Modern Economics & Management Forum, 6(4), 4242.
