When self-driving cars crash, who's responsible? Courts and insurers need to know what's inside the 'black box'

  • Written by Aaron J. Snoswell, Post-doctoral Research Fellow, Computational Law & AI Accountability, Queensland University of Technology

The first serious accident involving a self-driving car in Australia occurred in March this year. A pedestrian suffered life-threatening injuries when hit by a Tesla Model 3[1] in “autopilot” mode.

In the US, the highway safety regulator is investigating a series of accidents[2] where Teslas on autopilot crashed into first-responder vehicles with flashing lights during traffic stops.

A Tesla Model 3 collides with a stationary emergency-responder vehicle in the US. NBC / YouTube[3]

The decision-making processes of “self-driving” cars are often opaque and unpredictable[4] (even to their manufacturers), so it can be hard to determine who should be held accountable for incidents such as these. However, the growing field of “explainable AI” may help provide some answers.

Read more: Who (or what) is behind the wheel? The regulatory challenges of driverless cars[5]

Who is responsible when self-driving cars crash?

While self-driving cars are new, they are still machines made and sold by manufacturers. When they cause harm, we should ask whether the manufacturer (or software developer) has met their safety responsibilities.

Modern negligence law comes from the famous case of Donoghue v Stevenson[6], where a woman discovered a decomposing snail in her bottle of ginger beer. The manufacturer was found negligent, not because he was expected to directly predict or control the behaviour of snails, but because his bottling process was unsafe.

By this logic, manufacturers and developers of AI-based systems like self-driving cars may not be able to foresee and control everything the “autonomous” system does, but they can take measures to reduce risks. If their risk management, testing, audits and monitoring practices are not good enough, they should be held accountable.

How much risk management is enough?

The difficult question will be: “How much care and how much risk management is enough?” In complex software, it is impossible to test for every bug[7] in advance. How will developers and manufacturers know when to stop?

Fortunately, courts, regulators and technical standards bodies have experience in setting standards of care and responsibility for risky but useful activities.

Standards could be very exacting, like the European Union’s draft AI regulation[8], which requires risks to be reduced “as far as possible” without regard to cost. Or they may be more like Australian negligence law, which permits less stringent management for less likely or less severe risks, or where risk management would reduce the overall benefit of the risky activity.

Legal cases will be complicated by AI opacity

Once we have a clear standard for risks, we need a way to enforce it. One approach could be to give a regulator powers to impose penalties (as the ACCC does in competition cases, for example).

Individuals harmed by AI systems must also be able to sue. In cases involving self-driving cars, lawsuits against manufacturers will be particularly important.

However, for such lawsuits to be effective, courts will need to understand in detail the processes and technical parameters of the AI systems.

Manufacturers often prefer not to reveal such details for commercial reasons. But courts already have procedures to balance commercial interests with an appropriate amount of disclosure to facilitate litigation.

A greater challenge may arise when AI systems themselves are opaque “black boxes[9]”. For example, Tesla’s autopilot functionality relies on “deep neural networks[10]”, a popular type of AI system in which even the developers can never be entirely sure how or why it arrives at a given outcome.

‘Explainable AI’ to the rescue?

Opening the black box of modern AI systems is the focus of a new[11] wave[12] of computer science and humanities scholars[13]: the so-called “explainable AI” movement.

The goal is to help developers and end users understand how AI systems make decisions, either by changing how the systems are built or by generating explanations after the fact.

In a classic example[14], an AI system mistakenly classifies a picture of a husky as a wolf. An “explainable AI” method reveals the system focused on snow in the background of the image, rather than the animal in the foreground.

Explainable AI in action: an AI system incorrectly classifies the husky on the left as a ‘wolf’; at right, we see this is because the system was focusing on the snow in the background of the image, rather than the animal. Ribeiro, Singh & Guestrin[15]
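The method behind this example is LIME (“local interpretable model-agnostic explanations”), from the Ribeiro, Singh & Guestrin paper cited above. As a rough sketch of how such post-hoc explanations are produced, the open-source lime Python package can be asked to highlight which regions of an image drove a classifier’s prediction. Here classify_fn is a dummy stand-in for the model being audited, and the input image is a placeholder:

import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def classify_fn(images: np.ndarray) -> np.ndarray:
    """Stand-in classifier: a batch of HxWx3 images -> two-class probabilities."""
    p = np.random.rand(len(images), 1)
    return np.hstack([p, 1.0 - p])

image = np.random.rand(64, 64, 3)  # placeholder for e.g. the husky photo

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,
    classify_fn,
    top_labels=1,     # explain only the most probable class
    num_samples=200,  # perturbed copies of the image used to fit the explanation
)

# Recover the image regions that most supported the prediction -- in the
# husky/wolf case, this is where the snowy background would light up.
label = explanation.top_labels[0]
img, mask = explanation.get_image_and_mask(label, positive_only=True, num_features=5)
highlighted = mark_boundaries(img, mask)  # overlay region boundaries for display

Because LIME only needs to query a model’s predictions, this kind of audit works even when the model’s internals are off limits, which is exactly the situation a court-appointed expert may face.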

How this might be used in a lawsuit will depend on various factors, including the specific AI technology and the harm caused. A key concern will be how much access the injured party is given to the AI system.

The Trivago case

Our new research[16] analysing an important recent Australian court case provides an encouraging glimpse of what this could look like.

In April 2022, the Federal Court penalised global hotel booking company Trivago $44.7 million for misleading customers about hotel room rates on its website and in TV advertising, in a case brought by competition watchdog the ACCC[17]. A critical question was how Trivago’s complex ranking algorithm chose the top-ranked offer for hotel rooms.

The Federal Court set up rules for evidence discovery with safeguards to protect Trivago’s intellectual property, and both the ACCC and Trivago called expert witnesses to provide evidence explaining how Trivago’s AI system worked.

Even without full access to Trivago’s system, the ACCC’s expert witness was able to produce compelling evidence that the system’s behaviour was not consistent with Trivago’s claim of giving customers the “best price”.

This shows how technical experts and lawyers together can overcome AI opacity in court cases. However, the process requires close collaboration and deep technical expertise, and will likely be expensive.

Regulators can take steps now to streamline things in the future, such as requiring AI companies to adequately document their systems.
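What counts as “adequate documentation” is still an open question. Purely as an illustration (the field names below are assumptions, not anything a regulator has mandated), such documentation could be kept in a machine-readable form that records the facts a court or expert witness would later need:

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SystemDocumentation:
    """Hypothetical sketch, loosely in the spirit of a 'model card'."""
    system_name: str
    version: str
    intended_use: str                     # what the system is designed to do
    known_limitations: List[str]          # conditions under which it may fail
    training_data_summary: str            # provenance and scope of training data
    evaluation_results: Dict[str, float]  # metric name -> measured value
    risk_controls: List[str] = field(default_factory=list)  # mitigations applied

# Example entry for a hypothetical driver-assistance feature.
doc = SystemDocumentation(
    system_name="lane-keeping-assist",
    version="2.4.1",
    intended_use="Keep the vehicle centred in a marked highway lane",
    known_limitations=["unmarked roads", "heavy rain", "temporary roadworks"],
    training_data_summary="Highway driving footage, predominantly daytime",
    evaluation_results={"lane_departures_per_1000km": 0.3},
)

Records like these would give courts and regulators a fixed, dated account of what the manufacturer knew and tested, rather than leaving it to be reconstructed after a crash.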

The road ahead

Vehicles with various degrees of automation[18] are becoming more common, and fully autonomous taxis and buses are being tested both in Australia[19] and overseas[20].

Keeping our roads as safe as possible will require close collaboration between AI and legal experts; regulators, manufacturers, insurers and users will all have roles to play.

Read more: 'Self-driving' cars are still a long way off. Here are three reasons why[21]

References

  1. ^ hit by a Tesla Model 3 (www.9news.com.au)
  2. ^ series of accidents (www.skynettoday.com)
  3. ^ NBC / YouTube (www.youtube.com)
  4. ^ opaque and unpredictable (journals.sagepub.com)
  5. ^ Who (or what) is behind the wheel? The regulatory challenges of driverless cars (theconversation.com)
  6. ^ Donoghue v Stevenson (legalheritage.sclqld.org.au)
  7. ^ impossible to test for every bug (jolt.law.harvard.edu)
  8. ^ draft AI regulation (op.europa.eu)
  9. ^ black boxes (doi.org)
  10. ^ deep neural networks (www.louisbouchard.ai)
  11. ^ new (facctconference.org)
  12. ^ wave (eaamo.org)
  13. ^ scholars (www.aies-conference.com)
  14. ^ a classic example (dl.acm.org)
  15. ^ Ribeiro, Singh & Guestrin (dx.doi.org)
  16. ^ new research (aaronsnoswell.github.io)
  17. ^ competition watchdog the ACCC (www.accc.gov.au)
  18. ^ various degrees of automation (theconversation.com)
  19. ^ in Australia (news.redland.qld.gov.au)
  20. ^ overseas (electrek.co)
  21. ^ 'Self-driving' cars are still a long way off. Here are three reasons why (theconversation.com)

Read more https://theconversation.com/when-self-driving-cars-crash-whos-responsible-courts-and-insurers-need-to-know-whats-inside-the-black-box-180334
