When self-driving cars crash, who's responsible? Courts and insurers need to know what's inside the 'black box'

  • Written by Aaron J. Snoswell, Post-doctoral Research Fellow, Computational Law & AI Accountability, Queensland University of Technology
The first serious accident involving a self-driving car in Australia occurred in March this year. A pedestrian suffered life-threatening injuries when hit by a Tesla Model 3[1] in “autopilot” mode.

In the US, the highway safety regulator is investigating a series of accidents[2] where Teslas on autopilot crashed into first-responder vehicles with flashing lights during traffic stops.

A Tesla Model 3 collides with a stationary emergency responder vehicle in the US. NBC / YouTube[3]

The decision-making processes of “self-driving” cars are often opaque and unpredictable[4] (even to their manufacturers), so it can be hard to determine who should be held accountable for incidents such as these. However, the growing field of “explainable AI” may help provide some answers.

Read more: Who (or what) is behind the wheel? The regulatory challenges of driverless cars[5]

Who is responsible when self-driving cars crash?

While self-driving cars are new, they are still machines made and sold by manufacturers. When they cause harm, we should ask whether the manufacturer (or software developer) has met their safety responsibilities.

Modern negligence law comes from the famous case of Donoghue v Stevenson[6], where a woman discovered a decomposing snail in her bottle of ginger beer. The manufacturer was found negligent, not because he was expected to directly predict or control the behaviour of snails, but because his bottling process was unsafe.

By this logic, manufacturers and developers of AI-based systems like self-driving cars may not be able to foresee and control everything the “autonomous” system does, but they can take measures to reduce risks. If their risk management, testing, audits and monitoring practices are not good enough, they should be held accountable.

How much risk management is enough?

The difficult question will be “How much care and how much risk management is enough?” In complex software, it is impossible to test for every bug[7] in advance. How will developers and manufacturers know when to stop?

Fortunately, courts, regulators and technical standards bodies have experience in setting standards of care and responsibility for risky but useful activities.

Standards could be very exacting, like the European Union’s draft AI regulation[8], which requires risks to be reduced “as far as possible” without regard to cost. Or they may be more like Australian negligence law, which permits less stringent management for less likely or less severe risks, or where risk management would reduce the overall benefit of the risky activity.

Legal cases will be complicated by AI opacity

Once we have a clear standard for risks, we need a way to enforce it. One approach could be to give a regulator powers to impose penalties (as the ACCC does in competition cases, for example).

Individuals harmed by AI systems must also be able to sue. In cases involving self-driving cars, lawsuits against manufacturers will be particularly important.

However, for such lawsuits to be effective, courts will need to understand in detail the processes and technical parameters of the AI systems.

Manufacturers often prefer not to reveal such details for commercial reasons. But courts already have procedures to balance commercial interests with an appropriate amount of disclosure to facilitate litigation.

A greater challenge may arise when AI systems themselves are opaque “black boxes[9]”. For example, Tesla’s autopilot functionality relies on “deep neural networks[10]”, a popular type of AI system in which even the developers can never be entirely sure how or why it arrives at a given outcome.

‘Explainable AI’ to the rescue?

Opening the black box of modern AI systems is the focus of a new[11] wave[12] of computer science and humanities scholars[13]: the so-called “explainable AI” movement.

The goal is to help developers and end users understand how AI systems make decisions, either by changing how the systems are built or by generating explanations after the fact.

In a classic example[14], an AI system mistakenly classifies a picture of a husky as a wolf. An “explainable AI” method reveals the system focused on snow in the background of the image, rather than the animal in the foreground.

Explainable AI in action: an AI system incorrectly classifies the husky on the left as a ‘wolf’, and at right we see this is because the system was focusing on the snow in the background of the image. Ribeiro, Singh & Guestrin[15]
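The technique behind this example works by perturbing the input many times, querying the black-box model on each variant, and fitting a simple, interpretable surrogate model to the results. The sketch below is a minimal, hypothetical illustration of that idea (not Tesla's system or the original authors' code): the invented "black box" secretly relies only on a "snow in the background" feature, and the local linear surrogate recovers exactly that.

```python
import numpy as np

# Hypothetical black-box classifier: it secretly relies on feature 2
# ("background snow brightness"), ignoring the animal features 0 and 1.
def black_box_predict(x):
    # Returns the probability that the image is a "wolf"
    return 1 / (1 + np.exp(-(4.0 * x[:, 2] - 2.0)))

rng = np.random.default_rng(0)
instance = np.array([0.9, 0.8, 0.95])  # a husky photo with a snowy background

# LIME-style local explanation:
# 1. Perturb the instance and query the black box on each variant.
perturbations = instance + rng.normal(0, 0.3, size=(500, 3))
preds = black_box_predict(perturbations)

# 2. Weight each variant by its closeness to the original instance.
distances = np.linalg.norm(perturbations - instance, axis=1)
weights = np.exp(-(distances ** 2))

# 3. Fit a weighted linear surrogate model; its coefficients are the
#    local feature importances.
X = np.hstack([perturbations, np.ones((500, 1))])  # add intercept column
W = np.diag(weights)
coef = np.linalg.solve(X.T @ W @ X, X.T @ W @ preds)
importances = coef[:3]

# The "snow" feature (index 2) dominates the explanation.
print(int(np.argmax(np.abs(importances))))
```

The surrogate never needs access to the black box's internals, only the ability to query it, which is why this family of methods is attractive where a manufacturer's model is commercially confidential.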

How this might be used in a lawsuit will depend on various factors, including the specific AI technology and the harm caused. A key concern will be how much access the injured party is given to the AI system.

The Trivago case

Our new research[16] analysing an important recent Australian court case provides an encouraging glimpse of what this could look like.

In April 2022, the Federal Court penalised global hotel booking company Trivago $44.7 million for misleading customers about hotel room rates on its website and in TV advertising, in a case brought by competition watchdog the ACCC[17]. A critical question was how Trivago’s complex ranking algorithm chose the top-ranked offer for hotel rooms.

The Federal Court set up rules for evidence discovery with safeguards to protect Trivago’s intellectual property, and both the ACCC and Trivago called expert witnesses to provide evidence explaining how Trivago’s AI system worked.

Even without full access to Trivago’s system, the ACCC’s expert witness was able to produce compelling evidence that the system’s behaviour was not consistent with Trivago’s claim of giving customers the “best price”.

This shows how technical experts and lawyers together can overcome AI opacity in court cases. However, the process requires close collaboration and deep technical expertise, and will likely be expensive.

Regulators can take steps now to streamline things in the future, such as requiring AI companies to adequately document their systems.

The road ahead

Vehicles with various degrees of automation[18] are becoming more common, and fully autonomous taxis and buses are being tested both in Australia[19] and overseas[20].

Keeping our roads as safe as possible will require close collaboration between AI and legal experts; regulators, manufacturers, insurers and users will all have roles to play.

Read more: 'Self-driving' cars are still a long way off. Here are three reasons why[21]

References

  1. ^ hit by a Tesla Model 3 (www.9news.com.au)
  2. ^ series of accidents (www.skynettoday.com)
  3. ^ NBC / YouTube (www.youtube.com)
  4. ^ opaque and unpredictable (journals.sagepub.com)
  5. ^ Who (or what) is behind the wheel? The regulatory challenges of driverless cars (theconversation.com)
  6. ^ Donoghue v Stevenson (legalheritage.sclqld.org.au)
  7. ^ impossible to test for every bug (jolt.law.harvard.edu)
  8. ^ draft AI regulation (op.europa.eu)
  9. ^ black boxes (doi.org)
  10. ^ deep neural networks (www.louisbouchard.ai)
  11. ^ new (facctconference.org)
  12. ^ wave (eaamo.org)
  13. ^ scholars (www.aies-conference.com)
  14. ^ a classic example (dl.acm.org)
  15. ^ Ribeiro, Singh & Guestrin (dx.doi.org)
  16. ^ new research (aaronsnoswell.github.io)
  17. ^ competition watchdog the ACCC (www.accc.gov.au)
  18. ^ various degrees of automation (theconversation.com)
  19. ^ in Australia (news.redland.qld.gov.au)
  20. ^ overseas (electrek.co)
  21. ^ 'Self-driving' cars are still a long way off. Here are three reasons why (theconversation.com)

Read more https://theconversation.com/when-self-driving-cars-crash-whos-responsible-courts-and-insurers-need-to-know-whats-inside-the-black-box-180334
