The Times Australia

The rise of the ‘machine defendant’ – who’s to blame when an AI makes mistakes?

  • Written by Michael Duffy, Associate Professor, Monash Business School, Director Corporate Law, Organisation and Litigation Research Group (CLOL), Monash University

Few industries remain untouched by the potential for transformation through artificial intelligence (AI) – or at least the hype.

For business, the technology’s promise goes far beyond writing emails. It’s already being used to automate a wide range of business processes and interactions, coach employees[1], and even help doctors[2] analyse medical data.

Competition between the various creators of AI models – including OpenAI, Meta, Anthropic and Google – will continue to drive rapid improvement.

We should expect these systems to get much smarter over time, which means we may begin to trust them with more and more responsibility.

The big question then becomes: what if something goes badly wrong? Who’s ultimately responsible for the decisions made by a machine?

My research[3] has examined this very problem. Worryingly, our current legal frameworks may not be up to scratch.

We seem to have avoided catastrophe – so far

As any technology advances, it is inevitable things will go wrong. We’ve already seen this with the internet, which has delivered enormous benefits to society but also created a host of new problems – such as social media addiction[4], data breaches[5] and the rise of cybercrime[6].

So far, we seem to have avoided a global internet catastrophe. Yet the CrowdStrike outage in July – which quickly brought businesses and many other services to a standstill – offered a timely reminder of just how reliant on technology we’ve become, and how quickly things can fall apart in such an interdependent web.

The CrowdStrike outage in July hinted at just how vulnerable our tech-enabled global economy has become. Michael Dwyer/AP[7]

Read more: One small update brought down millions of IT systems around the world. It's a timely warning[8]

Like the early internet, generative AI also promises society immense benefits, but is likely to have some significant and unpredictable downsides.

There’s certainly been no shortage of warnings. At the extreme, some experts believe out-of-control AI could pose a “nuclear-level[9]” threat, and present a major existential risk[10] for humanity.

One of the most obvious risks is that “bad actors” – such as organised crime groups and rogue nation states – use the technology to deliberately cause harm. This could include using deepfakes and other misinformation to influence elections, or to conduct cybercrimes en masse. We’ve already seen examples[11] of such use.

Less dramatic, but still highly problematic, are the risks that arise when we entrust important tasks and responsibilities to AI, particularly in running businesses and other essential services. It’s certainly no stretch of the imagination to envisage a future global tech outage caused by computer code written and shipped entirely by AI.

When these AIs make autonomous decisions that inadvertently cause harm – whether financial loss or actual injury – whom do we hold liable?

Our laws aren’t prepared

Worryingly, our existing theories of legal liability may be ill-equipped for this new reality.

This is because, apart from some product liability laws, current theories usually require fault in the form of intention, or at least provable negligence, on the part of an individual.

AI systems have ‘emergent’ behaviours that often can’t be predicted by their creators. DC Studio/Shutterstock[12]

A claim in negligence, for example, requires that the harm was reasonably foreseeable and actually caused by the conduct of the designer, manufacturer, seller or whoever else might be the defendant in a particular case.

But as AI systems continue to advance and become more intelligent, they will almost certainly do things whose outcomes were not fully anticipated by their manufacturers, designers and so on.

This “emergent behaviour[13]” could arise because the AI has become more intelligent than its creators. But it could also reflect self-protective and then self-interested drives or objectives[14] by advanced AI systems.

My own research[15] seeks to highlight a major looming problem in the way we assess liability.

In a hypothetical case in which an AI has caused significant harm, its human and corporate creators may be able to shield themselves from criminal or civil liability.

They could do this by arguing that the damage was not reasonably foreseeable[16] by them, or that the AI’s unexpected actions broke the chain of causation[17] between the conduct of the manufacturer and the loss, damage or harm suffered by the victims.

These would be possible defences to both criminal and civil actions.

So, too, would be the criminal defence argument that the “fault element[18]” of an offence – intention, knowledge, recklessness or negligence – on the part of the AI system’s designer was not matched by the necessary “physical element[19]”, which in this instance would have been committed by a machine.

We need to prepare now

Market forces are already driving things rapidly forward in artificial intelligence. To where, exactly, is less certain.

It may turn out that the common law we have now, developed through the courts, is adaptable enough to deal with these new problems. But it’s also possible we’ll find current laws lacking, which could add a sense of injustice to any future disasters.

It will be important to make sure those corporations who have profited most from the development of AI are also made responsible for its costs and consequences if things go wrong.

Preparing to address this problem should be a priority for the courts and governments of all nation states, not just Australia.

References

  1. ^ coach employees (www.hcamag.com)
  2. ^ help doctors (postgraduateeducation.hms.harvard.edu)
  3. ^ research (www.researchgate.net)
  4. ^ social media addiction (theconversation.com)
  5. ^ data breaches (theconversation.com)
  6. ^ cybercrime (theconversation.com)
  7. ^ Michael Dwyer/AP (photos.aap.com.au)
  8. ^ One small update brought down millions of IT systems around the world. It's a timely warning (theconversation.com)
  9. ^ nuclear-level (www.scientificamerican.com)
  10. ^ existential risk (www.bbc.com)
  11. ^ examples (theconversation.com)
  12. ^ DC Studio/Shutterstock (www.shutterstock.com)
  13. ^ emergent behaviour (tedai-sanfrancisco.ted.com)
  14. ^ drives or objectives (www.taylorfrancis.com)
  15. ^ research (www.researchgate.net)
  16. ^ foreseeable (heinonline.org)
  17. ^ causation (books.google.com.au)
  18. ^ fault element (www.ag.gov.au)
  19. ^ physical element (www.ag.gov.au)


Read more https://theconversation.com/the-rise-of-the-machine-defendant-whos-to-blame-when-an-ai-makes-mistakes-235019
