The Times Australia


AI has potential to revolutionise health care – but we must first confront the risk of algorithmic bias

  • Written by Mangor Pedersen, Associate Professor of Psychology and Neuroscience, Auckland University of Technology

Artificial Intelligence (AI) is moving fast and will become an important support tool in clinical care. Research suggests AI algorithms can accurately detect melanomas[1] and predict future breast cancers[2].

But before AI can be integrated into routine clinical use, we must address the challenge of algorithmic bias. AI algorithms may have inherent biases that could lead to discrimination and privacy issues. AI systems could also make decisions without the required oversight or human input.

An example of the potentially harmful effects of AI comes from an international project[3] which aims to use AI to save lives by developing breakthrough medical treatments. In an experiment, the team reversed their “good” AI model to create a new AI model designed to do “harm”.

In less than six hours of training, the reversed AI algorithm generated tens of thousands of potential chemical warfare agents, many of them more dangerous than current warfare agents. This is an extreme example concerning chemical compounds, but it serves as a wake-up call to evaluate AI’s known and as-yet-unknown ethical consequences.

AI in clinical care

In medicine, we deal with people’s most private data and often life-changing decisions. Robust AI ethics frameworks are imperative.

The Australian Epilepsy Project[4] aims to improve people’s lives and make clinical care more widely available. Based on advanced brain imaging, genetic and cognitive information from thousands of people with epilepsy, we plan to use AI to answer currently unanswerable questions[5].

Will this person’s seizures continue? Which medicine is most effective? Is brain surgery a viable treatment option? These are fundamental questions that modern medicine struggles to address.

As the AI lead of this project, my main concern is that AI is moving fast and regulatory oversight is minimal. These issues are why we recently established an ethical framework[6] for using AI as a clinical support tool. This framework intends to ensure our AI technologies are open, safe and trustworthy, while fostering inclusivity and fairness in clinical care.

Read more: AI is transforming medicine – but it can only work with proper sharing of data[7]

So how do we implement AI ethics in medicine to reduce bias and retain control over algorithms? The computer science principle “garbage in, garbage out” applies to AI. If we collect biased data from small samples, our AI algorithms will likely be biased and fail to replicate in other clinical settings.
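The “garbage in, garbage out” problem can be sketched with a toy example. All numbers below are purely illustrative: a hypothetical “model” trained on a skewed sample of doctors (90% male) simply learns the majority label, and its apparent accuracy collapses when applied to a balanced real-world population.

```python
from collections import Counter

# Hypothetical, illustrative training sample: 90 male and 10 female
# doctors, even though real-world prevalence in OECD countries is
# roughly 50/50.
train = ["male"] * 90 + ["female"] * 10

# A naive "model" that always predicts the most common training label.
majority_label = Counter(train).most_common(1)[0][0]

# A balanced real-world population: 50 male, 50 female doctors.
real_world = ["male"] * 50 + ["female"] * 50

# The skewed sample makes the model look accurate in training (90%),
# but it is right only half the time on the balanced population.
accuracy = sum(majority_label == actual for actual in real_world) / len(real_world)
print(majority_label)  # male
print(accuracy)        # 0.5
```

Real clinical models are far more complex, but the failure mode is the same: a model fitted to an unrepresentative sample can look accurate in-house yet perform no better than chance elsewhere.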

Examples of biases are not hard to find in contemporary AI models. Popular large language models (such as ChatGPT) and latent diffusion models (DALL-E and Stable Diffusion) show how explicit biases[8] regarding gender, ethnicity and socioeconomic status can occur.

Researchers found that simple user prompts generate images perpetuating ethnic, gendered and class stereotypes. For example, a prompt for a doctor generates mostly[9] images of male doctors, which is inconsistent with reality as about half of all doctors in OECD countries are female.

Safe implementation of medical AI

The solution to preventing bias and discrimination is not trivial. Enabling health equality and fostering inclusivity in clinical studies are likely among the primary solutions[10] to combating biases in medical AI.

Encouragingly, the US Food and Drug Administration recently proposed making diversity mandatory[11] in clinical trials. This proposal represents a move towards less biased and community-based clinical studies.

Another obstacle to progress is limited research funding. AI algorithms typically require substantial amounts of data, which can be expensive. It is crucial to establish enhanced funding mechanisms that provide researchers with the necessary resources to gather clinically relevant data appropriate for AI applications.

We also argue we should always know the inner workings of AI algorithms and understand how they reach their conclusions and recommendations. This concept is often referred to as “explainability” in AI. It relates to the idea that humans and machines must work together for optimal results.
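A minimal sketch of what “explainability” can mean in practice: a transparent scoring model whose every contribution can be listed and audited by a clinician. The feature names and weights below are invented for illustration only and are not drawn from any real clinical model.

```python
# Hypothetical, illustrative weights for a transparent risk score.
WEIGHTS = {"age_over_65": 2.0, "prior_seizure": 3.0, "on_medication": -1.5}

def explain_score(patient):
    """Return the total score plus the contribution of each feature,
    so a human can see exactly how the conclusion was reached."""
    contributions = {f: WEIGHTS[f] * patient.get(f, 0) for f in WEIGHTS}
    return sum(contributions.values()), contributions

patient = {"age_over_65": 1, "prior_seizure": 1, "on_medication": 1}
total, parts = explain_score(patient)

print(total)  # 3.5
for feature, value in parts.items():
    print(f"{feature}: {value:+.1f}")
```

Unlike a black-box prediction, each output here can be traced back to a named input, which is the property that lets humans and machines work together rather than leaving clinicians to trust an opaque score.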

We prefer to view the implementation of predictive models as “augmented” rather than “artificial” intelligence – algorithms should be part of the process, and medical professionals must remain in control of the decision making.

Read more: Biased AI can be bad for your health – here's how to promote algorithmic fairness[12]

In addition to encouraging the use of explainable algorithms, we support transparent and open science. Scientists should publish details of AI models and their methodology to enhance transparency and reproducibility.

What do we need in Aotearoa New Zealand to ensure the safe implementation of AI in medical care? AI ethics concerns are primarily led by experts within the field. However, targeted AI regulations, such as the EU’s Artificial Intelligence Act[13], have been proposed to address these ethical considerations.

The European AI law is welcome and will protect people working within “safe AI”. The UK government recently released its proactive approach to AI regulation[14], serving as a blueprint for other government responses to AI safety.

In Aotearoa, we argue for adopting a proactive rather than reactive stance to AI safety. Such a stance would establish an ethical framework for using AI in clinical care and other fields, yielding interpretable, secure and unbiased AI. Consequently, our confidence will grow that this powerful technology benefits society while safeguarding it from harm.

Read more https://theconversation.com/ai-has-potential-to-revolutionise-health-care-but-we-must-first-confront-the-risk-of-algorithmic-bias-204112
