The Times Australia
The Times World News


How Australia’s new AI ‘guardrails’ can clean up the messy market for artificial intelligence

  • Written by Nicholas Davis, Industry Professor of Emerging Technology and Co-Director, Human Technology Institute, University of Technology Sydney

Australia’s federal government has today launched a proposed set of mandatory guardrails for high-risk AI[1] alongside a voluntary safety standard[2] for organisations using AI.

Each of these documents offers ten mutually reinforcing guardrails that set clear expectations for organisations across the AI supply chain. They are relevant for all organisations using AI, including internal systems aimed at boosting employee efficiency and externally facing systems such as chatbots.

Most of the guardrails relate to things like accountability, transparency, record-keeping and making sure humans are overseeing AI systems in a meaningful way. They are aligned with emerging international standards such as the ISO standard for AI management[3] and the European Union’s AI Act[4].

The proposals for mandatory requirements for high-risk AI – which are open to public submissions[5] for the next month – recognise that AI systems are special in ways that limit the ability of existing laws to effectively prevent or mitigate a wide range of harms to Australians. While defining precisely what constitutes a high-risk setting is a core part of the consultation, the proposed principle-based approach would likely capture any systems that have a legal effect. Examples might include AI recruitment systems, systems that may limit human rights (including some facial recognition systems), and any systems that can cause physical harm, such as autonomous vehicles.

Well-designed guardrails will improve technology and make us all better off. On this front, the government should accelerate law reform efforts to clarify existing rules and improve both transparency and accountability in the market. At the same time, we don’t need to – nor should we – wait for the government to act.

The AI market is a mess

As it stands, the market for AI products and services is a mess. The central problem is that people don’t know how AI systems work, when they’re using them, and whether the output helps or hurts them.

Take, for example, a company that recently asked my advice on a generative AI service projected to cost hundreds of thousands of dollars each year. It was worried about falling behind competitors and having difficulty choosing between vendors.

Yet, in the first 15 minutes of discussion, the company revealed it had no reliable information about the potential benefit to the business, and no knowledge of existing generative AI use by its teams.

It’s important we get this right. If you believe even a fraction of the hype, AI represents a huge opportunity for Australia. Estimates referenced by the federal government[6] suggest the economic boost from AI and automation could be up to A$600 billion every year by 2030. This would lift our GDP to 25% above 2023 levels.

But all of this is at risk. The evidence is in the alarmingly high failure rates of AI projects (above 80% by some estimates[7]), an array of reckless rollouts, low levels of citizen trust[8] and the prospect of thousands of Robodebt-esque crises across both industry and government.

The information asymmetry problem

A lack of skills and experience among decision-makers is undoubtedly part of the problem. But the rapid pace of innovation in AI is supercharging another challenge: information asymmetry.

Information asymmetry is a simple, Nobel prize-winning economic concept[9] with serious implications for everyone. And it’s a particularly pernicious challenge when it comes to AI.

When buyers and sellers have uneven knowledge about a product or service, it doesn’t just mean one party gains at the other’s expense. It can lead to poor-quality goods dominating the market, and even the market failing entirely.

AI creates information asymmetries in spades. AI models are technical and complex, they are often embedded and hidden inside other systems, and they are increasingly being used to make important choices.

Balancing out these asymmetries should deeply concern all of us. Boards, executives and shareholders want AI investments to pay off. Consumers want systems that work in their interests. And we all want to enjoy the benefits of economic expansion while avoiding the very real harms AI systems can inflict if they fail, or if they are used maliciously or deployed inappropriately.

In the short term, at least, companies selling AI gain a real benefit from restricting information so they can do deals with naïve counterparties. Solving this problem will require more than upskilling. It means using a range of tools and incentives to gather and share accurate, timely and important information about AI systems.

What businesses can do today

Now is the time to act. Businesses across Australia can pick up the Voluntary AI Safety Standard[10] (or the International Organization for Standardization’s version[11]) and start gathering and documenting the information they need to make better decisions about AI today.

This will help in two ways. First, it will help businesses take a structured approach to understanding and governing their own use of AI systems, to ask useful questions of (and demand answers from) their technology partners, and to signal to the market that their AI use is trustworthy.

Second, as more and more businesses adopt the standard, Australian and international vendors and deployers will feel market pressure to ensure their products and services are fit for purpose. In turn, it will become cheaper and easier for all of us to know whether the AI system we’re buying, relying on or being judged by actually serves our needs.

Clearing a path

Australian consumers and businesses both want AI to be safe and responsible. But we urgently need to close the huge gap that exists between aspiration and practice.

The National AI Centre’s Responsible AI index[12] shows that while 78% of organisations believed they were developing and deploying AI systems responsibly, only 29% of organisations were applying actual practices towards this end.

Safe and responsible AI is where good governance meets good business practice and human-centred technology. In the bigger picture, it’s also about ensuring that innovation thrives in a well-functioning market. On both these fronts, standards can help us clear a path through the clutter.

References

  1. ^ mandatory guardrails for high-risk AI (consult.industry.gov.au)
  2. ^ voluntary safety standard (www.industry.gov.au)
  3. ^ ISO standard for AI management (www.iso.org)
  4. ^ European Union’s AI Act (artificialintelligenceact.eu)
  5. ^ public submissions (consult.industry.gov.au)
  6. ^ Estimates referenced by the federal government (storage.googleapis.com)
  7. ^ above 80% by some estimates (www.rand.org)
  8. ^ low levels of citizen trust (kpmg.com)
  9. ^ Nobel prize-winning economic concept (www.nobelprize.org)
  10. ^ Voluntary AI Safety Standard (www.industry.gov.au)
  11. ^ International Organization for Standardization’s version (www.iso.org)
  12. ^ Responsible AI index (url.au.m.mimecastprotect.com)

Read more https://theconversation.com/how-australias-new-ai-guardrails-can-clean-up-the-messy-market-for-artificial-intelligence-238307
