
The government says more people need to use AI. Here’s why that’s wrong

  • Written by Erica Mealy, Lecturer in Computer Science, University of the Sunshine Coast



The Australian government this week released voluntary artificial intelligence (AI) safety standards[1], alongside a proposals paper[2] calling for greater regulation of the use of the fast-growing technology in high-risk situations.

The take-home message[3] from the federal Minister for Industry and Science, Ed Husic, was:

We need more people to use AI and to do that we need to build trust.

But why exactly do people need to trust this technology? And why exactly do more people need to use it?

AI systems are trained on incomprehensibly large data sets using advanced mathematics most people don’t understand. They produce results we have no way of verifying. Even flagship, state-of-the-art systems produce output riddled with errors.

ChatGPT appears to be growing less accurate over time[4]. Even at its best it can’t tell you what letters[5] are in the word “strawberry”. Meanwhile, Google’s Gemini chatbot has recommended putting glue on pizza[6], among other comical failures.

Given all this, public distrust of AI seems entirely reasonable. The case for using more of it seems quite weak – and also potentially dangerous.

Federal Minister for Industry and Science Ed Husic wants more people to use AI. Mick Tsikas/AAP[7]

AI risks

Much has been made of the “existential threat” of AI[8], and how it will lead to job losses. The harms AI presents range from the overt – such as autonomous vehicles that hit pedestrians[9] – to the more subtle, such as AI recruitment systems that demonstrate bias against women[10] or AI legal system tools with a bias against people of colour[11].

Other harms include fraud from deepfakes of coworkers[12] and of loved ones[13].

Never mind that the federal government’s own recent reporting[14] showed humans are more effective, efficient and productive than AI.

But if all you have is a hammer[15], everything looks like a nail.

Technology adoption still follows this familiar pattern. AI is not always the best tool for the job[16]. But when faced with an exciting new technology, we often use it without considering whether we should.

Instead of encouraging more people to use AI, we should all learn what is, and is not, a good use of AI.

Is it the technology we need to trust – or the government?

Just what does the Australian government get from more people using AI?

One of the largest risks is the leaking of private data[17]. These tools are collecting our private information, our intellectual property and our thoughts on a scale we have never before seen.

In the case of ChatGPT, Google Gemini, Otter.ai and other AI tools, much of this data is not processed onshore in Australia.

These companies preach transparency, privacy and security[18]. But it is often hard to uncover whether your data is used[19] to train their newer models, how they secure it[20], or what other organisations or governments have access to that data.

Recently, federal Minister for Government Services, Bill Shorten, presented the government’s proposed Trust Exchange program, which raised concerns about the collection of even more data about Australian citizens[21]. In his speech to the National Press Club, Shorten openly noted the support from large technology companies, including Google[22].

If data about Australians were to be collated across different technology platforms, including AI, we could see widespread mass surveillance.

But even more worryingly, we have observed the power of technology to influence politics[23] and behaviour.

Automation bias[24] is the term for the tendency of users to believe the technology is “smarter” than they are. Too much trust in AI poses even more risk to Australians – by encouraging more use of technology without adequate education, we could be subjecting our population to a comprehensive system of automated surveillance and control.

And even if you personally could escape this system, it would still undermine social trust and cohesion and influence people without them knowing.

These factors are even more reason to regulate the use of AI, as the Australian government is now looking to do. But doing so does not have to be accompanied by a forceful encouragement to also use it.

Let’s dial down the blind hype

The topic of AI regulation is important.

The International Organisation for Standardisation has established a standard on the use and management of AI systems[25]. Its implementation in Australia would lead to better-reasoned and better-regulated use of AI.

This standard and others are the foundation of the government’s proposed Voluntary AI Safety Standard.

What was problematic in this week’s announcement from the federal government was not the call for greater regulation, but the blind hyping of AI use.

Let’s focus on protecting Australians – not on mandating their need to use, and trust, AI.

References

  1. ^ voluntary artificial intelligence (AI) safety standards (www.industry.gov.au)
  2. ^ proposals paper (business.gov.au)
  3. ^ take-home message (www.minister.industry.gov.au)
  4. ^ ChatGPT appears to be growing less accurate over time (decrypt.co)
  5. ^ can’t tell you what letters (www.inc-aus.com)
  6. ^ Google’s Gemini chatbot has recommended putting glue on pizza (www.forbes.com)
  7. ^ Mick Tsikas/AAP (photos.aap.com.au)
  8. ^ “existential threat” of AI (www.scientificamerican.com)
  9. ^ autonomous vehicles that hit pedestrians (www.reuters.com)
  10. ^ AI recruitment systems that demonstrate bias against women (www.reuters.com)
  11. ^ tools with a bias against people of colour (www.propublica.org)
  12. ^ coworkers (edition.cnn.com)
  13. ^ loved ones (www.newyorker.com)
  14. ^ federal government’s own recent reporting (ia.acs.org.au)
  15. ^ all you have is a hammer (everydayconcepts.io)
  16. ^ AI is not always the best tool for the job (digileaders.com)
  17. ^ leaking of private data (link.springer.com)
  18. ^ transparency, privacy and security (otter.ai)
  19. ^ your data is used (botpress.com)
  20. ^ how they secure it (cybersecurity.att.com)
  21. ^ collection of even more data about Australian citizens (efa.org.au)
  22. ^ including Google (ministers.dss.gov.au)
  23. ^ influence politics (theconversation.com)
  24. ^ Automation bias (www.forbes.com)
  25. ^ use and management of AI systems (www.iso.org)

Read more https://theconversation.com/the-government-says-more-people-need-to-use-ai-heres-why-thats-wrong-238327
