80% of Australians think AI risk is a global priority. The government needs to step up

  • Written by Michael Noetel, Senior Lecturer in Psychology, The University of Queensland

A new nationally representative survey has revealed Australians are deeply concerned about the risks posed by artificial intelligence (AI). They want the government to take stronger action to ensure its safe development and use.

We conducted the survey[1] in early 2024 and found 80% of Australians believe preventing catastrophic risks from advanced AI systems should be a global priority on par with pandemics and nuclear war.

As AI systems become more capable, decisions about how we develop, deploy and use AI are now critical. The promise of powerful technology may tempt companies – and countries – to race ahead[2] without heeding the risks.

Our findings also reveal a gap between the AI risks that media and government tend to focus on, and the risks Australians think are most important.

Read more: Demand for computer chips fuelled by AI could reshape global politics and security[3]

Public concern about AI risks is growing

The development and use of increasingly powerful AI is still on the rise. Recent releases such as Google’s Gemini[4] and Anthropic’s Claude 3[5] show seemingly near-human capabilities in professional, medical and legal domains.

But the hype has been tempered by rising levels of public and expert concern. Last year, more than 500 people and organisations made submissions to the Australian government’s Safe and Responsible AI discussion paper[6].

They described AI-related risks such as biased decision making, erosion of trust in democratic institutions through misinformation, and increasing inequality from AI-caused unemployment.

Some are even worried about a particularly powerful AI causing a global catastrophe[7] or human extinction[8]. While this idea is heavily contested, across a series of three[9] large[10] surveys[11], most AI researchers judged there to be at least a 5% chance of superhuman AI being “extremely bad (e.g., human extinction)”.

The potential benefits of AI are considerable. AI is already leading to breakthroughs in biology and medicine[12], and it’s used to control fusion reactors[13], which could one day provide zero-carbon energy. Generative AI improves productivity[14], particularly for learners and students.

However, the speed of progress is raising alarm bells. People worry we aren’t prepared to handle powerful AI systems that could be misused or behave in unintended and harmful ways.

In response to such concerns, the world’s governments are attempting regulation. The European Union has approved a draft AI law[15], the United Kingdom has established an AI safety institute[16], and US President Joe Biden recently signed an executive order to promote safer development and governance of advanced AI[17].

Read more: Who will write the rules for AI? How nations are racing to regulate artificial intelligence[18]

Australians want action to prevent dangerous outcomes from AI

To understand how Australians feel about AI risks and ways to address them, we surveyed a nationally representative sample of 1,141 Australians in January and February 2024.

We found Australians ranked the prevention of “dangerous and catastrophic outcomes from AI” as the number one priority for government action.

Australians are most concerned about AI systems that are unsafe, untrustworthy and misaligned with human values.

Other top worries include AI being used in cyber attacks and autonomous weapons, AI-related unemployment and AI failures causing damage to critical infrastructure.

Strong public support for a new AI regulatory body

Australians expect the government to take decisive action on their behalf. An overwhelming majority (86%) want a new government body dedicated to AI regulation and governance, akin to the Therapeutic Goods Administration[19] for medicines.

Nine in ten Australians also believe the country should play a leading role in international efforts to regulate AI development.

Perhaps most strikingly, two-thirds of Australians would support hitting pause on AI development for six months to allow regulators to catch up.

Read more: I used to work at Google and now I'm an AI researcher. Here's why slowing down AI development is wise[20]

Government plans should meet public expectations

In January 2024, the Australian government published an interim plan for addressing AI risks[21]. It includes strengthening existing laws on privacy, online safety and disinformation. It also acknowledges our current regulatory frameworks aren’t sufficient.

The interim plan outlines the development of voluntary AI safety standards, voluntary labels on AI materials, and the establishment of an advisory body.

Our survey shows Australians support a more safety-focused, regulation-first approach. This contrasts with the targeted and voluntary approach outlined in the interim plan.

It is challenging to encourage innovation while preventing accidents or misuse. But Australians would prefer the government prioritise preventing dangerous and catastrophic outcomes over “bringing the benefits of AI to everyone”.

Some ways to do this include[22]:

  • establishing an AI safety lab with the technical capacity to audit and/or monitor the most advanced AI systems

  • establishing a dedicated AI regulator

  • defining robust standards and guidelines for responsible AI development

  • requiring independent auditing of high-risk AI systems

  • ensuring corporate liability and redress for AI harms

  • increasing public investment in AI safety research

  • actively engaging the public in shaping the future of AI governance.

Figuring out how to effectively govern AI is one of humanity’s great challenges[23]. Australians are keenly aware of the risks of failure, and want our government to address this challenge without delay.

References

  1. ^ conducted the survey (aigovernance.org.au)
  2. ^ race ahead (www.cold-takes.com)
  3. ^ Demand for computer chips fuelled by AI could reshape global politics and security (theconversation.com)
  4. ^ Google’s Gemini (blog.google)
  5. ^ Anthropic’s Claude 3 (www.anthropic.com)
  6. ^ Safe and Responsible AI discussion paper (consult.industry.gov.au)
  7. ^ a global catastrophe (www.vox.com)
  8. ^ human extinction (forecastingresearch.org)
  9. ^ three (wiki.aiimpacts.org)
  10. ^ large (wiki.aiimpacts.org)
  11. ^ surveys (aiimpacts.org)
  12. ^ breakthroughs in biology and medicine (www.nature.com)
  13. ^ control fusion reactors (www.nature.com)
  14. ^ improves productivity (www.nber.org)
  15. ^ has approved a draft AI law (theconversation.com)
  16. ^ has established an AI safety institute (www.gov.uk)
  17. ^ safer development and governance of advanced AI (www.whitehouse.gov)
  18. ^ Who will write the rules for AI? How nations are racing to regulate artificial intelligence (theconversation.com)
  19. ^ Therapeutic Goods Administration (www.tga.gov.au)
  20. ^ I used to work at Google and now I'm an AI researcher. Here's why slowing down AI development is wise (theconversation.com)
  21. ^ interim plan for addressing AI risks (storage.googleapis.com)
  22. ^ Some ways to do this include (arxiv.org)
  23. ^ humanity’s great challenges (theelders.org)

Read more https://theconversation.com/80-of-australians-think-ai-risk-is-a-global-priority-the-government-needs-to-step-up-225175
