The Times Australia

Distrust in AI is on the rise – but along with healthy scepticism comes the risk of harm

  • Written by Simon Coghlan, Senior Lecturer in Digital Ethics, Deputy Director Centre for AI and Digital Ethics, School of Computing and Information Systems, The University of Melbourne

Some video game players recently criticised[1] the cover art on a new video game for being generated with artificial intelligence (AI). Yet the cover art for Little Droid, which also featured in the game’s launch trailer[2] on YouTube, was not concocted by AI. It was, the developers claim, carefully designed by a human artist.

Surprised by the attacks on “AI slop[3]”, the studio Stamina Zero posted a video showing earlier versions of the artist’s handiwork. But while some accepted this evidence, others remained sceptical.

In addition, several players felt that even if the Little Droid cover art was human made, it nonetheless resembled AI-generated work.

However, some art is deliberately designed to have the futuristic, glossy appearance associated with image generators like Midjourney, DALL-E and Stable Diffusion.

Stamina Zero published a video showing the steps the artist took to create the cover art.

It’s becoming increasingly easy for images, videos or audio made with AI to be deceptively passed off[4] as authentic or human made. The twist in cases like Little Droid is that what is human or “real” may be incorrectly perceived as machine generated – resulting in misplaced backlash.

Such cases highlight the growing problem of balancing trust and distrust in the generative AI era. In this new world, both cynicism and gullibility about what we encounter online are potential problems – and can lead to harm.

Wrongful accusations

This issue extends well beyond gaming. There are growing criticisms[5] of AI being used to generate and publish music on platforms like Spotify.

As a result, some indie artists have been wrongfully accused[6] of releasing AI-generated music, damaging their burgeoning careers as musicians.

In 2023, an Australian photographer was wrongly disqualified[7] from a photo contest because of the erroneous judgement that her entry had been produced by artificial intelligence.

Writers, including students submitting essays, can also be falsely accused[8] of sneakily using AI. Currently available AI detection tools are far from foolproof[9] – and some argue they may never be entirely reliable[10].

Recent discussions[11] have drawn attention to common characteristics of AI writing, including the em dash – which, as authors, we often employ ourselves.

Given that text from systems like ChatGPT has characteristic features, writers face a difficult decision: should they continue writing in their own style and risk being accused of using AI, or should they try to write differently?

Read more: Google's SynthID is the latest tool for catching AI-made content. What is AI 'watermarking' and does it work?[12]

The delicate balance of trust and distrust

Graphic designers, voice actors and many others are rightly worried about AI replacing them[13]. They are also understandably concerned about tech companies using their labour[14] to train AI models without consent, credit or compensation[15].

There are further ethical concerns that AI-generated images threaten Indigenous inclusion[16] by erasing cultural nuances and challenging Indigenous cultural and intellectual property rights.

At the same time, the cases above illustrate the risks of rejecting authentic human effort and creativity because of a false belief that it is AI-generated. This too can be unfair. People wrongly accused of using AI can suffer[17] emotional, financial and reputational harm.

On the one hand, being fooled that AI content is authentic is a problem. Consider deepfakes[18], bogus videos and false images of politicians or celebrities. AI content purporting to be real can be linked to scams and dangerous misinformation[19].

On the other hand, mistakenly distrusting authentic content[20] is also a problem. For example, rejecting the authenticity of a video of war crimes or hate speech by politicians – based on the mistaken or deliberate belief that the content was AI generated – can lead to great harm and injustice.

Unfortunately, the growth of dubious content allows unscrupulous individuals to claim that video, audio or images exposing real wrongdoing are fake[21].

As distrust increases, democracy and social cohesion[22] may begin to fray. Given the potential consequences, we must be wary of excessive scepticism about the origin or provenance of online content.

A path forward

AI is a cultural and social technology[23]. It mediates and shapes our relationships with one another, and has potentially transformational effects on how we learn and share information.

The fact that AI is challenging our trust relationships with companies, content and each other is not surprising. And people are not always to blame when they are fooled[24] by AI-manufactured material. Such outputs are increasingly realistic[25].

Furthermore, the responsibility to avoid deception should not fall entirely on internet users and the public. Digital platforms, AI developers, tech companies and producers of AI material should be held accountable through regulation and transparency requirements[26] around AI use.

Even so, internet users will still need to adapt. The need to exercise a balanced and fair sense of scepticism toward online material is becoming more urgent.

This means adopting the right level of trust and distrust[27] in digital environments.

The philosopher Aristotle spoke of practical wisdom[28]. Through experience, education and practice, a practically wise person develops the skill to judge well in life. Because they tend to avoid poor judgement, including excessive scepticism and naivete, practically wise people are better able to flourish and do well by others.

We need to hold tech companies and platforms to account for harm and deception caused by AI. We also need to educate ourselves, our communities, and the next generation to judge well and develop some practical wisdom[29] in a world awash with AI content.

References

  1. ^ recently criticised (www.theguardian.com)
  2. ^ launch trailer (www.youtube.com)
  3. ^ AI slop (theconversation.com)
  4. ^ deceptively passed off (www.esafety.gov.au)
  5. ^ growing criticisms (www.unimelb.edu.au)
  6. ^ wrongfully accused (www.theguardian.com)
  7. ^ wrongly disqualified (news.artnet.com)
  8. ^ falsely accused (andrewggibson.com)
  9. ^ are far from foolproof (www.zdnet.com)
  10. ^ they may never be entirely reliable (theconversation.com)
  11. ^ Recent discussions (www.rollingstone.com)
  12. ^ Google's SynthID is the latest tool for catching AI-made content. What is AI 'watermarking' and does it work? (theconversation.com)
  13. ^ replacing them (dl.acm.org)
  14. ^ tech companies using their labour (thecon.ai)
  15. ^ without consent, credit or compensation (theconversation.com)
  16. ^ threaten Indigenous inclusion (theconversation.com)
  17. ^ wrongly accused of using AI can suffer (andrewggibson.com)
  18. ^ deepfakes (www.bbc.co.uk)
  19. ^ linked to scams and dangerous misinformation (theconversation.com)
  20. ^ authentic content (www.bbc.com)
  21. ^ are fake (politicalsciencenow.com)
  22. ^ democracy and social cohesion (theconversation.com)
  23. ^ cultural and social technology (www.science.org)
  24. ^ fooled (neurosciencenews.com)
  25. ^ increasingly realistic (journals.sagepub.com)
  26. ^ transparency requirements (www.austlii.edu.au)
  27. ^ trust and distrust (dl.acm.org)
  28. ^ practical wisdom (plato.stanford.edu)
  29. ^ practical wisdom (global.oup.com)

Read more https://theconversation.com/distrust-in-ai-is-on-the-rise-but-along-with-healthy-scepticism-comes-the-risk-of-harm-260189
