Distrust in AI is on the rise – but along with healthy scepticism comes the risk of harm
- Written by Simon Coghlan, Senior Lecturer in Digital Ethics, Deputy Director Centre for AI and Digital Ethics, School of Computing and Information Systems, The University of Melbourne

Some players recently criticised[1] the cover art of a new video game for being generated with artificial intelligence (AI). Yet the cover art for Little Droid, which also featured in the game’s launch trailer[2] on YouTube, was not concocted by AI. It was, the developers claim, carefully designed by a human artist.
Surprised by the attacks on “AI slop[3]”, the studio Stamina Zero posted a video showing earlier versions of the artist’s handiwork. But while some accepted this evidence, others remained sceptical.
In addition, several players felt that even if the Little Droid cover art was human made, it nonetheless resembled AI-generated work.
However, some art is deliberately designed to have the futuristic glossy appearance associated with image generators like Midjourney, DALL-E, and Stable Diffusion.
Stamina Zero published a video showing the steps the artist took to create the cover art.
It’s becoming increasingly easy for images, videos or audio made with AI to be deceptively passed off[4] as authentic or human made. The twist in cases like Little Droid is that what is human or “real” may be incorrectly perceived as machine generated – resulting in misplaced backlash.
Such cases highlight a growing problem: striking the right balance between trust and distrust in the generative AI era. In this new world, both cynicism and gullibility about what we encounter online can lead to harm.
Wrongful accusations
This issue extends well beyond gaming. There are growing criticisms[5] of AI being used to generate and publish music on platforms like Spotify.
Amid this backlash, some indie artists have been wrongfully accused[6] of releasing AI-generated music, damaging their burgeoning careers.
In 2023, an Australian photographer was wrongly disqualified[7] from a photo contest because of the erroneous judgement that her entry was produced by artificial intelligence.
Writers, including students submitting essays, can also be falsely accused[8] of sneakily using AI. Currently available AI detection tools are far from foolproof[9] – and some argue they may never be entirely reliable[10].
Recent discussions[11] have drawn attention to common characteristics of AI writing, including the em dash – which, as authors, we often employ ourselves.
Given that text from systems like ChatGPT has characteristic features, writers face a difficult decision: should they continue writing in their own style and risk being accused of using AI, or should they try to write differently?
Read more: Google's SynthID is the latest tool for catching AI-made content. What is AI 'watermarking' and does it work?[12]
The delicate balance of trust and distrust
Graphic designers, voice actors and many others are rightly worried about AI replacing them[13]. They are also understandably concerned about tech companies using their labour[14] to train AI models without consent, credit or compensation[15].
There are further ethical concerns that AI-generated images threaten Indigenous inclusion[16] by erasing cultural nuances and challenging Indigenous cultural and intellectual property rights.
At the same time, the cases above illustrate the risks of rejecting authentic human effort and creativity due to the false belief that it is AI-generated. This too can be unfair. People wrongly accused of using AI can suffer[17] emotional, financial and reputational harm.
On the one hand, being fooled into believing AI content is authentic is a problem. Consider deepfakes[18], bogus videos and false images of politicians or celebrities. AI content purporting to be real can be linked to scams and dangerous misinformation[19].
On the other hand, mistakenly distrusting authentic content[20] is also a problem. For example, rejecting the authenticity of a video of war crimes or hate speech by politicians – based on the mistaken or deliberate belief that the content was AI generated – can lead to great harm and injustice.
Unfortunately, the growth of dubious content allows unscrupulous individuals to claim that video, audio or images exposing real wrongdoing are fake[21].
As distrust increases, democracy and social cohesion[22] may begin to fray. Given the potential consequences, we must be wary of excessive scepticism about the provenance of online content.
A path forward
AI is a cultural and social technology[23]. It mediates and shapes our relationships with one another, and has potentially transformational effects on how we learn and share information.
The fact that AI is challenging our trust relationships with companies, content and each other is not surprising. And people are not always to blame when they are fooled[24] by AI-manufactured material. Such outputs are increasingly realistic[25].
Furthermore, the responsibility to avoid deception should not fall entirely on internet users and the public. Digital platforms, AI developers, tech companies and producers of AI material should be held accountable through regulation and transparency requirements[26] around AI use.
Even so, internet users will still need to adapt. The need to exercise a balanced and fair sense of scepticism toward online material is becoming more urgent.
This means adopting the right level of trust and distrust[27] in digital environments.
The philosopher Aristotle spoke of practical wisdom[28]. Through experience, education and practice, practically wise people develop the skills to judge well in life. Because they tend to avoid poor judgement, including excessive scepticism and naivete, they are better able to flourish and do well by others.
We need to hold tech companies and platforms to account for harm and deception caused by AI. We also need to educate ourselves, our communities, and the next generation to judge well and develop some practical wisdom[29] in a world awash with AI content.
References
- ^ recently criticised (www.theguardian.com)
- ^ launch trailer (www.youtube.com)
- ^ AI slop (theconversation.com)
- ^ deceptively passed off (www.esafety.gov.au)
- ^ growing criticisms (www.unimelb.edu.au)
- ^ wrongfully accused (www.theguardian.com)
- ^ wrongly disqualified (news.artnet.com)
- ^ falsely accused (andrewggibson.com)
- ^ are far from foolproof (www.zdnet.com)
- ^ they may never be entirely reliable (theconversation.com)
- ^ Recent discussions (www.rollingstone.com)
- ^ Google's SynthID is the latest tool for catching AI-made content. What is AI 'watermarking' and does it work? (theconversation.com)
- ^ replacing them (dl.acm.org)
- ^ tech companies using their labour (thecon.ai)
- ^ without consent, credit or compensation (theconversation.com)
- ^ threaten Indigenous inclusion (theconversation.com)
- ^ wrongly accused of using AI can suffer (andrewggibson.com)
- ^ deepfakes (www.bbc.co.uk)
- ^ linked to scams and dangerous misinformation (theconversation.com)
- ^ authentic content (www.bbc.com)
- ^ are fake (politicalsciencenow.com)
- ^ democracy and social cohesion (theconversation.com)
- ^ cultural and social technology (www.science.org)
- ^ fooled (neurosciencenews.com)
- ^ increasingly realistic (journals.sagepub.com)
- ^ transparency requirements (www.austlii.edu.au)
- ^ trust and distrust (dl.acm.org)
- ^ practical wisdom (plato.stanford.edu)
- ^ practical wisdom (global.oup.com)