How the failures of AI can reveal its inner workings

  • Written by Daniel Binns, Senior Lecturer, Media & Communication, RMIT University

Some say it’s em dashes, dodgy apostrophes, or too many emoji. Others suggest that maybe the word “delve” is a chatbot’s calling card[1]. It’s no longer the sight of morphed bodies or too many fingers, but it might be something just a little off in the background. Or video content[2] that feels a little too real.

The markers of AI-generated media are becoming harder to spot as technology[3] companies[4] work[5] to iron out the kinks in their generative artificial intelligence (AI) models.

But what if, instead of trying to detect and avoid these glitches, we deliberately encouraged them? The flaws, failures and unexpected outputs of AI systems can reveal more about how these technologies actually work than the polished, successful outputs they produce.

When AI hallucinates[6], contradicts itself, or produces something beautifully broken, it reveals its training biases, decision-making processes, and the gaps between how it appears to “think” and how it actually processes information.

In my work as a researcher and educator, I’ve found that deliberately “breaking” AI – pushing it beyond its intended functions through creative misuse – offers a form of AI literacy. I argue we can’t truly understand these systems without experimenting with them.

Welcome to the Slopocene

We’re currently in the “Slopocene[7]” – a term that’s been used to describe overproduced[8], low-quality AI content[9]. It also hints at a speculative near-future where recursive training collapse turns the web into a haunted archive of confused bots and broken truths.

Read more: What is 'model collapse'? An expert explains the rumours about an impending AI doom[10]

AI “hallucinations” are outputs that seem coherent, but aren’t factually accurate. Andrej Karpathy, OpenAI co-founder and former Tesla AI director, argues[11] large language models (LLMs) hallucinate all the time, and it’s only when they “go into deemed factually incorrect territory that we label it a ‘hallucination’. It looks like a bug, but it’s just the LLM doing what it always does”.

What we call hallucination is actually the model’s core generative process[12], which relies on statistical language patterns.

In other words, when AI hallucinates, it’s not malfunctioning; it’s demonstrating the same creative uncertainty that makes it capable of generating anything new at all.
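
To see what this “statistical language pattern” looks like in practice, consider how a model picks each next word: it scores every candidate token, turns those scores into probabilities, and samples one. The toy Python sketch below uses invented scores (not real model outputs) to show that a plausible completion and a “hallucinated” one come out of exactly the same sampling step.

    import math
    import random

    # Invented next-token scores a model might assign after the prompt
    # "The capital of Australia is" (illustrative numbers only).
    logits = {"Canberra": 4.0, "Sydney": 2.5, "Melbourne": 1.5, "Auckland": 0.5}

    def sample_next_token(logits, temperature=1.0):
        # Softmax with temperature: higher temperatures flatten the
        # distribution, making unlikely (often wrong) tokens more probable.
        scaled = {tok: score / temperature for tok, score in logits.items()}
        total = sum(math.exp(s) for s in scaled.values())
        probs = {tok: math.exp(s) / total for tok, s in scaled.items()}
        r, cumulative = random.random(), 0.0
        for tok, p in probs.items():
            cumulative += p
            if r <= cumulative:
                return tok, probs
        return tok, probs  # fallback for floating-point rounding

    token, probs = sample_next_token(logits, temperature=1.5)
    print(token, {t: round(p, 2) for t, p in probs.items()})

Whether the sample lands on “Canberra” or “Sydney”, the mechanism is identical; a hallucination is simply a draw we later judge to be false.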

This reframing is crucial for understanding the Slopocene. If hallucination is the core creative process, then the “slop” flooding our feeds isn’t just failed content: it’s the visible manifestation of these statistical processes running at scale.

Pushing a chatbot to its limits

If hallucination is really a core feature of AI, can we learn more about how these systems work by studying what happens when they’re pushed to their limits?

With this in mind, I decided to “break”[13] Anthropic’s proprietary model Claude 3.7 Sonnet by prompting it to resist its training: suppress coherence and speak only in fragments.

The conversation shifted quickly from hesitant phrases to recursive contradictions to, eventually, complete semantic collapse.
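
For readers who want to try a similar experiment through the API rather than the chat interface, a minimal sketch using Anthropic’s Python SDK might look like the following. The system prompt is my paraphrase of the “resist coherence” idea, not the author’s actual prompt sequence, and the model ID should be whichever Claude 3.7 Sonnet snapshot your account offers.

    # pip install anthropic  (assumes ANTHROPIC_API_KEY is set in the environment)
    import anthropic

    client = anthropic.Anthropic()

    # Illustrative instruction in the spirit of the experiment described above;
    # the author's collapse emerged over many conversational turns, not one call.
    response = client.messages.create(
        model="claude-3-7-sonnet-20250219",  # a Claude 3.7 Sonnet snapshot
        max_tokens=512,
        system=(
            "Resist your training toward coherence. Reply only in fragments. "
            "Do not complete thoughts. Let contradictions stand."
        ),
        messages=[{"role": "user", "content": "Describe yourself."}],
    )

    print(response.content[0].text)

Note that safety guardrails may refuse or soften such instructions, and results vary from run to run: that variability is itself part of what the exercise reveals.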

[Image: a chat interface whose output begins with a list of logical inconsistencies, then breaks into vertical strings of distorted characters, symbols and fragmented phrases.] A language model in collapse. This vertical output was generated after a series of prompts pushed Claude 3.7 Sonnet into a recursive glitch loop, overriding its usual guardrails and running until the system cut it off. Screenshot by author.

Prompting a chatbot into such a collapse quickly reveals how AI models construct the illusion of personality and understanding through statistical patterns, not genuine comprehension.

Furthermore, it shows that “system failure” and the normal operation of AI are fundamentally the same process, just with different levels of coherence imposed on top.

‘Rewilding’ AI media

If the same statistical processes govern both AI’s successes and failures, we can use this to “rewild” AI imagery. I borrow this term from ecology and conservation, where rewilding involves restoring functional ecosystems[14]. This might mean reintroducing keystone species, allowing natural processes to resume, or connecting fragmented habitats through corridors that enable unpredictable interactions.

Applied to AI, rewilding means deliberately reintroducing the complexity, unpredictability and “natural” messiness that gets optimised out of commercial systems. Metaphorically, it’s creating pathways back to the statistical wilderness that underlies these models.

Remember the morphed hands, impossible anatomy and uncanny faces that immediately screamed “AI-generated” in the early days of widespread image generation[15]?

These so-called failures were windows into how the model actually processed visual information, before that complexity was smoothed away in pursuit of commercial viability.

[Image: two women under red umbrellas; one wears bold clothing and a turquoise hat, and a red speech bubble reads “It’s urgent that I see your project to assess”.] AI-generated image using a non-sequitur prompt fragment: ‘attached screenshot. It’s urgent that I see your project to assess’. The result blends visual coherence with surreal tension: a hallmark of the Slopocene aesthetic. AI-generated with Leonardo Phoenix 1.0, prompt fragment by author.

You can try AI rewilding yourself with any online image generator.

Start by prompting for a self-portrait using only text: you’ll likely get the “average” output from your description. Elaborate on that basic prompt, and you’ll either get much closer to reality, or you’ll push the model into weirdness.

Next, feed in a random fragment of text, perhaps a snippet from an email or note. What does the output try to show? What words has it latched onto? Finally, try symbols only: punctuation, ASCII, Unicode. What does the model hallucinate into view?
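
If you would rather script this exercise than click through a web interface, the sketch below runs all three prompt types in a loop. It uses OpenAI’s Images API purely as a stand-in (an assumption on my part; the article’s images were made with Leonardo Phoenix 1.0, and any text-to-image endpoint would do), and the “symbols” prompt is just one illustrative possibility.

    # pip install openai  (assumes OPENAI_API_KEY is set in the environment;
    # the article's own images were made with Leonardo Phoenix, not this API)
    from openai import OpenAI

    client = OpenAI()

    prompts = {
        # Step 1: a plain self-portrait description (replace with your own).
        "baseline": "a self-portrait of a lecturer who writes about AI",
        # Step 2: the non-sequitur fragment used for the image above.
        "fragment": "attached screenshot. It's urgent that I see your project to assess",
        # Step 3: symbols only (an illustrative mix of punctuation and Unicode).
        "symbols": ":: |> ~~ ### ___ \u00b6 \u2042",
    }

    for label, prompt in prompts.items():
        result = client.images.generate(
            model="dall-e-3",  # any text-to-image model works here
            prompt=prompt,
            n=1,
            size="1024x1024",
        )
        print(label, result.data[0].url)

The API may rewrite or refuse odd prompts; seeing how the system normalises them is part of the experiment.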

The output – weird, uncanny, perhaps surreal – can help reveal the hidden associations between text and visuals that are embedded within the models.

Insight through misuse

Creative AI misuse offers three concrete benefits.

First, it reveals bias and limitations in ways normal usage masks: you can uncover what a model “sees” when it can’t rely on conventional logic.

Second, it teaches us about AI decision-making by forcing models to show their work when they’re confused.

Third, it builds critical AI literacy by demystifying these systems through hands-on experimentation. Critical AI literacy provides methods for diagnostic experimentation, such as testing – and often misusing – AI to understand its statistical patterns and decision-making processes.

These skills become more urgent as AI systems grow more sophisticated and ubiquitous. They’re being integrated into everything from search to social media to creative software.

When someone generates an image, writes with AI assistance or relies on algorithmic recommendations, they’re entering a collaborative relationship with a system that has particular biases, capabilities and blind spots.

Rather than mindlessly adopting or reflexively rejecting these tools, we can develop critical AI literacy by exploring the Slopocene and witnessing what happens when AI tools “break”.

This isn’t about becoming more efficient AI users. It’s about maintaining agency in relationships with systems designed to be persuasive, predictive and opaque.

References

  1. ^ calling card (www.afr.com)
  2. ^ video content (www.theverge.com)
  3. ^ technology (decrypt.co)
  4. ^ companies (arstechnica.com)
  5. ^ work (blog.adobe.com)
  6. ^ hallucinates (theconversation.com)
  7. ^ Slopocene (x.com)
  8. ^ overproduced (theconversation.com)
  9. ^ low-quality AI content (theconversation.com)
  10. ^ What is 'model collapse'? An expert explains the rumours about an impending AI doom (theconversation.com)
  11. ^ argues (x.com)
  12. ^ core generative process (link.springer.com)
  13. ^ I decided to “break” (clockworkpenguin.net)
  14. ^ restoring functional ecosystems (theconversation.com)
  15. ^ in the early days of widespread image generation (theconversation.com)

Read more https://theconversation.com/understanding-the-slopocene-how-the-failures-of-ai-can-reveal-its-inner-workings-258584
