AI systems and humans ‘see’ the world differently – and that’s why AI images look so garish

  • Written by T.J. Thomson, Senior Lecturer in Visual Communication & Digital Media, RMIT University

How do computers see the world? It’s not quite the same way humans do.

Recent advances in generative artificial intelligence (AI) make it possible to do more things with computer image processing. You might ask an AI tool to describe an image, for example, or to create an image from a description you provide.

As generative AI tools and services become more embedded in day-to-day life, knowing more about how computer vision compares to human vision is becoming essential.

My latest research[1], published in Visual Communication, uses AI-generated descriptions and images to get a sense of how AI models “see” – and reveals a bright, sensational world of generic images quite different from the human visual realm.

Algorithms see in a very different way to humans. Elise Racine / Better Images of AI / Emotion: Joy, CC BY[2][3]

Comparing human and computer vision

Humans see when light waves enter our eyes through the cornea, iris and lens. Light is converted into electrical signals by a light-sensitive surface called the retina inside the eyeball, and our brains then interpret[4] these signals as the images we see.

Our vision focuses on key aspects such as colour, shape, movement and depth. Our eyes let us detect changes in the environment and identify potential threats and hazards.

Computers work very differently. They process images by standardising them, inferring the context of an image through metadata (such as time and location information in an image file), and comparing images to other images they have previously learned about. Computers focus on things such as edges, corners or textures present in the image. They also look for patterns and try to classify objects.
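To make that concrete, here is a minimal sketch of the kind of low-level feature extraction described above: a Sobel filter that picks out edges by measuring how sharply pixel intensity changes. The libraries (NumPy and Pillow) and the filename are assumptions for illustration, not the method of any particular AI system.

```python
import numpy as np
from PIL import Image

# Load an image and convert it to a greyscale array of intensities.
# "photo.jpg" is a placeholder filename for this sketch.
pixels = np.asarray(Image.open("photo.jpg").convert("L"), dtype=float)

# Sobel kernels approximate horizontal and vertical intensity gradients.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
sobel_y = sobel_x.T

def filter2d(img, kernel):
    """Naive 3x3 sliding-window filter, enough to show the idea."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

# Edge strength is the gradient magnitude at each pixel: large values
# mark the edges and corners that computer vision systems key on.
gx = filter2d(pixels, sobel_x)
gy = filter2d(pixels, sobel_y)
edges = np.hypot(gx, gy)
```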

Solving CAPTCHAs helps prove you’re human and also helps computers learn how to ‘see’. CAPTCHA

You’ve likely helped computers learn how to “see” by completing online CAPTCHA tests[5].

These are typically used to help computers differentiate between humans and bots. But they’re also used to train and improve machine learning algorithms.

So, when you’re asked to “select all the images with a bus”, you’re helping software learn the difference between different types of vehicles as well as proving you’re human.
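In machine-learning terms, each CAPTCHA answer becomes a labelled training example. The toy sketch below shows the principle with scikit-learn and random stand-in data (real systems train far larger neural networks on millions of genuine answers):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in for CAPTCHA data: flattened 32x32 greyscale tiles, each
# labelled 1 if a human clicked "contains a bus", else 0. The values
# here are random placeholders, purely for illustration.
rng = np.random.default_rng(0)
tiles = rng.random((200, 32 * 32))
labels = rng.integers(0, 2, 200)

# A simple classifier learns to map tile pixels to the human labels.
model = LogisticRegression(max_iter=1000).fit(tiles, labels)

# The trained model can now guess "bus or not" for an unseen tile.
print(model.predict(tiles[:1]))
```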

Exploring how computers ‘see’ differently

In my new research, I asked a large language model to describe two visually distinct sets of human-created images.

One set contained hand-drawn illustrations while the other was made up of camera-produced photographs.

Some of the nuances of algorithmic vision can be uncovered by asking an AI tool to describe images and then visualise those same descriptions. T.J. Thomson, Author provided (no reuse)

I fed the descriptions back into an AI tool and asked it to visualise what it had described. I then compared the original human-made images to the computer-generated ones.
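In outline, that loop can be automated along the lines of the sketch below. It assumes the OpenAI Python SDK and current model names (gpt-4o for description, dall-e-3 for generation); the study’s actual tools and prompts may well differ.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def describe_then_redraw(image_url: str) -> str:
    """Ask a model to describe an image, then visualise its own description."""
    # Step 1: get a textual description of the human-made image.
    description = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in detail."},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    ).choices[0].message.content

    # Step 2: feed the description back in as a prompt and generate a
    # new image, which can then be compared against the original.
    result = client.images.generate(model="dall-e-3", prompt=description)
    return result.data[0].url
```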

The resulting descriptions noted that the hand-drawn images were illustrations, but never identified the other set as photographs or mentioned their high level of realism. This suggests AI tools treat photorealism as the default visual style unless specifically prompted otherwise.

Cultural context was largely absent from the descriptions. The AI tool either couldn’t or wouldn’t infer cultural context from the presence of, for example, Arabic or Hebrew writing in the images. This underscores the dominance of some languages, such as English, in AI tools’ training data.

While colour is vital to human vision, it too was largely ignored in the AI tools’ image descriptions. Visual depth and perspective were also largely ignored.

The AI images were more boxy than the hand-drawn illustrations, which used more organic shapes.

The AI-generated images were much more boxy than the hand-drawn illustrations, which used more organic shapes and had a different relationship between positive and negative space. Left: Medar de la Cruz; right: ChatGPT

The AI images were also much more saturated than the source images: they contained brighter, more vivid colours. This reflects the prevalence of stock photos, which tend to be more “contrasty”, in AI tools’ training data[6].
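Saturation differences of this kind can be measured directly. A minimal sketch with Pillow and NumPy, comparing mean saturation in HSV colour space (the filenames are placeholders, not files from the study):

```python
import numpy as np
from PIL import Image

def mean_saturation(path: str) -> float:
    """Average saturation (0-255) across all pixels of an image."""
    hsv = np.asarray(Image.open(path).convert("RGB").convert("HSV"))
    return float(hsv[..., 1].mean())  # channel 1 of HSV is saturation

# Placeholder filenames: a human-made source image and its AI remake.
print("original:", mean_saturation("original.jpg"))
print("generated:", mean_saturation("generated.png"))
```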

The AI images were also more sensationalist. A single car in the original image became one of a long column of cars in the AI version. AI seems to exaggerate details not just in text but also in visual form.

The AI-generated images were more sensationalist and contrasty than the human-created photographs. Left: Ahmed Zakot; right: ChatGPT

The generic nature of the AI images means they can be used in many contexts and across countries. But the lack of specificity also means audiences might perceive them[7] as less authentic and engaging.

Deciding when to use human or computer vision

This research supports the notion that humans and computers “see” differently. Knowing when to rely on computer or human vision to describe or create images can be a competitive advantage.

While AI-generated images can be eye-catching, they can also come across as hollow upon closer inspection. This can limit their value.

Images are adept at sparking an emotional reaction, and audiences might find human-created images that authentically reflect specific conditions more engaging[8] than computer-generated attempts.

However, the capabilities of AI can make it an attractive option for quickly labelling large data sets and helping humans categorise them.

Ultimately, there’s a role for both human and AI vision. Knowing more about the opportunities and limits of each can help keep you safer, more productive, and better equipped to communicate in the digital age.

References

  1. ^ research (doi.org)
  2. ^ Elise Racine / Better Images of AI / Emotion: Joy (betterimagesofai.org)
  3. ^ CC BY (creativecommons.org)
  4. ^ brains interpret (www.taylorfrancis.com)
  5. ^ CAPTCHA tests (www.captcha.net)
  6. ^ training data (www.tandfonline.com)
  7. ^ audiences might perceive them (apo.org.au)
  8. ^ more engaging (apo.org.au)

Read more https://theconversation.com/ai-systems-and-humans-see-the-world-differently-and-thats-why-ai-images-look-so-garish-260178
