AI systems and humans ‘see’ the world differently – and that’s why AI images look so garish

  • Written by T.J. Thomson, Senior Lecturer in Visual Communication & Digital Media, RMIT University

How do computers see the world? It’s not quite the same way humans do.

Recent advances in generative artificial intelligence (AI) make it possible to do more things with computer image processing. You might ask an AI tool to describe an image, for example, or to create an image from a description you provide.

As generative AI tools and services become more embedded in day-to-day life, knowing more about how computer vision compares to human vision is becoming essential.

My latest research[1], published in Visual Communication, uses AI-generated descriptions and images to get a sense of how AI models “see”. It reveals a bright, sensational world of generic images quite different from the human visual realm.

Algorithms see in a very different way to humans. Elise Racine / Better Images of AI / Emotion: Joy, CC BY[2][3]

Comparing human and computer vision

Humans see when light waves enter our eyes through the iris, cornea and lens. Light is converted into electrical signals by a light-sensitive surface called the retina inside the eyeball, and then our brains interpret[4] these signals into images we see.

Our vision focuses on key aspects such as colour, shape, movement and depth. Our eyes let us detect changes in the environment and identify potential threats and hazards.

Computers work very differently. They process images by standardising them, inferring the context of an image through metadata (such as time and location information in an image file), and comparing images to other images they have previously learned about. Computers focus on things such as edges, corners or textures present in the image. They also look for patterns and try to classify objects.
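
As a rough illustration of the low-level features computers focus on, the sketch below applies a Sobel edge filter to a greyscale image. The choice of Python with Pillow and NumPy is mine for illustration; the article doesn’t specify any particular software.

```python
# A minimal sketch of low-level "computer seeing": edge detection with a
# Sobel filter. The libraries (Pillow + NumPy) are illustrative choices,
# not something the article names.
import numpy as np
from PIL import Image

def sobel_edges(path: str) -> np.ndarray:
    """Return an edge-magnitude map for the image at `path`."""
    grey = np.asarray(Image.open(path).convert("L"), dtype=float)

    # Sobel kernels approximate horizontal and vertical intensity gradients.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T

    gx = np.zeros_like(grey)
    gy = np.zeros_like(grey)
    h, w = grey.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = grey[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()

    # Edge strength is the gradient magnitude: strong responses mark the
    # boundaries, corners and textures a vision system keys on.
    return np.hypot(gx, gy)
```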

Solving CAPTCHAs helps prove you’re human and also helps computers learn how to ‘see’. CAPTCHA

You’ve likely helped computers learn how to “see” by completing online CAPTCHA tests[5].

These are typically used to help computers differentiate between humans and bots. But they’re also used to train and improve machine learning algorithms.

So, when you’re asked to “select all the images with a bus”, you’re helping software learn the difference between different types of vehicles as well as proving you’re human.

Exploring how computers ‘see’ differently

In my new research, I asked a large language model to describe two visually distinct sets of human-created images.

One set contained hand-drawn illustrations while the other was made up of camera-produced photographs.

Some of the nuances of algorithmic vision can be uncovered by asking an AI tool to describe images and then visualise those same descriptions. T.J. Thomson, Author provided (no reuse)

I fed the descriptions back into an AI tool and asked it to visualise what it had described. I then compared the original human-made images to the computer-generated ones.
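
The article doesn’t name the specific tools used, but the general describe-then-visualise loop can be sketched as below. Here OpenAI’s Python client stands in as an assumption; the model names, prompts and file names are placeholders, not the study’s actual setup.

```python
# A sketch of the describe-then-visualise round trip. The AI tools, model
# names, prompts and file names here are stand-in assumptions, not the
# study's setup.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def describe_image(path: str) -> str:
    """Ask a vision-capable model to describe a local image."""
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

def visualise_description(description: str) -> str:
    """Feed a description back in and return a generated image's URL."""
    result = client.images.generate(
        model="dall-e-3",  # assumed model
        prompt=description,
    )
    return result.data[0].url

# Round trip: original image -> description -> regenerated image.
# Comparing the two shows what the model noticed and what it ignored.
url = visualise_description(describe_image("original.jpg"))
```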

The resulting descriptions noted that the hand-drawn images were illustrations, but they didn’t identify the other images as photographs or mention their high level of realism. This suggests AI tools treat photorealism as the default visual style unless specifically prompted otherwise.

Cultural context was largely absent from the descriptions. The AI tool either couldn’t or wouldn’t infer cultural context from the presence of, for example, Arabic or Hebrew writing in the images. This underscores the dominance of some languages, such as English, in AI tools’ training data.

While colour is vital to human vision, it was largely ignored in the AI tool’s image descriptions. Visual depth and perspective were also largely overlooked.

The AI images were more boxy than the hand-drawn illustrations, which used more organic shapes.

The AI-generated images were much more boxy than the hand-drawn illustrations, which used more organic shapes and had a different relationship between positive and negative space. Left: Medar de la Cruz; right: ChatGPT

The AI images were also much more saturated than the source images: they contained brighter, more vivid colours. This reveals the prevalence of stock photos, which tend to be more “contrasty”, in AI tools’ training data[6].
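
One simple way to quantify that saturation gap (my illustration, not the study’s published method) is to convert each image to HSV and compare the mean of the saturation channel:

```python
# A rough saturation check (illustrative, not the study's method):
# compare the mean HSV saturation of a source image and an AI image.
# The file names are hypothetical.
import numpy as np
from PIL import Image

def mean_saturation(path: str) -> float:
    """Mean of the HSV saturation channel, scaled to 0-1."""
    hsv = np.asarray(Image.open(path).convert("HSV"), dtype=float)
    return float(hsv[..., 1].mean() / 255.0)

# Under the article's finding, the AI image should score noticeably higher.
print(mean_saturation("human_photo.jpg"), mean_saturation("ai_image.png"))
```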

The AI images were also more sensationalist. A single car in the original image became one of a long column of cars in the AI version. AI seems to exaggerate details not just in text but also in visual form.

The AI-generated images were more sensationalist and contrasty than the human-created photographs. Left: Ahmed Zakot; right: ChatGPT

The generic nature of the AI images means they can be used in many contexts and across countries. But the lack of specificity also means audiences might perceive them[7] as less authentic and engaging.

Deciding when to use human or computer vision

This research supports the notion that humans and computers “see” differently. Knowing when to rely on computer or human vision to describe or create images can be a competitive advantage.

While AI-generated images can be eye-catching, they can also come across as hollow upon closer inspection. This can limit their value.

Images are adept at sparking an emotional reaction, and audiences might find human-created images that authentically reflect specific conditions more engaging[8] than computer-generated attempts.

However, the capabilities of AI can make it an attractive option for quickly labelling large data sets and helping humans categorise them.

Ultimately, there’s a role for both human and AI vision. Knowing more about the opportunities and limits of each can help keep you safer, more productive, and better equipped to communicate in the digital age.

References

  1. ^ research (doi.org)
  2. ^ Elise Racine / Better Images of AI / Emotion: Joy (betterimagesofai.org)
  3. ^ CC BY (creativecommons.org)
  4. ^ brains interpret (www.taylorfrancis.com)
  5. ^ CAPTCHA tests (www.captcha.net)
  6. ^ training data (www.tandfonline.com)
  7. ^ audiences might perceive them (apo.org.au)
  8. ^ more engaging (apo.org.au)

Read more https://theconversation.com/ai-systems-and-humans-see-the-world-differently-and-thats-why-ai-images-look-so-garish-260178
