AI systems and humans ‘see’ the world differently – and that’s why AI images look so garish

  • Written by: T.J. Thomson, Senior Lecturer in Visual Communication & Digital Media, RMIT University

How do computers see the world? It’s not quite the same way humans do.

Recent advances in generative artificial intelligence (AI) make it possible to do more things with computer image processing. You might ask an AI tool to describe an image, for example, or to create an image from a description you provide.

As generative AI tools and services become more embedded in day-to-day life, knowing more about how computer vision compares to human vision is becoming essential.

My latest research[1], published in Visual Communication, uses AI-generated descriptions and images to get a sense of how AI models “see”. It reveals a bright, sensational world of generic images quite different from the human visual realm.

Algorithms see in a very different way to humans. Elise Racine / Better Images of AI / Emotion: Joy, CC BY[2][3]

Comparing human and computer vision

Humans see when light waves enter our eyes through the iris, cornea and lens. Light is converted into electrical signals by a light-sensitive surface called the retina inside the eyeball, and then our brains interpret[4] these signals into images we see.

Our vision focuses on key aspects such as colour, shape, movement and depth. Our eyes let us detect changes in the environment and identify potential threats and hazards.

Computers work very differently. They process images by standardising them, inferring the context of an image through metadata (such as time and location information in an image file), and comparing images to other images they have previously learned about. Computers focus on things such as edges, corners or textures present in the image. They also look for patterns and try to classify objects.
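To make this concrete, here is a minimal sketch, using the OpenCV library, of the kind of low-level features a computer looks for: edges and corners, rather than the scenes and meanings humans perceive. The file name is a placeholder; any photograph will do.

```python
# A minimal sketch of low-level feature detection with OpenCV.
# "street_scene.jpg" is a placeholder file name.
import cv2

# Load the image and convert it to greyscale, since these detectors
# work on intensity values rather than colour.
image = cv2.imread("street_scene.jpg")
grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Canny edge detection flags pixels where brightness changes sharply.
edges = cv2.Canny(grey, 100, 200)

# Harris corner detection scores how "corner-like" each pixel's
# neighbourhood is (block size 2, Sobel aperture 3, k = 0.04).
corners = cv2.cornerHarris(grey.astype("float32"), 2, 3, 0.04)

print("Pixels flagged as edges:", int((edges > 0).sum()))
print("Pixels flagged as strong corners:", int((corners > 0.01 * corners.max()).sum()))
```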

Solving CAPTCHAs helps prove you’re human and also helps computers learn how to ‘see’. CAPTCHA

You’ve likely helped computers learn how to “see” by completing online CAPTCHA tests[5].

These are typically used to help computers differentiate between humans and bots. But they’re also used to train and improve machine learning algorithms.

So, when you’re asked to “select all the images with a bus”, you’re helping software learn the difference between different types of vehicles as well as proving you’re human.

Exploring how computers ‘see’ differently

In my new research, I asked a large language model to describe two visually distinct sets of human-created images.

One set contained hand-drawn illustrations while the other was made up of camera-produced photographs.

Some of the nuances of algorithmic vision can be uncovered by asking an AI tool to describe images and then visualise those same descriptions. T.J. Thomson, Author provided (no reuse)

I fed the descriptions back into an AI tool and asked it to visualise what it had described. I then compared the original human-made images to the computer-generated ones.
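In code, the describe-then-visualise loop might look something like the following sketch. The helper functions describe_image and generate_image are hypothetical stand-ins for whichever vision-language and text-to-image services you have access to; the study does not prescribe a particular implementation.

```python
# A rough sketch of the describe-then-visualise comparison.
# `describe_image` and `generate_image` are hypothetical helpers you
# would supply (e.g. wrappers around a multimodal model and an image
# generator); they are not part of any specific library.
from pathlib import Path

def compare_via_ai(image_paths, describe_image, generate_image, out_dir="ai_versions"):
    """For each human-made image, get an AI description, ask an AI image
    generator to visualise that description, and save the result for
    side-by-side comparison with the original."""
    Path(out_dir).mkdir(exist_ok=True)
    pairs = []
    for path in image_paths:
        description = describe_image(path)      # AI tool describes the image
        ai_image = generate_image(description)  # AI tool visualises the description
        out_path = Path(out_dir) / Path(path).name
        ai_image.save(out_path)
        pairs.append((path, description, out_path))
    return pairs
```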

The resulting descriptions noted that the hand-drawn images were illustrations, but didn’t describe the other images as photographs or mention their high level of realism. This suggests AI tools treat photorealism as the default visual style unless specifically prompted otherwise.

Cultural context was largely absent from the descriptions. The AI tool either couldn’t or wouldn’t infer cultural context from the presence of, for example, Arabic or Hebrew writing in the images. This underscores the dominance of some languages, like English, in AI tools’ training data.

While colour is vital to human vision, it too was largely ignored in the AI tools’ image descriptions. Visual depth and perspective were also largely ignored.

The AI images were more boxy than the hand-drawn illustrations, which used more organic shapes.

The AI-generated images were much more boxy than the hand-drawn illustrations, which used more organic shapes and had a different relationship between positive and negative space. Left: Medar de la Cruz; right: ChatGPT

The AI images were also much more saturated than the source images: they contained brighter, more vivid colours. This reveals the prevalence of stock photos, which tend to be more “contrasty”, in AI tools’ training data[6].
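The saturation difference can be checked numerically. Below is a small sketch, assuming the Pillow and NumPy libraries, that converts an image to HSV and reports its mean saturation; the file names are placeholders for a human-made source image and its AI-generated counterpart.

```python
# A small sketch for comparing image saturation with Pillow and NumPy.
# File names are placeholders.
import numpy as np
from PIL import Image

def mean_saturation(path):
    """Average saturation (0-255) across all pixels of an image."""
    hsv = np.asarray(Image.open(path).convert("RGB").convert("HSV"))
    return float(hsv[..., 1].mean())

print("Human-made image saturation:", round(mean_saturation("original_photo.jpg"), 1))
print("AI-generated image saturation:", round(mean_saturation("ai_version.png"), 1))
```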

The AI images were also more sensationalist. A single car in the original image became one of a long column of cars in the AI version. AI seems to exaggerate details not just in text but also in visual form.

The AI-generated images were more sensationalist and contrasty than the human-created photographs. Left: Ahmed Zakot; right: ChatGPT

The generic nature of the AI images means they can be used in many contexts and across countries. But the lack of specificity also means audiences might perceive them[7] as less authentic and engaging.

Deciding when to use human or computer vision

This research supports the notion that humans and computers “see” differently. Knowing when to rely on computer or human vision to describe or create images can be a competitive advantage.

While AI-generated images can be eye-catching, they can also come across as hollow upon closer inspection. This can limit their value.

Images are adept at sparking an emotional reaction, and audiences might find human-created images that authentically reflect specific conditions more engaging[8] than computer-generated attempts.

However, the capabilities of AI can make it an attractive option for quickly labelling large data sets and helping humans categorise them.

Ultimately, there’s a role for both human and AI vision. Knowing more about the opportunities and limits of each can help keep you safer, more productive, and better equipped to communicate in the digital age.

References

  1. ^ research (doi.org)
  2. ^ Elise Racine / Better Images of AI / Emotion: Joy (betterimagesofai.org)
  3. ^ CC BY (creativecommons.org)
  4. ^ brains interpret (www.taylorfrancis.com)
  5. ^ CAPTCHA tests (www.captcha.net)
  6. ^ training data (www.tandfonline.com)
  7. ^ audiences might perceive them (apo.org.au)
  8. ^ more engaging (apo.org.au)

Read more https://theconversation.com/ai-systems-and-humans-see-the-world-differently-and-thats-why-ai-images-look-so-garish-260178
