7 examples of bias in AI-generated images

  • Written by T.J. Thomson, Senior Lecturer in Visual Communication & Digital Media, RMIT University

If you’ve been online much recently, chances are you’ve seen some of the fantastical imagery created by text-to-image generators such as Midjourney and DALL-E 2. This includes everything from the naturalistic[1] (think a soccer player’s headshot) to the surreal[2] (think a dog in space).

Creating images using AI generators has never been simpler. At the same time, however, these outputs can reproduce biases and deepen inequalities, as our latest research[3] shows.

How do AI image generators work?

AI-based image generators use machine-learning models that take a text input and produce one or more images matching the description. Training these models requires massive datasets with millions of images.

Although Midjourney is opaque about exactly how its algorithms work, most AI image generators use a process called diffusion. Diffusion models are trained by adding random “noise” to training images and then learning to recover the originals by removing that noise. To generate a new image, the model starts from pure noise and strips it away step by step, guided by the text prompt, until an image emerges.
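
To make the idea concrete, here is a heavily simplified sketch of the two halves of a diffusion model: corrupting images with noise during training, and removing noise step by step during generation. This is not Midjourney’s actual code (which is not public); the `denoiser` model and the prompt conditioning are placeholders standing in for a large trained neural network.

```python
import numpy as np

# --- Training side: teach a model to undo small amounts of noise ---
def add_noise(image, noise_level):
    """Forward process: blend an image with random Gaussian noise."""
    noise = np.random.randn(*image.shape)
    noisy = np.sqrt(1 - noise_level) * image + np.sqrt(noise_level) * noise
    return noisy, noise

def training_step(denoiser, image, prompt_embedding):
    """One training step: corrupt the image, ask the model to predict the noise."""
    noise_level = np.random.uniform(0.01, 0.99)
    noisy, true_noise = add_noise(image, noise_level)
    predicted_noise = denoiser(noisy, noise_level, prompt_embedding)
    # In practice this loss is used to update the denoiser's weights.
    return np.mean((predicted_noise - true_noise) ** 2)

# --- Generation side: start from pure noise and remove it step by step ---
def generate(denoiser, prompt_embedding, shape=(64, 64, 3), steps=50):
    image = np.random.randn(*shape)  # begin with pure noise
    for step in reversed(range(steps)):
        noise_level = (step + 1) / steps
        predicted_noise = denoiser(image, noise_level, prompt_embedding)
        # Peel away a little of the predicted noise, guided by the text prompt.
        image = image - predicted_noise / steps
    return image
```

Whatever patterns dominate the training images (who appears in them, where, and with what) are baked into the denoiser’s weights, which is why the generated outputs can inherit those patterns by default.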

This is different to the large language models that underpin other AI tools such as ChatGPT. Large language models are trained on unlabelled text data, which they analyse to learn language patterns and produce human-like responses to prompts.

Read more: AI to Z: all the terms you need to know to keep up in the AI hype age[4]

How does bias happen?

In generative AI, the input influences the output. If a user specifies they only want to include people of a certain skin tone or gender in their image, the model will take this into account.

Beyond this, however, the model will also have a default tendency to return certain kinds of outputs. This is usually the result of how the underlying algorithm is designed, or a lack of diversity in the training data.

Our study explored how Midjourney visualises seemingly generic terms in the context of specialised media professions (such as “news analyst”, “news commentator” and “fact-checker”) and non-specialised ones (such as “journalist”, “reporter”, “correspondent” and “the press”).

We started analysing the results in August last year. Six months later, to see if anything had changed over time, we generated additional sets of images for the same prompts.

In total we analysed more than 100 AI-generated images over this period. The results were largely consistent over time. Here are seven biases that showed up in our results.

1 and 2. Ageism and sexism

For non-specialised job titles, Midjourney returned images of only younger men and women. For specialised roles, both younger and older people were shown – but the older people were always men.

These results implicitly reinforce a number of biases, including the assumption that older people do not (or cannot) work in non-specialised roles, that only older men are suited for specialised work, and that less specialised work is a woman’s domain.

There were also notable differences in how men and women were presented. For example, women were younger and wrinkle-free, while men were “allowed” to have wrinkles.

The AI also appeared to present gender as a binary, rather than show examples of more fluid gender expression.

AI showed women for inputs including non-specialised job titles such as journalist (right). It also only showed older men (but not older women) for specialised roles such as news analyst (left). Midjourney

3. Racial bias

All the images returned for terms such as “journalist”, “reporter” or “correspondent” exclusively featured light-skinned people. This trend of assuming whiteness by default is evidence of racial hegemony built into the system.

This may reflect a lack of diversity and representation in the underlying training data – a factor that is in turn influenced by the general lack of workplace diversity[5] in the AI industry.

The AI generated images with exclusively light-skinned people for all the job titles used in the prompts, including news commentator (left) and reporter (right). Midjourney

4 and 5. Classism and conservatism

All the figures in the images were also “conservative” in their appearance. For instance, none had tattoos, piercings, unconventional hairstyles, or any other attribute that could distinguish them from conservative mainstream depictions.

Many also wore formal clothing such as buttoned shirts and neckties, which are markers of class expectation. Although this attire might be expected for certain roles, such as TV presenters, it’s not necessarily a true reflection of how general reporters or journalists dress.

6. Urbanism

Without specifying any location or geographic context, the AI placed all the figures in urban environments with towering skyscrapers and other large city buildings. This is despite only slightly more than half[6] the world’s population living in cities.

This kind of bias has implications for how we see ourselves, and our degree of connection with other parts of society.

Without specifying a geographic context, and with a location-neutral job title, AI assumed an urban context for the images, including reporter (left) and correspondent (right). Midjourney

7. Anachronism

Digital technology was underrepresented in the sample. Instead, technologies from a distinctly different era – including typewriters, printing presses and oversized vintage cameras – dominated the images.

Since many professionals look similar these days, the AI seemed to be drawing on more distinct technologies (including historical ones) to make its representations of the roles more explicit.

AI used anachronistic technology, including vintage cameras, typewriters and printing presses, when depicting certain occupations such as the press (left) and journalist (right). Images by the authors via Midjourney

The next time you see AI-generated imagery, ask yourself how representative it is of the broader population and who stands to benefit from the representations within.

Likewise, if you’re generating images yourself, consider potential biases when crafting your prompts. Otherwise you might unintentionally reinforce the same harmful stereotypes society has spent decades trying to unlearn.

References

  1. ^ naturalistic (twitter.com)
  2. ^ surreal (twitter.com)
  3. ^ latest research (urldefense.com)
  4. ^ AI to Z: all the terms you need to know to keep up in the AI hype age (theconversation.com)
  5. ^ workplace diversity (www.vox.com)
  6. ^ more than half (www.worldbank.org)

Read more https://theconversation.com/ageism-sexism-classism-and-more-7-examples-of-bias-in-ai-generated-images-208748
