The Times Australia
The Times World News

Can you spot the AI impostors? We found AI faces can look more real than actual humans

  • Written by Amy Dawel, Clinical psychologist and Lecturer, Research School of Psychology, Australian National University
Does ChatGPT ever give you the eerie sense you’re interacting with another human being?

Artificial intelligence (AI) has reached an astounding level of realism, to the point that some tools can even fool people[1] into thinking they are interacting with another human[2].

The eeriness doesn’t stop there. In a study published today[3] in Psychological Science, we’ve discovered that images of white faces generated by the popular StyleGAN2 algorithm[4] look more “human” than actual people’s faces.

AI creates hyperrealistic faces

For our research, we showed 124 participants pictures of many different white faces and asked them to decide whether each face was real or generated by AI.

Half the pictures were of real faces, while half were AI-generated. If the participants had guessed randomly, we would expect them to be correct about half the time – akin to flipping a coin and getting tails half the time.

Instead, participants were systematically wrong, and were more likely to say AI-generated faces were real. On average, people labelled about 2 out of 3 of the AI-generated faces as human.

These results suggest AI-generated faces look more real than actual faces; we call this effect “hyperrealism”. They also suggest people, on average, aren’t very good at detecting AI-generated faces. You can compare for yourself the portraits of real people at the top of the page with the ones embedded below.
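For readers who want to see what the chance baseline looks like, here is a minimal simulation sketch of the guessing task. The face count per participant is illustrative, not the study’s actual stimulus set; only the 124-participant figure and the half-real, half-AI design come from the article.

```python
import random

random.seed(0)

def simulate_guessing(n_faces=100, n_participants=124):
    """Simulate participants guessing 'real' vs 'AI' completely at random.

    Returns the mean accuracy across participants. With random guessing,
    this converges on 0.5 - the 'coin flip' baseline described above.
    """
    accuracies = []
    for _ in range(n_participants):
        # Half the faces are real, half AI-generated, as in the study.
        truths = ["real"] * (n_faces // 2) + ["ai"] * (n_faces // 2)
        guesses = [random.choice(["real", "ai"]) for _ in truths]
        correct = sum(g == t for g, t in zip(guesses, truths))
        accuracies.append(correct / n_faces)
    return sum(accuracies) / len(accuracies)

print(f"Mean accuracy under random guessing: {simulate_guessing():.2f}")
```

A random guesser lands near 50% correct, which is why labelling about two in three AI faces as human is evidence of a systematic bias rather than noise.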

But perhaps people are aware of their own limitations, and therefore aren’t likely to fall prey to AI-generated faces online?

To find out, we asked participants how confident they felt about their decisions. Paradoxically, the people who were the worst at identifying AI impostors were the most confident in their guesses.

In other words, the people who were most susceptible to being tricked by AI weren’t even aware they were being deceived.

Read more: Scams, deepfake porn and romance bots: advanced AI is exciting, but incredibly dangerous in criminals' hands[5]

Biased training data deliver biased outputs

The fourth industrial revolution[6] – which includes technologies such as AI, robotics and advanced computing – has profoundly changed the kinds of “faces” we see online.

AI-generated faces are readily available, and their use comes with both risks and benefits. Although they have been used to help find missing children[7], they have also been used in identity fraud[8], catfishing[9] and cyber warfare[10].

People’s misplaced confidence in their ability to detect AI faces could make them more susceptible to deceptive practices. They may, for instance, readily hand over sensitive information to cybercriminals masquerading behind hyperrealistic AI identities.

Another worrying aspect of AI hyperrealism is that it’s racially biased. Using data from another study[11] which also tested Asian and Black faces, we found only white AI-generated faces looked hyperreal.

When asked to decide whether faces of colour were human or AI-generated, participants guessed correctly about half the time – akin to guessing randomly.

This means white AI-generated faces look more real not only than AI-generated faces of colour, but also than white human faces.

Implications of bias and hyperrealistic AI

This racial bias likely stems from the fact that AI algorithms, including the one we tested, are often trained on images of mostly white faces.

Racial bias in algorithmic training can have serious implications. One recent study found self-driving cars are less likely to detect Black people[12], placing them at greater risk than white people. Both the companies producing AI and the governments overseeing them have a responsibility to ensure diverse representation and mitigate bias in AI.

The realism of AI-generated content also raises questions about our ability to accurately detect it and protect ourselves.

In our research, we identified several features that make white AI faces look hyperreal. For instance, they often have proportionate and familiar features, and they lack distinctive characteristics that make them stand out as “odd” from other faces. Participants misinterpreted these features as signs of “humanness”, leading to the hyperrealism effect.

At the same time, AI technology is advancing so rapidly it will be interesting to see how long these findings apply. There’s also no guarantee AI faces generated by other algorithms will differ from human faces in the same ways as those we tested.

Since our study was published, we have also tested the ability of AI detection technology to identify our AI faces. Although this technology claims to identify the particular type of AI faces we used with high accuracy, it performed as poorly as our human participants.

Similarly, software for detecting AI writing has also had high rates of falsely accusing people of cheating[13] – especially people whose native language is not English.

Managing the risks of AI

So how can people protect themselves from misidentifying AI-generated content as real?

One way is to simply be aware of how poorly people perform when tasked with separating AI-generated faces from real ones. If we are more wary of our own limitations on this front, we may be less easily influenced by what we see online – and can take additional steps to verify information when it matters.

Public policy also plays an important role. One option is to require that the use of AI be declared. However, when AI is used for deceptive purposes, such a requirement is almost impossible to police, and it may inadvertently lull people into a false sense of security.

Another approach is to focus on authenticating trusted sources. Similar to a “Made in Australia” label or the European CE mark, a trusted-source badge – one that can be verified and must be earned through rigorous checks – could help users select reliable media.

Read more: AI image generation is advancing at astronomical speeds. Can we still tell if a picture is fake?[14]

Read more https://theconversation.com/can-you-spot-the-ai-impostors-we-found-ai-faces-can-look-more-real-than-actual-humans-215160
