Both humans and AI hallucinate — but not in the same way

  • Written by Sarah Vivienne Bentley, Research Scientist, Responsible Innovation, Data61, CSIRO

The launch of ever more capable large language models (LLMs) such as GPT-3.5[1] has sparked much interest over the past six months. However, trust in these models has waned as users have discovered they can make mistakes[2] – and that, just like us, they aren’t perfect.

An LLM that outputs incorrect information is said to be “hallucinating”, and there is now a growing research effort towards minimising this effect. But as we grapple with this task, it’s worth reflecting on our own capacity for bias and hallucination – and how this impacts the accuracy of the LLMs we create.

By understanding the link between AI’s hallucinatory potential and our own, we can begin to create smarter AI systems that will ultimately help reduce human error.

How people hallucinate

It’s no secret people make up information. Sometimes we do this intentionally, and sometimes unintentionally. The latter is a result of cognitive biases, or “heuristics”: mental shortcuts we develop through past experiences.

These shortcuts are often born out of necessity. At any given moment, we can only process a limited amount of the information flooding our senses, and only remember a fraction of all the information we’ve ever been exposed to.

As such, our brains must use learnt associations to fill in the gaps and quickly respond to whatever question or quandary sits before us. In other words, our brains guess what the correct answer might be based on limited knowledge. This is called a “confabulation” and is an example of a human bias.

Our biases can result in poor judgement. Take the automation bias[3], which is our tendency to favour information generated by automated systems (such as ChatGPT) over information from non-automated sources. This bias can lead us to miss errors and even act upon false information.

Another relevant heuristic is the halo effect[4], in which our initial impression of something affects our subsequent interactions with it. There is also the fluency bias[5], which describes how we favour information presented in an easy-to-read manner.

The bottom line is that human thinking is often coloured by its own cognitive biases and distortions, and these “hallucinatory” tendencies largely occur outside of our awareness.

How AI hallucinates

In an LLM context, hallucinating is different. An LLM isn’t trying to conserve limited mental resources to efficiently make sense of the world. “Hallucinating” in this context just describes a failed attempt to predict a suitable response to an input.

Nevertheless, there is still some similarity between how humans and LLMs hallucinate, since LLMs also do this to “fill in the gaps”.

LLMs generate a response by predicting which word is most likely to appear next in a sequence, based on what has come before, and on associations the system has learned through training.

Like humans, LLMs try to predict the most likely response. Unlike humans, they do this without understanding what they’re saying. This is how they can end up outputting nonsense.
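
To make that mechanism concrete, here is a minimal, purely illustrative sketch of next-word prediction. It uses an invented toy bigram table in place of the associations a real LLM learns from vast amounts of text, so the words and counts are assumptions for illustration only.

```python
# Illustrative only: a toy bigram table stands in for the associations
# an LLM learns during training. Real models use neural networks over
# huge vocabularies, not a hand-written dictionary like this.
import random

# Hypothetical counts of which word followed which in "training" text.
bigram_counts = {
    "the": {"cat": 4, "dog": 3, "moon": 1},
    "cat": {"sat": 5, "slept": 2},
    "sat": {"on": 6},
    "on": {"the": 7},
}

def predict_next(word):
    """Pick the next word in proportion to how often it followed `word`."""
    options = bigram_counts.get(word)
    if options is None:
        # No learned association: the model still outputs *something*,
        # which is how plausible-sounding nonsense can appear.
        return random.choice([w for opts in bigram_counts.values() for w in opts])
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights, k=1)[0]

# Generate a short sequence, one predicted word at a time.
sequence = ["the"]
for _ in range(5):
    sequence.append(predict_next(sequence[-1]))
print(" ".join(sequence))
```

The sketch only shows the core loop: predict a plausible continuation from learned associations, with no notion of whether the result is true.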

As to why LLMs hallucinate, there is a range of factors. A major one is being trained on data that are flawed or insufficient. Other factors include how the system is programmed to learn from these data, and how this programming is reinforced through further training guided by humans.

Read more: AI pioneer Geoffrey Hinton says AI is a new form of intelligence unlike our own. Have we been getting it wrong this whole time?[6]

Doing better together

So, if both humans and LLMs are susceptible to hallucinating (albeit for different reasons), which is easier to fix?

Fixing the training data and processes underpinning LLMs might seem easier than fixing ourselves. But this fails to consider the human factors that influence AI systems (and is an example of yet another human bias known as the fundamental attribution error[7]).

The reality is our failings and the failings of our technologies are inextricably intertwined, so fixing one will help fix the other. Here are some ways we can do this.

  • Responsible data management. Biases in AI often stem from biased or limited training data. Ways to address this include ensuring training data are diverse and representative, building bias-aware algorithms, and deploying techniques such as data balancing to remove skewed or discriminatory patterns (a simple illustration follows this list).

  • Transparency and explainable AI. Despite the above actions, however, biases in AI can remain and can be difficult to detect. By studying how biases can enter a system and propagate within it, we can better explain the presence of bias in outputs. This is the basis of “explainable AI”, which is aimed at making AI systems’ decision-making processes more transparent.

  • Putting the public’s interests front and centre. Recognising, managing and learning from biases in an AI requires human accountability and having human values integrated into AI systems. Achieving this means ensuring stakeholders are representative of people from diverse backgrounds, cultures and perspectives.
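
As one concrete, deliberately simplified illustration of the data balancing mentioned in the first point above, the sketch below randomly oversamples under-represented labels in an invented toy dataset; production pipelines would use dedicated tooling and more careful resampling.

```python
# Illustrative only: random oversampling of a skewed toy dataset so that
# every label is equally represented before training.
import random
from collections import Counter

# Hypothetical labelled examples with an imbalanced label mix (four 1s, one 0).
data = [("a", 1), ("b", 1), ("c", 1), ("d", 1), ("e", 0)]

counts = Counter(label for _, label in data)
target = max(counts.values())

balanced = list(data)
for label, count in counts.items():
    examples = [ex for ex in data if ex[1] == label]
    # Duplicate under-represented examples until each label reaches the target count.
    balanced.extend(random.choices(examples, k=target - count))

print(Counter(label for _, label in balanced))  # each label now appears `target` times
```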

By working together in this way, it’s possible for us to build smarter AI systems that can help keep all our hallucinations in check.

For instance, AI is being used within healthcare to analyse human decisions. These machine learning systems detect inconsistencies in human data and provide prompts that bring them to the clinician’s attention. As such, diagnostic decisions can be improved while maintaining human accountability[8].

In a social media context, AI is being used to help train human moderators to identify abuse, such as through the Troll Patrol[9] project aimed at tackling online violence against women.

In another example, combining AI and satellite imagery[10] can help researchers analyse differences in nighttime lighting across regions and use this as a proxy for the relative poverty of an area (where more lighting correlates with less poverty).

Importantly, while we do the essential work of improving the accuracy of LLMs, we shouldn’t ignore how their current fallibility holds up a mirror to our own.

References

  1. ^ such as GPT-3.5 (help.openai.com)
  2. ^ make mistakes (spectrum.ieee.org)
  3. ^ automation bias (dataethics.eu)
  4. ^ halo effect (www.verywellmind.com)
  5. ^ fluency bias (www.researchgate.net)
  6. ^ AI pioneer Geoffrey Hinton says AI is a new form of intelligence unlike our own. Have we been getting it wrong this whole time? (theconversation.com)
  7. ^ fundamental attribution error (online.hbs.edu)
  8. ^ maintaining human accountability (link.springer.com)
  9. ^ Troll Patrol (decoders.amnesty.org)
  10. ^ satellite imagery (www.science.org)

Read more https://theconversation.com/both-humans-and-ai-hallucinate-but-not-in-the-same-way-205754
