The Times Australia
The Times World News

Both humans and AI hallucinate — but not in the same way

  • Written by Sarah Vivienne Bentley, Research Scientist, Responsible Innovation, Data61, CSIRO

The launch of increasingly capable large language models (LLMs) such as GPT-3.5[1] has sparked much interest over the past six months. However, trust in these models has waned as users have discovered they can make mistakes[2] – and that, just like us, they aren’t perfect.

An LLM that outputs incorrect information is said to be “hallucinating”, and there is now a growing research effort towards minimising this effect. But as we grapple with this task, it’s worth reflecting on our own capacity for bias and hallucination – and how this impacts the accuracy of the LLMs we create.

By understanding the link between AI’s hallucinatory potential and our own, we can begin to create smarter AI systems that will ultimately help reduce human error.

How people hallucinate

It’s no secret people make up information. Sometimes we do this intentionally, and sometimes unintentionally. The latter is a result of cognitive biases, or “heuristics”: mental shortcuts we develop through past experiences.

These shortcuts are often born out of necessity. At any given moment, we can only process a limited amount of the information flooding our senses, and only remember a fraction of all the information we’ve ever been exposed to.

As such, our brains must use learnt associations to fill in the gaps and quickly respond to whatever question or quandary sits before us. In other words, our brains guess what the correct answer might be based on limited knowledge. This is called a “confabulation” and is an example of a human bias.

Our biases can result in poor judgement. Take the automation bias[3], which is our tendency to favour information generated by automated systems (such as ChatGPT) over information from non-automated sources. This bias can lead us to miss errors and even act upon false information.

Another relevant heuristic is the halo effect[4], in which our initial impression of something affects our subsequent interactions with it. There is also the fluency bias[5], our tendency to favour information presented in an easy-to-read manner.

The bottom line is human thinking is often coloured by its own cognitive biases and distortions, and these “hallucinatory” tendencies largely occur outside of our awareness.

How AI hallucinates

In an LLM context, hallucinating is different. An LLM isn’t trying to conserve limited mental resources to efficiently make sense of the world. “Hallucinating” in this context just describes a failed attempt to predict a suitable response to an input.

Nevertheless, there is still some similarity between how humans and LLMs hallucinate, since LLMs also do this to “fill in the gaps”.

LLMs generate a response by predicting which word is most likely to appear next in a sequence, based on what has come before, and on associations the system has learned through training.

Like humans, LLMs try to predict the most likely response. Unlike humans, they do this without understanding what they’re saying. This is how they can end up outputting nonsense.
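This next-word prediction can be illustrated with a toy bigram model. It is vastly simpler than a real transformer-based LLM, but it rests on the same principle of filling gaps using associations learned from training text (the corpus and function names here are invented for illustration):

```python
import random
from collections import Counter, defaultdict

# Train a bigram "language model": for each word, count which words follow it.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8, seed=0):
    """Emit words one at a time, sampling each from the learned distribution."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        counts = follows.get(out[-1])
        if not counts:
            break
        # Sample in proportion to observed counts, a crude stand-in for an
        # LLM's probability distribution over next tokens.
        words, weights = zip(*counts.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
```

Notice that the model happily produces fluent-looking sequences it has never seen, with no notion of whether they are true. Scaled up enormously, that is the mechanism behind an LLM's confident nonsense.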

As to why LLMs hallucinate, there are a range of factors. A major one is being trained on data that are flawed or insufficient. Other factors include how the system is programmed to learn from these data, and how this learning is reinforced through further training with human feedback.

Read more: AI pioneer Geoffrey Hinton says AI is a new form of intelligence unlike our own. Have we been getting it wrong this whole time?[6]

Doing better together

So, if both humans and LLMs are susceptible to hallucinating (albeit for different reasons), which is easier to fix?

Fixing the training data and processes underpinning LLMs might seem easier than fixing ourselves. But this fails to consider the human factors that influence AI systems (and is an example of yet another human bias, known as the fundamental attribution error[7]).

The reality is our failings and the failings of our technologies are inextricably intertwined, so fixing one will help fix the other. Here are some ways we can do this.

  • Responsible data management. Biases in AI often stem from biased or limited training data. Ways to address this include ensuring training data are diverse and representative, building bias-aware algorithms, and deploying techniques such as data balancing to remove skewed or discriminatory patterns.

  • Transparency and explainable AI. Despite the above actions, however, biases in AI can remain and can be difficult to detect. By studying how biases can enter a system and propagate within it, we can better explain the presence of bias in outputs. This is the basis of “explainable AI”, which is aimed at making AI systems’ decision-making processes more transparent.

  • Putting the public’s interests front and centre. Recognising, managing and learning from biases in an AI requires human accountability and having human values integrated into AI systems. Achieving this means ensuring stakeholders are representative of people from diverse backgrounds, cultures and perspectives.
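The data-balancing technique mentioned above can be sketched with simple random oversampling. This is a toy example under invented data; real pipelines typically use more careful methods such as stratified sampling or reweighting:

```python
import random
from collections import Counter

# Toy labelled dataset: 'approve' examples heavily outnumber 'deny' examples,
# a skew that a model trained on this data would learn and reproduce.
data = ([(f"sample{i}", "approve") for i in range(90)]
        + [(f"sample{i}", "deny") for i in range(90, 100)])

def oversample(rows, seed=0):
    """Duplicate minority-class rows at random until all classes are equal in size."""
    rng = random.Random(seed)
    by_label = {}
    for row in rows:
        by_label.setdefault(row[1], []).append(row)
    target = max(len(v) for v in by_label.values())
    balanced = []
    for class_rows in by_label.values():
        balanced.extend(class_rows)
        # Top up smaller classes with randomly repeated examples.
        balanced.extend(rng.choices(class_rows, k=target - len(class_rows)))
    return balanced

balanced = oversample(data)
print(Counter(label for _, label in balanced))  # both classes now have 90 rows
```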

By working together in this way, it’s possible for us to build smarter AI systems that can help keep all our hallucinations in check.

For instance, AI is being used within healthcare to analyse human decisions. These machine learning systems detect inconsistencies in human data and provide prompts that bring them to the clinician’s attention. As such, diagnostic decisions can be improved while maintaining human accountability[8].

In a social media context, AI is being used to help train human moderators to identify abuse, as in the Troll Patrol[9] project aimed at tackling online violence against women.

In another example, combining AI and satellite imagery[10] can help researchers analyse differences in nighttime lighting across regions, and use this as a proxy for the relative poverty of an area (wherein more lighting is correlated with less poverty).
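As a rough illustration of that proxy, the analysis reduces to scoring regions by their average nighttime brightness. The regions and radiance values below are invented, and real studies work from calibrated satellite imagery rather than hand-typed numbers:

```python
from statistics import mean

# Hypothetical per-pixel nighttime radiance readings for three regions.
regions = {
    "region_a": [52.0, 48.5, 60.2, 55.1],
    "region_b": [3.1, 2.8, 4.0, 3.5],
    "region_c": [20.4, 18.9, 22.7, 19.8],
}

# Mean radiance per region serves as a crude wealth proxy:
# brighter at night tends to correlate with less poverty.
brightness = {name: mean(pixels) for name, pixels in regions.items()}
ranked = sorted(brightness, key=brightness.get, reverse=True)
print(ranked)  # brightest (proxy for least poor) regions first
```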

Importantly, while we do the essential work of improving the accuracy of LLMs, we shouldn’t ignore how their current fallibility holds up a mirror to our own.

References

  1. ^ such as GPT-3.5 (help.openai.com)
  2. ^ make mistakes (spectrum.ieee.org)
  3. ^ automation bias (dataethics.eu)
  4. ^ halo effect (www.verywellmind.com)
  5. ^ fluency bias (www.researchgate.net)
  6. ^ AI pioneer Geoffrey Hinton says AI is a new form of intelligence unlike our own. Have we been getting it wrong this whole time? (theconversation.com)
  7. ^ fundamental attribution error (online.hbs.edu)
  8. ^ maintaining human accountability (link.springer.com)
  9. ^ Troll Patrol (decoders.amnesty.org)
  10. ^ satellite imagery (www.science.org)

Read more https://theconversation.com/both-humans-and-ai-hallucinate-but-not-in-the-same-way-205754
