The Times Australia

Using your AI chatbot as a search engine? Be careful what you believe

  • Written by Kevin Veale, Senior Lecturer in Media Studies, School of Humanities, Media and Creative Communication, Te Kunenga ki Pūrehuroa – Massey University

During the first world war, the British government was looking for ways to help people stretch their limited food supplies. It found pamphlets from a noted 19th-century herbalist who said rhubarb leaves could be used as a vegetable along with the stalks.

The government duly printed its own pamphlets advising people to eat rhubarb leaves as a salad rather than throwing them out. There was one problem: rhubarb leaves can be poisonous[1]. People reportedly died or became ill.

The advice was corrected and the pamphlets pulled from circulation. But during the second world war, the government was again looking for ways to stretch food supplies.

It found a stockpile of old resources from the previous war that explained unorthodox sources of food, including rhubarb leaves. Reusing the pamphlets seemed an efficient thing to do, so they were sent out to the public. Once again, people reportedly died or became ill.

Those pamphlets were misinformation, but the public had no reason to suspect them either time. They were official resources developed by the government – why wouldn’t they be safe?

That is how misinformation can cause problems even after the initial error is corrected. And the moral of the story still reverberates in the age of generative artificial intelligence (AI).

Chatbots are not search engines

Generative AI produces text, images and other forms of data based on the material it has ingested. But it can also be an engine for churning out misinformation faster than people can produce safe information, let alone fact-check and correct it[2].

And as the rhubarb story illustrates, corrections can’t always properly remove the original contamination.

AI platforms such as ChatGPT and Claude don’t work like a conventional search engine. But people use them as one because they seem to summarise complex topics quickly[3] and require fewer clicks than conventional internet searches.

Search engines rely on articles and text about a given topic, and then weigh how reliable those articles are[4]. Generative AI instead relies on huge bodies of text, from which it measures the odds of words appearing next to each other.

These “large language models[5]” are purely looking to generate reasonable-looking sentences, rather than accurate ones.

For example, if “green eggs and ham” appeared frequently enough in its huge pile of words, it is more likely to describe “eggs and ham” as green if someone asks.
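The intuition behind that word-frequency behaviour can be sketched with a toy bigram model. This is a drastic simplification of a real large language model (the corpus and function names below are invented for illustration), but it shows the core issue: the model only counts which word most often follows another, with no notion of whether the result is true.

```python
from collections import Counter, defaultdict

# A toy corpus in which "green" very often precedes "eggs".
corpus = (
    "green eggs and ham . i like green eggs . "
    "green eggs taste great . ham and eggs ."
).split()

# Count bigrams: how often each word follows the previous one.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

# Because "green eggs" dominates the corpus, the model predicts
# "eggs" after "green" -- regardless of what colour eggs actually are.
print(most_likely_next("green"))  # -> eggs
```

Nothing in this sketch checks facts: the prediction is driven entirely by how often word pairs appeared in the training text, which is why frequent-but-false associations can surface as confident answers.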

‘Plausible yet incorrect’

OpenAI, which developed ChatGPT, has admitted (based on its own study) there’s no way to stop false information[6] being presented as truth due to the way generative AI works. Explaining why large language models “hallucinate”, the researchers wrote[7]:

Like students facing hard exam questions, large language models sometimes guess when uncertain, producing plausible yet incorrect statements instead of admitting uncertainty.

This can have real-world consequences. One recent study showed ChatGPT failed to recognise a medical emergency[8] in more than half of cases. This can be exacerbated by already existing errors in medical records, which a UK inquiry in 2025 found[9] affected up to one in four patients.

While a doctor might order more tests to confirm a diagnosis, one researcher explained[10] that generative AI “delivers the wrong answer with the exact same confidence as the right one”.

The problem, as another scientist noted[11], is that generative AI “finds and mimics patterns of words”. Being right or wrong is not really the point: “It was supposed to make a sentence and it did.”

Research has shown generative AI tools misrepresent the news[12] 45% of the time, no matter the language or geographic region. And there is now genuine concern about AI risking lives by generating non-existent hiking routes[13].

It’s easy to make fun of generative AI when it advises people[14] to eat rocks or hold toppings on a pizza base with glue.

But other examples aren’t so amusing – such as the supermarket meal planner that suggested a recipe that would produce chlorine gas[15], or the dietary advice that left someone with chronic toxic exposure to bromide[16].

Look for older information

Education and establishing good rules around the appropriate and cautious use of generative AI will be essential, especially as it makes inroads into governments, bureaucracies and complex organisations.

Politicians are already using generative AI[17] in their everyday work, including for policy research. And hospital emergency departments are using AI tools to record patient notes to save time[18].

One safeguard is to try to source more reliable information produced before AI-contaminated text and imagery infiltrated the internet.

There are even tools available to help simplify that process, including one created by Australian artist Tega Brain[19] “that will only return content created before ChatGPT’s first public release on November 30 2022”.

Finally, if your instinct is to fact-check the story at the start of this article, good old-fashioned books might be your best bet: references to how the British government twice encouraged rhubarb poisoning can be found in The Poison Garden’s A-Z of Poisonous Plants[20] and Botanical Curses and Poisons: The Shadow Lives of Plants[21].

References

  1. ^ rhubarb leaves can be poisonous (www.healthline.com)
  2. ^ fact-check and correct it (journals.sagepub.com)
  3. ^ summarise complex topics quickly (www.moronichannel.org)
  4. ^ weigh how reliable those articles are (pi.math.cornell.edu)
  5. ^ large language models (www.ibm.com)
  6. ^ no way to stop false information (www.computerworld.com)
  7. ^ researchers wrote (arxiv.org)
  8. ^ failed to recognise a medical emergency (www.theguardian.com)
  9. ^ UK inquiry in 2025 found (www.theguardian.com)
  10. ^ one researcher explained (www.livescience.com)
  11. ^ another scientist noted (bsky.app)
  12. ^ misrepresent the news (www.bbc.co.uk)
  13. ^ generating non-existent hiking routes (www.insidehook.com)
  14. ^ advises people (www.bbc.com)
  15. ^ produce chlorine gas (www.theguardian.com)
  16. ^ chronic toxic exposure to bromide (www.livescience.com)
  17. ^ already using generative AI (newsroom.co.nz)
  18. ^ record patient notes to save time (www.rnz.co.nz)
  19. ^ created by Australian artist Tega Brain (tegabrain.com)
  20. ^ The Poison Garden’s A-Z of Poisonous Plants (alnwickgardenshop.com)
  21. ^ Botanical Curses and Poisons: The Shadow Lives of Plants (www.hachettebookgroup.com)

Read more https://theconversation.com/using-your-ai-chatbot-as-a-search-engine-be-careful-what-you-believe-277616
