
Using your AI chatbot as a search engine? Be careful what you believe

  • Written by Kevin Veale, Senior Lecturer in Media Studies, School of Humanities, Media and Creative Communication, Te Kunenga ki Pūrehuroa – Massey University

During the first world war, the British government was looking for ways to help people stretch their limited food supplies. It found pamphlets from a noted 19th-century herbalist who said rhubarb leaves could be used as a vegetable along with the stalks.

The government duly printed its own pamphlets advising people to eat rhubarb leaves as a salad rather than throwing them out. There was one problem: rhubarb leaves can be poisonous[1]. People reportedly died or became ill.

The advice was corrected and the pamphlets pulled from circulation. But during the second world war, the government was again looking for ways to stretch food supplies.

It found a stockpile of old resources from the previous war that explained unorthodox sources of food, including rhubarb leaves. Reusing the pamphlets seemed an efficient thing to do, so they were sent out to the public. Once again, people reportedly died or became ill.

Those pamphlets were misinformation, but the public had no reason to suspect them either time. They were official resources developed by the government – why wouldn’t they be safe?

That is how misinformation can cause problems even after the initial error is corrected. And the moral of the story still reverberates in the age of generative artificial intelligence (AI).

Chatbots are not search engines

Generative AI produces text, images and other forms of data based on the material it has ingested. But it can also be an engine for churning out misinformation faster than people can produce safe information, let alone fact-check and correct it[2].

And as the rhubarb story illustrates, corrections can’t always properly remove the original contamination.

AI platforms such as ChatGPT and Claude don’t work like a conventional search engine. But people use them as one because they seem to summarise complex topics quickly[3] and require fewer clicks than conventional internet searches.

Search engines rely on articles and text about a given topic, and then weigh how reliable those articles are[4]. Generative AI instead relies on huge bodies of text, from which it measures the odds of words appearing next to each other.

These “large language models[5]” aim only to generate plausible-looking sentences, not accurate ones.

For example, if “green eggs and ham” appears frequently enough in that huge pile of words, the model becomes more likely to describe eggs and ham as green when someone asks.
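
To make the word-adjacency idea concrete, here is a toy sketch in Python. It is not how ChatGPT or Claude actually work (they use vastly larger neural networks trained on far more text), but it illustrates the same principle: the next word is chosen because it is statistically likely, not because it is true. The tiny corpus and the three-word continuation are invented purely for this illustration.

```python
# A toy sketch (not any real chatbot's code) of the word-adjacency idea:
# count which word follows which in a small corpus, then always pick the
# most probable next word.

from collections import Counter, defaultdict

corpus = (
    "green eggs and ham . green eggs and ham . "
    "i do not like green eggs and ham ."
).split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def most_likely_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Ask the toy model to continue a prompt, one word at a time.
word = "green"
sentence = [word]
for _ in range(3):
    word = most_likely_next(word)
    if word is None:
        break
    sentence.append(word)

print(" ".join(sentence))  # -> "green eggs and ham"
```

Run on this corpus, the sketch happily completes “green” with “eggs and ham” because those words often appear together, regardless of whether real eggs and ham are green.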

‘Plausible yet incorrect’

OpenAI, which developed ChatGPT, has admitted (based on its own study) there’s no way to stop false information[6] being presented as truth due to the way generative AI works. Explaining why large language models “hallucinate”, the researchers wrote[7]:

Like students facing hard exam questions, large language models sometimes guess when uncertain, producing plausible yet incorrect statements instead of admitting uncertainty.

This can have real-world consequences. One recent study showed ChatGPT failed to recognise a medical emergency[8] in more than half of cases. This can be exacerbated by existing errors in medical records, which a UK inquiry in 2025 found[9] affected up to one in four patients.

While a doctor might order more tests to confirm a diagnosis, one researcher explained[10] that generative AI “delivers the wrong answer with the exact same confidence as the right one”.

The problem, as another scientist noted[11], is that generative AI “finds and mimics patterns of words”. Being right or wrong is not really the point: “It was supposed to make a sentence and it did.”

Research has shown generative AI tools misrepresent the news[12] 45% of the time, no matter the language or geographic region. And there is now genuine concern about AI risking lives by generating non-existent hiking routes[13].

It’s easy to make fun of generative AI when it advises people[14] to eat rocks or hold toppings on a pizza base with glue.

But other examples aren’t so amusing – such as the supermarket meal planner that suggested a recipe that would produce chlorine gas[15], or the dietary advice that left someone with chronic toxic exposure to bromide[16].

Look for older information

Education and establishing good rules around the appropriate and cautious use of generative AI will be essential, especially as it makes inroads into governments, bureaucracies and complex organisations.

Politicians are already using generative AI[17] in their everyday work, including for policy research. And hospital emergency departments are using AI tools to record patient notes to save time[18].

One safeguard is to try to source more reliable information produced before AI-contaminated text and imagery infiltrated the internet.

There are even tools available to help simplify that process, including one created by Australian artist Tega Brain[19] “that will only return content created before ChatGPT’s first public release on November 30 2022”.
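
The idea behind such a cutoff is simple enough to sketch: keep only material published before generative AI flooded the web. The snippet below is a minimal illustration, and the document list and field names in it are hypothetical, not taken from any real tool.

```python
# A minimal sketch of a date-cutoff filter: keep only documents published
# before ChatGPT's first public release. The documents and their fields
# are hypothetical, for illustration only.

from datetime import date

CUTOFF = date(2022, 11, 30)  # ChatGPT's first public release

documents = [
    {"title": "Rhubarb growing guide", "published": date(2019, 4, 2)},
    {"title": "AI-written foraging tips", "published": date(2024, 7, 15)},
]

pre_ai_results = [doc for doc in documents if doc["published"] < CUTOFF]

for doc in pre_ai_results:
    print(doc["title"])  # -> "Rhubarb growing guide"
```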

Finally, if your instinct is to fact-check the story at the start of this article, good old-fashioned books might be your best bet: references to how the British government twice encouraged rhubarb poisoning can be found in The Poison Garden’s A-Z of Poisonous Plants[20] and Botanical Curses and Poisons: The Shadow Lives of Plants[21].

References

  1. ^ rhubarb leaves can be poisonous (www.healthline.com)
  2. ^ fact-check and correct it (journals.sagepub.com)
  3. ^ summarise complex topics quickly (www.moronichannel.org)
  4. ^ weigh how reliable those articles are (pi.math.cornell.edu)
  5. ^ large language models (www.ibm.com)
  6. ^ no way to stop false information (www.computerworld.com)
  7. ^ researchers wrote (arxiv.org)
  8. ^ failed to recognise a medical emergency (www.theguardian.com)
  9. ^ UK inquiry in 2025 found (www.theguardian.com)
  10. ^ one researcher explained (www.livescience.com)
  11. ^ another scientist noted (bsky.app)
  12. ^ misrepresent the news (www.bbc.co.uk)
  13. ^ generating non-existent hiking routes (www.insidehook.com)
  14. ^ advises people (www.bbc.com)
  15. ^ produce chlorine gas (www.theguardian.com)
  16. ^ chronic toxic exposure to bromide (www.livescience.com)
  17. ^ already using generative AI (newsroom.co.nz)
  18. ^ record patient notes to save time (www.rnz.co.nz)
  19. ^ created by Australian artist Tega Brain (tegabrain.com)
  20. ^ The Poison Garden’s A-Z of Poisonous Plants (alnwickgardenshop.com)
  21. ^ Botanical Curses and Poisons: The Shadow Lives of Plants (www.hachettebookgroup.com)

Read more https://theconversation.com/using-your-ai-chatbot-as-a-search-engine-be-careful-what-you-believe-277616
