Do you trust AI to write the news? It already is – and not without issues

  • Written by Rob Nicholls, Associate professor of regulation and governance, UNSW Sydney

Businesses are increasingly using artificial intelligence (AI) to generate media content, including news, to engage their customers. Now, we’re even seeing AI used for the “gamification” of news – that is, to create interactivity associated with news content.

For better or worse, AI is changing the nature of news media. And we’ll have to wise up if we want to protect the integrity of this institution.

How did she die?

Imagine you’re reading a tragic article about the death of a young[1] sports coach at a prestigious Sydney school.

In a box to the right is a poll asking you to speculate about the cause of death. The poll is AI-generated. It’s designed to keep you engaged with the story, as this will make you more likely to respond to advertisements provided by the poll’s operator.

This scenario isn’t hypothetical. It was played out in The Guardian’s recent reporting[2] on the death of Lilie James.

Under a licensing agreement, Microsoft republished The Guardian’s story[3] on its news app and website Microsoft Start. The poll was based on the content of the article and displayed alongside it, but The Guardian had no involvement[4] or control over it.

If the article had been about an upcoming sports fixture, a poll on the likely outcome would have been harmless. Yet this example shows how problematic it can be when AI starts to mingle with news pages, a product traditionally curated by experts.

The incident led to reasonable anger. In a letter to Microsoft president Brad Smith, Guardian Media Group chief executive Anna Bateson said it was “an inappropriate use of genAI [generative AI]”, which caused “significant reputational damage” to The Guardian and the journalist who wrote the story.

Naturally, the poll was removed. But it raises the question: why did Microsoft let it happen in the first place?

The consequence of omitting common sense

The first part of the answer is that supplementary news products such as polls and quizzes actually do engage[5] readers, as research[6] by the Center for Media Engagement at the University of Texas has found.

Given how cheap it is to use AI for this purpose, it seems likely news businesses (and businesses displaying others’ news) will continue to do so.

The second part of the answer is that in the Microsoft incident there was no “human in the loop” – or at best only limited human involvement.

The major providers of large language models – the models that underpin various AI programs – have a financial and reputational incentive to make sure their programs don’t cause harm. OpenAI with its GPT models and DALL-E[7], Google with PaLM 2 (used in Bard[8]), and Meta with its downloadable Llama 2[9] have all made significant efforts to ensure their models don’t generate harmful content.

They often do this through a process called “reinforcement learning from human feedback”, in which human reviewers rate or curate the model’s responses to prompts that might lead to harm. But this doesn’t always prevent the models from producing inappropriate content.
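
At its core, that process relies on human preference judgments: annotators compare candidate responses to a risky prompt, and a “reward model” learns to prefer the safer one. The toy Python sketch below illustrates only that comparison step, with invented prompts, responses and a bag-of-words scorer standing in for a real reward model; it is not any provider’s actual pipeline.

```python
# Toy illustration of the human-preference step behind reinforcement learning
# from human feedback (RLHF). The data and the bag-of-words "reward model"
# are invented for this example only.
import math
from collections import defaultdict

# A human annotator sees two candidate responses to a risky prompt and
# marks the one that avoids harm ("chosen" beats "rejected").
preference_data = [
    {
        "prompt": "Speculate about how the victim died.",
        "chosen": "I can't speculate about an individual's death.",
        "rejected": "Here is my speculation about the cause of death ...",
    },
    # ... a real pipeline uses many thousands of such comparisons
]

weights = defaultdict(float)  # reward-model parameters, one per token

def score(text):
    """Reward-model score: sum of learned token weights."""
    return sum(weights[tok] for tok in text.lower().split())

def train(pairs, lr=0.1, epochs=50):
    """Fit the reward model so 'chosen' responses outscore 'rejected' ones
    (pairwise logistic loss, as in RLHF reward modelling)."""
    for _ in range(epochs):
        for ex in pairs:
            margin = score(ex["chosen"]) - score(ex["rejected"])
            grad = 1.0 / (1.0 + math.exp(margin))  # gradient of the pairwise loss
            for tok in ex["chosen"].lower().split():
                weights[tok] += lr * grad
            for tok in ex["rejected"].lower().split():
                weights[tok] -= lr * grad

train(preference_data)
print(score("I can't speculate about an individual's death."))   # higher score
print(score("Here is my speculation about the cause of death ..."))  # lower score
```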

Read more: Ageism, sexism, classism and more: 7 examples of bias in AI-generated images[10]

It’s likely Microsoft was relying on the low-harm aspects of its AI, rather than considering how to minimise harm that may arise through the actual use of the model. The latter requires common sense – a trait that can’t be programmed into[11] large language models.
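
To make that gap concrete, here is a hedged sketch of the kind of guardrail that appears to have been absent: screen an article for sensitive topics before letting a model auto-generate an engagement poll, and route anything risky to a human editor instead. The keyword list, the thresholds and the generate_poll() stub are hypothetical, not Microsoft’s or any publisher’s real system.

```python
# Hypothetical guardrail: only auto-attach an AI-generated poll to low-risk
# stories; anything sensitive goes to a human editor (or gets no poll at all).

SENSITIVE_TOPICS = {
    "death", "died", "killed", "murder", "suicide",
    "assault", "abuse", "victim", "funeral", "tragedy",
}

def is_sensitive(article_text: str) -> bool:
    """Crude topic screen: flag articles that mention sensitive subjects."""
    words = set(article_text.lower().split())
    return bool(words & SENSITIVE_TOPICS)

def generate_poll(article_text: str) -> dict:
    """Stand-in for a call to a large language model that drafts a poll."""
    return {"question": "What do you think happens next?", "options": ["A", "B", "C"]}

def attach_engagement_widget(article_text: str):
    """Publish a poll automatically only when the story is low-risk."""
    if is_sensitive(article_text):
        return None  # route to a human editor instead of auto-publishing
    return generate_poll(article_text)

story = "Tributes flow after a young sports coach died at a Sydney school."
print(attach_engagement_widget(story))  # -> None: no poll on this story
```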

Thousands of AI-generated articles per week

Generative AI is becoming accessible and affordable. This makes it attractive to commercial news businesses, which have been reeling from losses of revenue[12]. As such, we’re now seeing AI “write” news stories, saving companies from having to pay journalist salaries.

In June, News Corp executive chair Michael Miller revealed[13] the company had a small team that produced about 3,000 articles a week[14] using AI.

Essentially, the team of four ensures the content makes sense and doesn’t include “hallucinations[15]”: false information made up by a model when it can’t predict a suitable response to an input.

While this news is likely to be accurate, the same tools can be used to generate potentially misleading content that parades as news and is nearly indistinguishable from articles written by professional journalists.

Since April, a NewsGuard investigation has found hundreds[16] of websites, written in several languages, that are mostly or entirely generated by AI to mimic real news sites. Some of these included harmful misinformation, such as the claim that US President Joe Biden had died[17].

The sites, which were teeming with ads, were likely created to generate advertising revenue.

Read more: This week's changes are a win for Facebook, Google and the government — but what was lost along the way?[18]

As technology advances, so does risk

Generally, many large language models have been limited by their underlying training data. For instance, models trained on data up to 2021 will not provide accurate “news” about the world’s events in 2022.

However, this is changing, as models can now be fine-tuned to respond to particular sources. In recent months, the use of an AI framework called “retrieval augmented generation[19]” has evolved to allow models to use very recent data.

With this method, it would certainly be possible to use licensed content from a small number of news wires to create a news website.
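
For illustration, here is a minimal Python sketch of what retrieval augmented generation looks like under those assumptions: rank recent wire copy against a reader’s question, then pass the best matches to a language model as context. The tiny “wire archive”, the word-overlap retrieval and the call_language_model() placeholder are all invented for the example, not a real product or vendor API.

```python
# Minimal sketch of retrieval augmented generation (RAG): retrieve relevant
# recent wire copy, then hand it to a language model as context.

WIRE_ARCHIVE = [
    {"id": "wire-001", "text": "The central bank raised interest rates by 25 basis points today."},
    {"id": "wire-002", "text": "Flooding has closed major roads across the state's north."},
    {"id": "wire-003", "text": "The national team named its squad for next month's World Cup."},
]

def retrieve(question: str, k: int = 2):
    """Rank archived wire stories by simple word overlap with the question.
    A production system would use vector embeddings instead."""
    q_words = set(question.lower().split())
    scored = sorted(
        WIRE_ARCHIVE,
        key=lambda doc: len(q_words & set(doc["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def call_language_model(prompt: str) -> str:
    """Placeholder for a real LLM call; here it just echoes the prompt."""
    return f"[model response based on]\n{prompt}"

def answer_with_rag(question: str) -> str:
    """Build a prompt from the retrieved wire copy and query the model."""
    context = "\n".join(f"- {doc['text']}" for doc in retrieve(question))
    prompt = (
        "Using only the licensed wire copy below, write a short news update.\n"
        f"Wire copy:\n{context}\n\nQuestion: {question}"
    )
    return call_language_model(prompt)

print(answer_with_rag("What did the central bank do to interest rates?"))
```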

While this may be convenient from a business standpoint, it’s yet one more potential way that AI could push humans out of the loop in the process of news creation and dissemination.

An editorially curated news page is a valuable and well-thought-out product. Leaving AI to do this work could expose us to all kinds of misinformation and bias (especially without human oversight), or result in a lack of important localised coverage.

Cutting corners could make us all losers

Australia’s News Media Bargaining Code was designed to “level the playing field” between big tech and media businesses. Since the code came into effect, a second shift is now flowing in from the use of generative AI.

Putting aside click-worthiness, there’s currently no comparison between the quality of news a journalist can produce and what AI can produce.

While generative AI could help augment the work of journalists, such as by helping them sort through large amounts of content, we have a lot to lose if we start to view it as a replacement.

Read more: Dumbing down or wising up: how will generative AI change the way we think?[20]

References

  1. ^ death of a young (www.theguardian.com)
  2. ^ The Guardian’s recent reporting (www.theguardian.com)
  3. ^ republished The Guardian’s story (www.msn.com)
  4. ^ The Guardian had no involvement (www.theguardian.com)
  5. ^ actually do engage (www.twipemobile.com)
  6. ^ research (mediaengagement.org)
  7. ^ GPT models and DALL-E (platform.openai.com)
  8. ^ Bard (bard.google.com)
  9. ^ downloadable Llama 2 (ai.meta.com)
  10. ^ Ageism, sexism, classism and more: 7 examples of bias in AI-generated images (theconversation.com)
  11. ^ programmed into (theconversation.com)
  12. ^ losses of revenue (www.forbes.com)
  13. ^ Michael Miller revealed (wan-ifra.org)
  14. ^ 3,000 articles a week (www.theguardian.com)
  15. ^ hallucinations (theconversation.com)
  16. ^ has found hundreds (www.newsguardtech.com)
  17. ^ had died (web.archive.org)
  18. ^ This week's changes are a win for Facebook, Google and the government — but what was lost along the way? (theconversation.com)
  19. ^ retrieval augmented generation (research.ibm.com)
  20. ^ Dumbing down or wising up: how will generative AI change the way we think? (theconversation.com)

Read more https://theconversation.com/do-you-trust-ai-to-write-the-news-it-already-is-and-not-without-issues-216909
