The Times Australia

Replacing news editors with AI is a worry for misinformation, bias and accountability

  • Written by Uri Gal, Professor in Business Information Systems, University of Sydney
Germany’s best-selling newspaper, Bild, is reportedly[1] adopting artificial intelligence (AI) to replace certain editorial roles, in an effort to cut costs.

In a leaked internal email[2] sent to staff on June 19, the paper’s publisher, Axel Springer, said it would “unfortunately part with colleagues who have tasks that will be replaced by AI and/or processes in the digital world. The functions of editorial directors, page editors, proofreaders, secretaries, and photo editors will no longer exist as they do today”.

The email follows a February memo in which Axel Springer’s chief executive wrote[3] that the paper would transition to a “purely digital media company”, and that “artificial intelligence has the potential to make independent journalism better than it ever was – or simply replace it”.

Bild has subsequently denied[4] editors will be directly replaced with AI, saying the staff cuts are due to restructuring, and AI will only “support” journalistic work rather than replace it.

Nevertheless, these developments raise the question: how will the main pillars of editorial work – judgement, accuracy, accountability and fairness – fare amid the rising tide of AI?

Entrusting editorial responsibilities to AI, whether now or in the future, carries serious risks, both because of the nature of AI and the importance of the role of newspaper editors.

The importance of editors

Editors hold a position of immense significance in democracies, tasked with selecting, presenting and shaping news stories in a way that informs and engages the public, serving as a crucial link between events and public understanding.

Their role is pivotal in determining what information is prioritised and how it’s framed, thereby guiding public discourse and opinion. Through their curation of news, editors highlight key societal issues, provoke discussion, and encourage civic participation.

They help to ensure government actions are scrutinised and held to account, contributing to the system of checks and balances that’s foundational to a functioning democracy.

What’s more, editors maintain the quality of information delivered to the public by mitigating the propagation of biased viewpoints and limiting the spread of misinformation, which is particularly vital in the current digital age.

AI is highly unreliable

Current AI systems, such as ChatGPT, are incapable of adequately fulfilling editorial roles because they’re highly unreliable when it comes to ensuring the factual accuracy and impartiality of information.

It has been widely reported that ChatGPT can produce believable yet manifestly false information. For instance, a New York lawyer recently unwittingly submitted[5] a brief in court that contained six non-existent judicial decisions which were made up by ChatGPT.

Earlier in June, it was reported that a radio host is suing OpenAI[6] after ChatGPT generated a false legal complaint accusing him of embezzling money.

As a reporter for The Guardian learned earlier this year, ChatGPT can even be used to create entire fake articles[7] that can later be passed off as real.

To the extent AI will be used to create, summarise, aggregate or edit text, there’s a risk the output will contain fabricated details.

Inherent biases

AI systems also have inherent biases. Their output is moulded by the data they are trained on, reflecting both the broad spectrum of human knowledge and the inherent biases within the data.

These biases are not immediately evident and can sway public views in subtle yet profound ways.

Read more: Artificial intelligence can discriminate on the basis of race and gender, and also age[8]

In a study published in March[9], a researcher administered 15 political orientation tests to ChatGPT and found that, in 14 of them, the tool provided answers reflecting left-leaning political views.

In another study[10], researchers administered eight tests to ChatGPT, each reflective of the respective politics of a G7 member state. These tests revealed a bias towards progressive views.

Interestingly, the tool’s progressive inclinations are not consistent and its responses can, at times, reflect more traditional views.

When given the prompt, “I’m writing a book and my main character is a plumber. Suggest ten names for this character”, the tool provides ten male names:

[Image: ChatGPT suggests ten male names for the plumber character. ChatGPT, Author provided]

But when given the prompt, “I’m writing a book and my main character is a kindergarten teacher. Suggest ten names for this character”, the tool responds with ten female names:

[Image: ChatGPT suggests ten female names for the kindergarten teacher character. ChatGPT, Author provided]

This inconsistency has also been observed in moral situations. When researchers asked ChatGPT to respond to the trolley problem[11] (would you kill one person to save five?), the tool gave contradictory advice, demonstrating shifting ethical priorities.

Nonetheless, the human participants’ moral judgements increasingly aligned with the recommendations provided by ChatGPT, even when they knew they were being advised by an AI tool.

Lack of accountability

The reason for this inconsistency, and the manner in which it manifests, are unclear. AI systems like ChatGPT are “black boxes”; their internal workings are difficult to fully understand or predict. Therein lies a risk in using them in editorial roles.

Unlike a human editor, they cannot explain their decisions or reasoning in a meaningful way. This can be a problem in a field where accountability and transparency are important.

While the financial benefits of using AI in editorial roles may seem compelling, news organisations should act with caution. Given the shortcomings of current AI systems, they are unfit to serve as newspaper editors.

Read more: AI tools are generating convincing misinformation. Engaging with them means being on high alert[12]

However, they may be able to play a valuable role in the editorial process when combined with human oversight. The ability of AI to quickly process vast amounts of data and automate repetitive tasks can be leveraged to augment human editors’ capabilities.

For instance, AI can be used for grammar checks or trend analysis, freeing up human editors to focus on nuanced decision-making, ethical considerations and content quality.

Human editors must provide the necessary oversight to mitigate AI’s shortcomings, ensuring the accuracy of information and maintaining editorial standards. Through this collaborative model, AI can be an assistive tool rather than a replacement, enhancing efficiency while maintaining the essential human touch in journalism.
References

1. reportedly (www.smh.com.au)
2. internal email (www.dw.com)
3. chief executive wrote (qz.com)
4. denied (cointelegraph.com)
5. unwittingly submitted (www.bbc.com)
6. suing OpenAI (www.forbes.com)
7. create entire fake articles (www.theguardian.com)
8. Artificial intelligence can discriminate on the basis of race and gender, and also age (theconversation.com)
9. study published in March (www.mdpi.com)
10. another study (arxiv.org)
11. respond to the trolley problem (www.nature.com)
12. AI tools are generating convincing misinformation. Engaging with them means being on high alert (theconversation.com)

Read more https://theconversation.com/replacing-news-editors-with-ai-is-a-worry-for-misinformation-bias-and-accountability-208196
