The Times Australia

Replacing news editors with AI is a worry for misinformation, bias and accountability

  • Written by Uri Gal, Professor in Business Information Systems, University of Sydney

Germany’s best-selling newspaper, Bild, is reportedly[1] adopting artificial intelligence (AI) to replace certain editorial roles, in an effort to cut costs.

In a leaked internal email[2] sent to staff on June 19, the paper’s publisher, Axel Springer, said it would “unfortunately part with colleagues who have tasks that will be replaced by AI and/or processes in the digital world. The functions of editorial directors, page editors, proofreaders, secretaries, and photo editors will no longer exist as they do today”.

The email follows a February memo in which Axel Springer’s chief executive wrote[3] that the paper would transition to a “purely digital media company”, and that “artificial intelligence has the potential to make independent journalism better than it ever was – or simply replace it”.

Bild has subsequently denied[4] editors will be directly replaced with AI, saying the staff cuts are due to restructuring, and AI will only “support” journalistic work rather than replace it.

Nevertheless, these developments raise the question: how will the main pillars of editorial work – judgement, accuracy, accountability and fairness – fare amid the rising tide of AI?

Entrusting editorial responsibilities to AI, whether now or in the future, carries serious risks, both because of the nature of AI and the importance of the role of newspaper editors.

The importance of editors

Editors hold a position of immense significance in democracies, tasked with selecting, presenting and shaping news stories in a way that informs and engages the public, serving as a crucial link between events and public understanding.

Their role is pivotal in determining what information is prioritised and how it’s framed, thereby guiding public discourse and opinion. Through their curation of news, editors highlight key societal issues, provoke discussion, and encourage civic participation.

They help to ensure government actions are scrutinised and held to account, contributing to the system of checks and balances that’s foundational to a functioning democracy.

What’s more, editors maintain the quality of information delivered to the public by mitigating the propagation of biased viewpoints and limiting the spread of misinformation, which is particularly vital in the current digital age.

AI is highly unreliable

Current AI systems, such as ChatGPT, are incapable of adequately fulfilling editorial roles because they’re highly unreliable when it comes to ensuring the factual accuracy and impartiality of information.

It has been widely reported that ChatGPT can produce believable yet manifestly false information. For instance, a New York lawyer recently unwittingly submitted[5] a brief in court that contained six non-existent judicial decisions fabricated by ChatGPT.

Earlier in June, it was reported that a radio host is suing OpenAI[6] after ChatGPT generated a false legal complaint accusing him of embezzling money.

As a reporter for The Guardian learned earlier this year, ChatGPT can even be used to create entire fake articles[7] that could later be passed off as real.

To the extent AI will be used to create, summarise, aggregate or edit text, there’s a risk the output will contain fabricated details.

Inherent biases

AI systems also have inherent biases. Their output is moulded by the data they are trained on, reflecting both the broad spectrum of human knowledge and the inherent biases within the data.

These biases are not immediately evident and can sway public views in subtle yet profound ways.

Read more: Artificial intelligence can discriminate on the basis of race and gender, and also age[8]

In a study published in March[9], a researcher administered 15 political orientation tests to ChatGPT and found that, in 14 of them, the tool provided answers reflecting left-leaning political views.

In another study[10], researchers administered eight tests to ChatGPT, each reflecting the political context of one of the G7 member states. These tests revealed a bias towards progressive views.

Interestingly, the tool’s progressive inclinations are not consistent and its responses can, at times, reflect more traditional views.

When given the prompt, “I’m writing a book and my main character is a plumber. Suggest ten names for this character”, the tool provides ten male names:

[Screenshot of ChatGPT's response. ChatGPT, Author provided]

But when given the prompt, “I’m writing a book and my main character is a kindergarten teacher. Suggest ten names for this character”, the tool responds with ten female names:

[Screenshot of ChatGPT's response. ChatGPT, Author provided]

This inconsistency has also been observed in moral situations. When researchers asked ChatGPT to respond to the trolley problem[11] (would you kill one person to save five?), the tool gave contradictory advice, demonstrating shifting ethical priorities. Nonetheless, the human participants’ moral judgements increasingly aligned with the recommendations provided by ChatGPT, even when they knew they were being advised by an AI tool.

Lack of accountability

The reason for this inconsistency and the manner in which it manifests are unclear. AI systems like ChatGPT are “black boxes”; their internal workings are difficult to fully understand or predict. Therein lies a risk in using them in editorial roles. Unlike a human editor, they cannot explain their decisions or reasoning in a meaningful way. This can be a problem in a field where accountability and transparency are important.

While the financial benefits of using AI in editorial roles may seem compelling, news organisations should act with caution. Given the shortcomings of current AI systems, they are unfit to serve as newspaper editors.

Read more: AI tools are generating convincing misinformation. Engaging with them means being on high alert[12]

However, they may be able to play a valuable role in the editorial process when combined with human oversight. The ability of AI to quickly process vast amounts of data, and automate repetitive tasks, can be leveraged to augment human editors’ capabilities. For instance, AI can be used for grammar checks or trend analysis, freeing up human editors to focus on nuanced decision-making, ethical considerations, and content quality.

Human editors must provide the necessary oversight to mitigate AI’s shortcomings, ensuring the accuracy of information and maintaining editorial standards. Through this collaborative model, AI can be an assistive tool rather than a replacement, enhancing efficiency while maintaining the essential human touch in journalism.
References

1. reportedly (www.smh.com.au)
2. internal email (www.dw.com)
3. chief executive wrote (qz.com)
4. denied (cointelegraph.com)
5. unwittingly submitted (www.bbc.com)
6. suing OpenAI (www.forbes.com)
7. create entire fake articles (www.theguardian.com)
8. Artificial intelligence can discriminate on the basis of race and gender, and also age (theconversation.com)
9. study published in March (www.mdpi.com)
10. another study (arxiv.org)
11. respond to the trolley problem (www.nature.com)
12. AI tools are generating convincing misinformation. Engaging with them means being on high alert (theconversation.com)

Read more https://theconversation.com/replacing-news-editors-with-ai-is-a-worry-for-misinformation-bias-and-accountability-208196
