How AI can undermine peer review

  • Written by Timothy Hugh Barker, Senior Research Fellow, School of Public Health, University of Adelaide

Earlier this year I received comments on an academic manuscript of mine as part of the usual peer review process, and noticed something strange.

My research focuses on ensuring trustworthy evidence is used to inform policy, practice and decision making. I often collaborate with groups like the World Health Organization to conduct systematic reviews to inform clinical and public health guidelines or policy. The paper I had submitted for peer review was about systematic review conduct.

What I noticed raised my concerns about the growing role artificial intelligence (AI) is playing in the scientific process.

A service to the community

Peer review is fundamental to academic publishing, ensuring research is rigorously critiqued prior to publication and dissemination. In this process researchers submit their work to a journal where editors invite expert peers to provide feedback. This benefits all involved.

For peer reviewers, the work is viewed favourably when they apply for funding or promotion, as it is seen as a service to the community. For researchers, it challenges them to refine their methodologies, clarify their arguments, and address weaknesses to prove their work is worthy of publication. For the public, peer review ensures that the findings of research are trustworthy.

Even at first glance the comments I received on my manuscript in January this year seemed odd.

First, the tone was far too uniform and generic. There was also an unexpected lack of nuance, depth or personality. And the reviewer had provided no page or line numbers and no specific examples of what needed to be improved to guide my revisions.

For example, they suggested I “remove redundant explanations”. However, they didn’t indicate which explanations were redundant, or even where they occurred in the manuscript.

They also suggested I order my reference list in a bizarre manner that disregarded the journal's requirements and followed no format I have seen in any scientific journal. They provided comments pertaining to subheadings that didn't exist.

And although the journal required no “discussion” section, the peer reviewer had provided the following suggestion to improve my non-existent discussion: “Addressing future directions for further refinement of [the content of the paper] would enhance the paper’s forward-looking perspective”.

The output from ChatGPT about the manuscript was similar to the comments from a peer reviewer. Diego Thomazini/Shutterstock[1]

Testing my suspicions

To test my suspicion that the review was, at least in part, written by AI, I uploaded my own manuscript to three AI models – ChatGPT-4o, Gemini 1.5 Pro and DeepSeek-V3. I then compared the comments from the peer review with the models' output.

For example, the comment from the peer reviewer regarding the abstract read:

Briefly address the broader implications of [main output of paper] for systematic review outcomes to emphasise its importance.

The output from ChatGPT-4o regarding the abstract read:

Conclude with a sentence summarising the broader implications or potential impact [main output of paper] on systematic reviews or evidence-based practice.

The comment from the peer reviewer regarding the methods read:

Methodological transparency is commendable, with detailed documentation of the [process we undertook] and the rationale behind changes. Alignment with [gold standard] reporting requirements is a strong point, ensuring compatibility with current best practices.

The output from ChatGPT-4o regarding the methods read:

Clearly describes the process of [process we undertook], ensuring transparency in methodology. Emphasises the alignment of the tool with [gold standard] guidelines, reinforcing methodological rigour.
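The comparison above was done by reading the texts side by side. For readers who want to make that kind of check more systematic, the sketch below is a purely illustrative, hypothetical example and not the method used here: it assumes a reviewer comment and a model's output have been copied into plain-text strings, and uses Python's standard-library difflib to score their surface similarity.

```python
from difflib import SequenceMatcher

# Hypothetical example texts: a peer reviewer's comment and an AI model's
# output about the same section of a manuscript (placeholders, not the
# actual texts discussed in this article).
reviewer_comment = (
    "Briefly address the broader implications of the tool "
    "for systematic review outcomes to emphasise its importance."
)
model_output = (
    "Conclude with a sentence summarising the broader implications "
    "or potential impact of the tool on systematic reviews "
    "or evidence-based practice."
)

# SequenceMatcher.ratio() returns a value between 0 and 1 based on the
# longest matching subsequences. It measures surface overlap only; it
# cannot tell whether a text was actually machine-generated.
score = SequenceMatcher(None, reviewer_comment.lower(), model_output.lower()).ratio()
print(f"Surface similarity: {score:.2f}")
```

A high score would not prove a review was machine-generated, but it can flag passages that deserve a closer look.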

But the biggest red flag was the difference between the peer reviewer's feedback and the feedback of the associate editor at the journal I had submitted my manuscript to. Where the associate editor's feedback was clear, instructive and helpful, the peer reviewer's feedback was vague, confusing, and did nothing to improve my work.

I expressed my concerns directly to the editor-in-chief. To their credit, they thanked me immediately for flagging the issues and for documenting my investigation, which, they said, was “concerning and revealing”.

The feedback about the manuscript from the journal's associate editor was clear, instructive and helpful. Mikhail Nilov/Pexels[2]

Careful oversight is needed

I do not have definitive proof the peer review of my manuscript was AI-generated. But the similarities between the comments left by the peer reviewer and the output from the AI models were striking.

AI models make research faster, easier and more accessible[3]. However, their use as a tool to assist in peer review requires careful oversight: current guidance on AI use in peer review is mixed[4], and its effectiveness is unclear[5].

If AI models are to be used in peer review, authors have the right to be informed and given the option to opt out. Reviewers also need to disclose the use of AI in their review. However, enforcement remains an issue; it falls to journals and editors to ensure that peer reviewers who use AI models inappropriately are flagged.

I submitted my research for “expert” review by my peers in the field, yet received AI-generated feedback that ultimately failed to improve my work. Had I accepted these comments without question – and if the associate editor had not provided such exemplary feedback – there is every chance this could have gone unnoticed.

My work might have been accepted for publication without being properly scrutinised, and disseminated to the public as “fact” corroborated by my peers, despite my peers not actually reviewing the work themselves.

References

  1. ^ Diego Thomazini/Shutterstock (www.shutterstock.com)
  2. ^ Mikhail Nilov/Pexels (www.pexels.com)
  3. ^ AI models make research faster, easier and more accessible (www.nature.com)
  4. ^ mixed (pmc.ncbi.nlm.nih.gov)
  5. ^ unclear (pmc.ncbi.nlm.nih.gov)

Read more https://theconversation.com/vague-confusing-and-did-nothing-to-improve-my-work-how-ai-can-undermine-peer-review-251040
