The Times Australia
The Times World News


AI will soon be able to audit all published research – what will that mean for public trust in science?

  • Written by Alexander Kaurov, PhD Candidate in Science and Society, Te Herenga Waka — Victoria University of Wellington

Self-correction is fundamental to science. One of its most important forms is peer review[1], when anonymous experts scrutinise research before it is published. This helps safeguard the accuracy of the written record.

Yet problems slip through. A range of grassroots and institutional initiatives work to identify problematic papers, strengthen the peer-review process, and clean up the scientific record through retractions or journal closures. But these efforts are imperfect and resource intensive.

Soon, artificial intelligence (AI) will be able to supercharge these efforts. What might that mean for public trust in science?

Peer review isn’t catching everything

In recent decades, the digital age and disciplinary diversification have sparked an explosion in the number of scientific papers being published, the number of journals in existence, and the influence of for-profit publishing[2].

This has opened the doors for exploitation. Opportunistic “paper mills[3]” sell quick publication with minimal review to academics desperate for credentials, while publishers generate substantial profits through huge article-processing fees[4].

Corporations have also seized the opportunity to fund low-quality research and ghostwrite papers intended to distort the weight of evidence, influence public policy and alter public opinion in favour of their products.

These ongoing challenges highlight the insufficiency of peer review as the primary guardian of scientific reliability. In response, efforts have sprung up to bolster the integrity of the scientific enterprise.

Retraction Watch[5] actively tracks withdrawn papers and other academic misconduct. Academic sleuths and initiatives such as Data Colada[6] identify manipulated data and figures.

Investigative journalists expose corporate influence. A new field of meta-science (science of science) attempts to measure the processes of science and to uncover biases and flaws.

Not all bad science has a major impact, but some certainly does. It doesn’t just stay within academia; it often seeps into public understanding and policy.

In a recent investigation[7], we examined a widely cited safety review of the herbicide glyphosate, which appeared to be independent and comprehensive. In reality, documents produced during legal proceedings against Monsanto[8] revealed that the paper had been ghostwritten[9] by Monsanto employees and published in a journal with ties to the tobacco industry[10].

Even after this was exposed, the paper continued to shape citations, policy documents and Wikipedia pages worldwide.

When problems like this are uncovered, they can make their way into public conversations, where they are not necessarily perceived as triumphant acts of self-correction[11]. Rather, they may be taken as proof that something is rotten in the state of science. This “science is broken[12]” narrative undermines public trust[13].

Scientists know that a lot of scientific work is inconsequential, but the public may interpret this differently. Jamillah Knowles & We and AI, CC BY-SA[14][15]

AI is already helping police the literature

Until recently, technological assistance in self-correction was mostly limited to plagiarism detectors. But things are changing. Machine-learning services such as ImageTwin[16] and Proofig[17] now scan millions of figures for signs of duplication, manipulation and AI generation.

Natural language processing tools flag “tortured phrases[18]” – the telltale word salads of paper mills. Bibliometric dashboards, such as Semantic Scholar’s[19], trace whether papers are cited in support or in contradiction.
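The core idea behind tortured-phrase detection can be sketched in a few lines of Python. This is a toy illustration only: it matches a small hand-made list of suspicious paraphrases (modelled on examples reported when the term was coined, such as “counterfeit consciousness” for “artificial intelligence”) against a text. The real tools combine much larger curated lists with statistical language models.

```python
# Toy sketch of tortured-phrase flagging. The phrase list below is a
# tiny illustrative sample, not the lists the actual tools use.
TORTURED_PHRASES = {
    "counterfeit consciousness": "artificial intelligence",
    "profound learning": "deep learning",
    "irregular backwoods": "random forest",
    "flag to commotion": "signal to noise",
}

def flag_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, likely original term) pairs found in text."""
    lowered = text.lower()
    return [
        (phrase, original)
        for phrase, original in TORTURED_PHRASES.items()
        if phrase in lowered
    ]

hits = flag_tortured_phrases(
    "We train a profound learning model and an irregular backwoods classifier."
)
print(hits)
```

A matched phrase is only a signal, not proof: real screening tools surface candidates like these for human investigators, who then check the paper's provenance and data.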

AI – especially agentic, reasoning-capable models increasingly proficient in mathematics[20] and logic – will soon uncover more subtle flaws.

For example, the Black Spatula Project[21] explores the ability of the latest AI models to check published mathematical proofs at scale, automatically identifying algebraic inconsistencies that eluded human reviewers. Our own investigation, mentioned above, also relies substantially on large language models to process large volumes of text.

Given full-text access and sufficient computing power, these systems could soon enable a global audit of the scholarly record. A comprehensive audit will likely find some outright fraud and a much larger mass of routine, journeyman work with garden-variety errors.

We do not yet know how prevalent fraud is, but we do know that a great deal of scientific work is inconsequential. Scientists are well aware of this: it is much discussed that a substantial share of published work is never[22] or only very rarely cited[23].

To outsiders, this revelation may be as jarring as uncovering fraud, because it collides with the image of dramatic, heroic scientific discovery that populates university press releases and trade press treatments.

What might give this audit added weight is its AI author, which may be seen as (and may in fact be) impartial and competent, and therefore reliable.

As a result, these findings will be vulnerable to exploitation in disinformation campaigns, particularly since AI is already being used to that end[24].

Reframing the scientific ideal

Safeguarding public trust requires redefining the scientist’s role in more transparent, realistic terms. Much of today’s research is incremental, career-sustaining work rooted in education, mentorship and public engagement.

If we are to be honest with ourselves and with the public, we must abandon the incentives that pressure universities and scientific publishers, as well as scientists themselves, to exaggerate the significance of their work[25]. Truly ground-breaking work is rare. But that does not render the rest of scientific work useless.

A more humble[26] and honest portrayal of the scientist[27] as a contributor to a collective, evolving understanding will be more robust to AI-driven scrutiny than the myth of science as a parade of individual breakthroughs.

A sweeping, cross-disciplinary audit is on the horizon. It could come from a government watchdog, a think tank, an anti-science group or a corporation seeking to undermine public trust in science.

Scientists can already anticipate what it will reveal. If the scientific community prepares for the findings – or better still, takes the lead – the audit could inspire a disciplined renewal. But if we delay, the cracks it uncovers may be misinterpreted as fractures in the scientific enterprise itself.

Science has never derived its strength from infallibility. Its credibility lies in the willingness to correct and repair. We must now demonstrate that willingness publicly, before trust is broken.

References

  1. ^ peer review (theconversation.com)
  2. ^ for-profit publishing (doi.org)
  3. ^ paper mills (theconversation.com)
  4. ^ huge article-processing fees (arxiv.org)
  5. ^ Retraction Watch (retractionwatch.com)
  6. ^ Data Colada (datacolada.org)
  7. ^ investigation (doi.org)
  8. ^ documents produced during legal proceedings against Monsanto (www.wisnerbaum.com)
  9. ^ revealed that the paper had been ghostwritten (pubmed.ncbi.nlm.nih.gov)
  10. ^ ties to the tobacco industry (link.springer.com)
  11. ^ they are not necessarily perceived as triumphant acts of self-correction (www.nature.com)
  12. ^ science is broken (doi.org)
  13. ^ undermines public trust (doi.org)
  14. ^ Jamillah Knowles & We and AI (betterimagesofai.org)
  15. ^ CC BY-SA (creativecommons.org)
  16. ^ ImageTwin (imagetwin.ai)
  17. ^ Proofig (www.proofig.com)
  18. ^ tortured phrases (www.nature.com)
  19. ^ Semantic Scholar (www.semanticscholar.org)
  20. ^ proficient in mathematics (www.reuters.com)
  21. ^ Black Spatula Project (the-black-spatula-project.github.io)
  22. ^ published work is never (www.nature.com)
  23. ^ very rarely cited (www.science.org)
  24. ^ AI is already being used to that end (www.theguardian.com)
  25. ^ exaggerate the significance of their work (dx.doi.org)
  26. ^ humble (www.scientificamerican.com)
  27. ^ honest portrayal of the scientist (doi.org)

Read more https://theconversation.com/ai-will-soon-be-able-to-audit-all-published-research-what-will-that-mean-for-public-trust-in-science-261363
