The Times Australia
The Times World News
AI can be a powerful tool for scientists. But it can also fuel research misconduct

  • Written by Jon Whittle, Director, Data61, CSIRO

In February this year, Google announced[1] it was launching “a new AI system for scientists”. It said this system was a collaborative tool designed to help scientists “in creating novel hypotheses and research plans”.

It’s too early to tell just how useful this particular tool will be to scientists. But what is clear is that artificial intelligence (AI) more generally is already transforming science.

Last year, for example, computer scientists won the Nobel Prize in Chemistry for developing an AI model to predict the shape of every protein known to science. The chair of the Nobel Committee, Heiner Linke, described the AI system[2] as the achievement of a "50-year-old dream", solving a notoriously difficult problem that had eluded scientists since the 1970s.

But while AI is allowing scientists to make technological breakthroughs that are otherwise decades away or out of reach entirely, there’s also a darker side to the use of AI in science: scientific misconduct is on the rise.

AI makes it easy to fabricate research

Academic papers can be retracted if their data or findings are found to be no longer valid. This can happen because of data fabrication, plagiarism or human error.

Paper retractions are increasing exponentially[3], passing 10,000 in 2023. These retracted papers were cited over 35,000 times.

One study[4] found 8% of Dutch scientists admitted to serious research fraud, double the rate previously reported. Biomedical paper retractions have quadrupled in the past 20 years[5], the majority due to misconduct.

AI has the potential to make this problem even worse.

For example, the availability and increasing capability of generative AI programs such as ChatGPT makes it easy to fabricate research.

This was clearly demonstrated by two researchers who used AI to generate 288 complete, entirely fake academic finance papers[6] predicting stock returns.

While this was an experiment to show what’s possible, it’s not hard to imagine how the technology could be used[7] to generate fictitious clinical trial data, modify gene editing experimental data to conceal adverse results or for other malicious purposes.

Fake references and fabricated data

There are already many reported cases[8] of AI-generated papers passing peer review and reaching publication, only to be retracted later on the grounds of undisclosed use of AI. Some contained serious flaws, such as fake references and deliberately fabricated data.

Some researchers are also using AI to review their peers’ work. Peer review of scientific papers is one of the fundamentals of scientific integrity. But it’s also incredibly time-consuming, with some scientists devoting hundreds of hours a year of unpaid labour. A Stanford-led study[9] found that up to 17% of peer reviews for top AI conferences were written at least in part by AI.

In the extreme case, AI may end up writing research papers, which are then reviewed by another AI.

This risk is worsening the already problematic trend of an exponential increase[10] in scientific publishing, while the average amount of genuinely new and interesting material in each paper has been declining[11].

AI can also lead to unintentional fabrication of scientific results.

A well-known problem with generative AI systems is that they sometimes make up an answer rather than admitting they don't know. This is known as "hallucination".

We don’t know the extent to which AI hallucinations end up as errors in scientific papers. But a recent study[12] on computer programming found that 52% of AI-generated answers to coding questions contained errors, and human oversight failed to correct them 39% of the time.

AI is allowing scientists to make technological breakthroughs that are otherwise decades away or out of reach entirely. But it also comes with risks. MikeDotta/Shutterstock[13]

Maximising the benefits, minimising the risks

Despite these worrying developments, we shouldn't get carried away and discourage, or even condemn, the use of AI by scientists.

AI offers significant benefits to science. Researchers have used specialised AI models to solve scientific problems for many years. And generative AI models such as ChatGPT offer the promise of general-purpose AI scientific assistants that can carry out a range of tasks, working collaboratively with the scientist.

These AI models can be powerful lab assistants[14]. For example, researchers at CSIRO are already developing AI lab robots that scientists can speak with and instruct like a human assistant to automate repetitive tasks.

A disruptive new technology will always have benefits and drawbacks. The challenge for the science community is to put appropriate policies and guardrails in place to ensure we maximise the benefits and minimise the risks.

AI’s potential to change the world of science and to help science make the world a better place is already proven. We now have a choice.

Do we embrace AI by advocating for and developing an AI code of conduct that enforces ethical and responsible use of AI in science? Or do we take a backseat and let a relatively small number of rogue actors discredit our fields and make us miss the opportunity?

References

  1. ^ Google announced (blog.google)
  2. ^ described the AI system (www.nobelprize.org)
  3. ^ Paper retractions are increasing exponentially (www.nature.com)
  4. ^ One study (www.science.org)
  5. ^ Biomedical paper retractions have quadrupled in the past 20 years (www.nature.com)
  6. ^ generate 288 complete fake academic finance papers (papers.ssrn.com)
  7. ^ could be used (pmc.ncbi.nlm.nih.gov)
  8. ^ many reported cases (www.nature.com)
  9. ^ Stanford-led study (dl.acm.org)
  10. ^ exponential increase (arxiv.org)
  11. ^ has been declining (www.nature.com)
  12. ^ recent study (arxiv.org)
  13. ^ MikeDotta/Shutterstock (www.shutterstock.com)
  14. ^ powerful lab assistants (www.youtube.com)

Read more https://theconversation.com/ai-can-be-a-powerful-tool-for-scientists-but-it-can-also-fuel-research-misconduct-246410
