AI-assisted writing is quietly booming in academic journals. Here’s why that’s OK

  • Written by Julian Koplin, Lecturer in Bioethics, Monash University & Honorary fellow, Melbourne Law School, Monash University

If you search Google Scholar for the phrase “as an AI language model[1]”, you’ll find plenty of AI research literature and also some rather suspicious results. For example, one paper[2] on agricultural technology says:

As an AI language model, I don’t have direct access to current research articles or studies. However, I can provide you with an overview of some recent trends and advancements …

Obvious gaffes like this aren’t the only signs that researchers are increasingly turning to generative AI tools when writing up their research. A recent study[3] examined the frequency of certain words in academic writing (such as “commendable”, “meticulously” and “intricate”), and found they became far more common after the launch of ChatGPT – so much so that 1% of all journal articles published in 2023 may have contained AI-generated text.
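To make the method concrete, here is a minimal sketch (in Python) of the kind of before-and-after word-frequency comparison the study describes. It is an illustration rather than the study's actual pipeline, and the directory names and file layout are hypothetical: assume each corpus is simply a folder of plain-text abstracts.

    # Count how often telltale "AI-flavoured" words appear in two corpora of
    # abstracts: one from before ChatGPT's release, one from after.
    from collections import Counter
    from pathlib import Path
    import re

    MARKER_WORDS = {"commendable", "meticulously", "intricate"}

    def marker_rate(corpus_dir: str) -> float:
        """Occurrences of marker words per 10,000 tokens across all .txt files."""
        total_tokens = 0
        marker_hits = 0
        for path in Path(corpus_dir).glob("*.txt"):
            tokens = re.findall(r"[a-z]+", path.read_text(encoding="utf-8").lower())
            counts = Counter(tokens)
            total_tokens += len(tokens)
            marker_hits += sum(counts[w] for w in MARKER_WORDS)
        return 10_000 * marker_hits / total_tokens if total_tokens else 0.0

    # Hypothetical corpus folders; a jump in the second figure is the signal
    # the study relied on.
    print("pre-ChatGPT rate :", marker_rate("abstracts_pre_2023"))
    print("post-ChatGPT rate:", marker_rate("abstracts_2023"))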

(Why do AI models overuse these words? There is speculation[4] it’s because they are more common in English as spoken in Nigeria, where key elements of model training often occur.)

The same study also examined preliminary data from 2024, which suggest AI writing assistance is only becoming more common. Is this a crisis for modern scholarship, or a boon for academic productivity?

Who should take credit for AI writing?

Many people are worried by the use of AI in academic papers. Indeed, the practice has been described as “contaminating[5]” scholarly literature.

Some argue that using AI output amounts to plagiarism. If your ideas are copy-pasted from ChatGPT, it is questionable whether you really deserve credit for them.

But there are important differences between “plagiarising” text authored by humans and text authored by AI. Those who plagiarise humans’ work receive credit for ideas that ought to have gone to the original author.

By contrast, it is debatable whether AI systems like ChatGPT can have ideas, let alone deserve credit for them. An AI tool is more like[6] your phone’s autocomplete function than a human researcher.

The question of bias

Another worry is that AI outputs might be biased in ways that could seep into the scholarly record. Infamously, older language models tended to portray[7] people who are female, black and/or gay in distinctly unflattering ways, compared with people who are male, white and/or straight.

This kind of bias is less pronounced[8] in the current version of ChatGPT.

However, other studies have found a different kind[9] of bias[10] in ChatGPT and other large language models[11]: a tendency to reflect a left-liberal political ideology.

Any such bias could subtly distort scholarly writing produced using these tools.

The hallucination problem

The most serious worry relates to a well-known limitation of generative AI systems: that they often make serious mistakes.

For example, when I asked ChatGPT-4 to generate an ASCII image of a mushroom, it provided me with the following output.

   .--'|
   /___^ |     .--.
       ) |    /    \
      / |   |      |
     |   `-._\    /
     \        `~~`
      `-..._____.-`

It then confidently told me I could use this image of a “mushroom” for my own purposes.

These kinds of overconfident mistakes have been referred to as “AI hallucinations[12]” and “AI bullshit[13]”. While it is easy to spot that the above ASCII image looks nothing like a mushroom (and quite a bit like a snail), it may be much harder to identify any mistakes ChatGPT makes when surveying scientific literature[14] or describing the state of a philosophical debate.

Unlike (most) humans, AI systems are fundamentally unconcerned with the truth of what they say. If used carelessly, their hallucinations could corrupt the scholarly record.

Should AI-produced text be banned?

One response to the rise of text generators has been to ban them outright. For example, Science – one of the world’s most influential academic journals – disallows any use of AI-generated text[15].

I see two problems with this approach.

The first problem is a practical one: current tools for detecting AI-generated text are highly unreliable. This includes the detector created by ChatGPT’s own developers, which was taken offline[16] after it was found to have only a 26% accuracy rate (and a 9% false positive rate[17]). Humans also make mistakes[18] when assessing whether something was written by AI.
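To see why these error rates matter, here is a rough back-of-the-envelope calculation (mine, not the article's) of how trustworthy a flag from such a detector would be. It treats the 26% figure as the detector's true-positive rate, which is an interpretive assumption, and reuses the study's estimate that roughly 1% of 2023 papers contained AI-generated text as a base rate.

    # Bayes' rule: how likely is a flagged paper to actually contain AI text?
    sensitivity = 0.26      # P(flagged | paper contains AI-generated text) - "26% accuracy"
    false_positive = 0.09   # P(flagged | paper is entirely human-written)
    base_rate = 0.01        # assumed share of papers containing AI-generated text

    p_flagged = sensitivity * base_rate + false_positive * (1 - base_rate)
    p_ai_given_flag = sensitivity * base_rate / p_flagged

    print(f"P(AI-generated | flagged) = {p_ai_given_flag:.1%}")  # roughly 2.8%

On these assumptions, the large majority of flagged papers would be false alarms.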

It is also possible to circumvent AI text detectors. Online communities are actively exploring[19] how to prompt ChatGPT in ways that allow the user to evade detection. Human users can also superficially rewrite AI outputs, effectively scrubbing away the traces of AI (like its overuse of the words “commendable”, “meticulously” and “intricate”).

The second problem is that banning generative AI outright prevents us from realising these technologies’ benefits. Used well, generative AI can boost academic productivity[20] by streamlining the writing process. In this way, it could help further human knowledge. Ideally, we should try to reap these benefits while avoiding the problems.

The problem is poor quality control, not AI

The most serious problem with AI is the risk of introducing unnoticed errors, leading to sloppy scholarship. Instead of banning AI, we should try to ensure that mistaken, implausible or biased claims cannot make it onto the academic record.

After all, humans can also produce writing with serious errors, and mechanisms such as peer review often fail[21] to prevent its publication.

We need to get better at ensuring academic papers are free from serious mistakes, regardless of whether these mistakes are caused by careless use of AI or sloppy human scholarship. Not only is this more achievable than policing AI usage, it will improve the standards of academic research as a whole.

This would be (as ChatGPT might say) a commendable and meticulously intricate solution.

References

  1. ^ as an AI language model (twitter.com)
  2. ^ paper (journals.ekb.eg)
  3. ^ recent study (arxiv.org)
  4. ^ speculation (www.theguardian.com)
  5. ^ contaminating (arxiv.org)
  6. ^ more like (theconversation.com)
  7. ^ tended to portray (arxiv.org)
  8. ^ less pronounced (www.nature.com)
  9. ^ kind (arxiv.org)
  10. ^ bias (www.mdpi.com)
  11. ^ other large language models (www.maximumtruth.org)
  12. ^ AI hallucinations (theconversation.com)
  13. ^ AI bullshit (blog.practicalethics.ox.ac.uk)
  14. ^ surveying scientific literature (time.com)
  15. ^ any use of AI-generated text (www.science.org)
  16. ^ taken offline (arstechnica.com)
  17. ^ 9% false positive rate (openai.com)
  18. ^ make mistakes (www.nature.com)
  19. ^ actively exploring (www.youtube.com)
  20. ^ boost academic productivity (www.nature.com)
  21. ^ often fail (www.routledge.com)

Read more https://theconversation.com/ai-assisted-writing-is-quietly-booming-in-academic-journals-heres-why-thats-ok-229416
