Assessment in the age of AI – unis must do more than tell students what not to do

  • Written by Thomas Corbin, Research Fellow, Centre for Research in Assessment and Digital Learning, Deakin University

In less than three years, artificial intelligence technology has radically changed the assessment landscape. In this time, universities have taken various approaches, from outright banning the use of generative AI, to allowing it in some circumstances, to allowing AI by default.

But some university teachers and students have reported[1] they remain confused and anxious, unsure about what counts as “appropriate use” of AI. This has been accompanied by concerns AI is facilitating a rise in cheating[2].

There is also a broader question about the value of university degrees[3] today if AI is used in student assessments.

In a new journal article[4], we examine current approaches to AI and assessment and ask: how should universities assess students in the age of AI?

Read more: Researchers created a chatbot to help teach a university law class – but the AI kept messing up[5]

Why ‘assessment validity’ matters

Universities have responded to the emergence of generative AI with various policies aimed at clarifying what is allowed and what is not.

For example, the United Kingdom’s University of Leeds set up a “traffic light[6]” framework for when AI tools can be used in assessment: red means no AI, amber allows limited use, green encourages it.

Under this framework, a “red” rating on a traditional essay would indicate to students it should be written without any AI assistance at all. An “amber” rating might allow AI use for “idea generation” but not for the writing itself. A “green” rating would permit students to use AI in any way they choose.

To help ensure students comply with these rules, many institutions, such as the University of Melbourne[7], require students to declare their use of AI in a statement attached to submitted assessments.

The aim in these and similar cases is to preserve “assessment validity[8]”. This refers to whether the assessment is measuring what we think it is measuring. Is it assessing students’ actual capabilities or learning? Or how well they use the AI? Or how much they paid to use it?

But we argue setting clear rules is not enough to maintain assessment validity.

Our paper

In a new peer-reviewed paper[9], we present a conceptual argument for how universities and schools can better approach AI in assessments.

We begin by making the distinction between two approaches to AI and assessment:

  • discursive changes: these modify only the instructions or rules around an assessment. To work, they rely on students understanding and voluntarily following directions.

  • structural changes: these modify the task itself. They constrain or enable behaviours by design, not by directive.

For example, telling students “you may only use AI to edit your take-home essay” is a discursive change. Changing an assessment task to include a sequence of in-class writing tasks where development is observed over time is a structural change.

Telling a student not to use AI tools when writing computer code is discursive. Building a live, assessed conversation about the choices the student has made into the task is structural.

A reliance on changing the rules

In our paper, we argue most university responses to date (including traffic light frameworks and student declarations) have been discursive. They have only changed the rules around what is or isn’t allowed. They haven’t modified the assessments themselves.

We suggest only structural changes can reliably protect validity in a world where AI use means rule-breaking is increasingly undetectable[10].

So we need to change the task

In the age of generative AI, if we want assessments to be valid and fair, we need structural change.

Structural change means designing assessments where validity is embedded in the task itself, not outsourced to rules or student compliance.

This won’t look the same in every discipline and it won’t be easy. In some cases, it may require assessing students in very different ways from the past. But we can’t avoid the challenge by just telling students what to do and hoping for the best.

If assessment is to retain its function as a meaningful claim about student capability, it must be rethought at the level of design.

References

  1. ^ have reported (www.tandfonline.com)
  2. ^ rise in cheating (www.theguardian.com)
  3. ^ value of university degrees (www.theguardian.com)
  4. ^ new journal article (www.tandfonline.com)
  5. ^ Researchers created a chatbot to help teach a university law class – but the AI kept messing up (theconversation.com)
  6. ^ traffic light (generative-ai.leeds.ac.uk)
  7. ^ University of Melbourne (students.unimelb.edu.au)
  8. ^ assessment validity (www.tandfonline.com)
  9. ^ new peer-reviewed paper (www.tandfonline.com)
  10. ^ increasingly undetectable (arxiv.org)

Read more https://theconversation.com/assessment-in-the-age-of-ai-unis-must-do-more-than-tell-students-what-not-to-do-257469
