Where is the evidence for ERA? Time's up for Australia's research evaluation system
- Written by Ksenia Sawczak, Head, Research and Development, Faculty of Arts and Social Sciences, University of Sydney
Research at Australian universities has been scrutinised through the Australian Research Council’s (ARC) assessment exercise, Excellence in Research for Australia[1], since 2010.
A companion Engagement and Impact Assessment[2] exercise began in 2018. The time and costs for universities[3] of running these exercises are unknown (the ARC collected this information when ERA began but never released it), as is the value they generate for universities, government, industry and the public.
It’s difficult to see how any future versions can be justified without evidence of a healthy return on investment.
Read more: Starting next year, universities have to prove their research has real-world impact[4]
The question of future assessment exercises is now in the spotlight. The ARC recently completed a review of ERA and EIA[5] to “ensure the national research assessments address Australia’s future needs”.
The review’s terms of reference[6] included consideration of “the purpose and value of research evaluation, including how it can further contribute to the Government’s science, research and innovation agendas”. This is important, as no evidence has ever been provided of exactly how the government, industry or community uses assessments for informing agendas.
The review received 112 submissions[7] in response to a consultation paper[8]. Most came from universities, peak bodies/associations and various service providers and consultants. No responses were received from the sectors that supposedly benefit from these exercises, namely government, industry and the community.
Read more: Who cares about university research? The answer depends on its impacts[9]
What are the issues with the system?
A review advisory committee was then appointed to consider key issues and make recommendations to the ARC CEO. The committee readily identified key concerns about how the assessments work, such as rating scales, streamlining and automation, evaluation cycles and eligibility requirements. These matters also came up in university submissions.
But what came through most clearly from universities were the mixed views about the value of assessments as a whole. By extension, there is a question mark over whether they should continue if their utility cannot be clearly demonstrated.
While EIA has been run only once, there have now been four rounds of ERA overseen by four different ministers. Each round has culminated in a detailed national report with a minister’s foreword that consistently focuses on the same two matters:
- ERA results provide assurance of the government’s investment in the research sector
- the results will inform and guide future strategies and investments.
In other words, there has been an overarching focus on justifying the exercise and on its purported utility. But how convincing is this?
Read more: Explainer: how and why is research assessed?[10]
ERA is past its use-by date
In its early days, ERA was credited with playing an important role in focusing university efforts on lifting research performance. Indeed, a number of university submissions to the review acknowledged this.
However, much has changed since then. As university responses noted, new databases and digital tools, greater in-house expertise in data analytics to assess performance, and international benchmarking through university and subject rankings have all dramatically diminished ERA's influence. Universities no longer need an outdated assessment exercise[11] to tell them how they are performing.
As for its actual application, there was a brief time when ERA informed funding allocations under the Sustainable Research Excellence for Universities scheme[12]. It was one of several schemes through which government support for university research was tied to performance. But this was quickly abandoned.
In 2015, with a clear focus on incentivising performance and simplifying funding, the government introduced revised research block grants[14]. In the process, it overlooked the very exercise that identifies research excellence and so ought to inform performance-based funding.
Since then, the best the government has been able to come up with is adding national benchmarking standards for research to the Higher Education Standards Framework[15]. But with the bar set so low and no apparent reward for institutions that perform well above the required standards, barely an eyelid has been batted over this change.
‘Informing’ without evidence of use
Returning to the review committee, its final report[16] of June 2021 acknowledged that the vision and objectives of ERA required rethinking, as these had lost their relevance or had failed. This included the objectives of providing a stocktake of Australian research and identifying emerging research areas and opportunities for development.
But the committee has danced around the issue of ERA’s utility. It issued a lofty vision statement[17]:
“that rigorous and transparent research assessment informs and promotes Australian universities’ pursuit of research that is excellent, engaged with community, industry and government, and delivers social, economic, environmental and cultural impact.”
The ARC has adopted it as part of the ERA and EI Action Plan[18].
The notion of "informing", as a buzzword for influence and utility, has been a consistent feature of ERA. It seems this will continue. The review committee's report contains over 50 references to this idea. And "informing decisions" is to be one of the four objectives taken up by the ARC, specifically to "provide a rich and robust source of information on university excellence and activity to inform and support the needs of university, industry, government and community stakeholders".
But no evidence has ever been provided of ERA’s usefulness to these sectors. This objective rings hollow, particularly in light of the conspicuous absence of industry or government responses to the review.
Read more: Unis want research shared widely. So why don't they properly back academics to do it?[19]
The vanishing link to funding
Of course, the really big question is whether ERA and EI will ever inform research funding. That’s something the ARC has brought up over the years, and possibly the only reason why universities are so compliant.
Curiously, though, the review’s terms of reference did not cover this issue. Perhaps, after 11 years, no one can work this out. Now that would surely represent a very poor return on investment.
References
- ^ Excellence in Research for Australia (www.arc.gov.au)
- ^ Engagement and Impact Assessment (www.arc.gov.au)
- ^ time and costs for universities (blogs.lse.ac.uk)
- ^ Starting next year, universities have to prove their research has real-world impact (theconversation.com)
- ^ review of ERA and EIA (www.arc.gov.au)
- ^ terms of reference (online.flippingbook.com)
- ^ 112 submissions (www.arc.gov.au)
- ^ consultation paper (online.flippingbook.com)
- ^ Who cares about university research? The answer depends on its impacts (theconversation.com)
- ^ Explainer: how and why is research assessed? (theconversation.com)
- ^ outdated assessment exercise (melbourne-cshe.unimelb.edu.au)
- ^ Sustainable Research Excellence for Universities scheme (www.dese.gov.au)
- ^ Wayback Machine archives (web.archive.org)
- ^ research block grants (www.dese.gov.au)
- ^ Higher Education Standards Framework (www.teqsa.gov.au)
- ^ final report (www.arc.gov.au)
- ^ vision statement (www.arc.gov.au)
- ^ ERA and EI Action Plan (online.flippingbook.com)
- ^ Unis want research shared widely. So why don't they properly back academics to do it? (theconversation.com)
- ^ Shutterstock (www.shutterstock.com)