Name (Individual/Organisation)
Adrian Barnett
Responses
Q8. With respect to ERA and EI:
(a) Do you believe there is a need for a highly rigorous, retrospective excellence and impact assessment exercise, particularly in the absence of a link to funding?
(b) What other evaluation measures or approaches (e.g. data driven approaches) could be deployed to inform research standards and future academic capability that are relevant to all disciplines, without increasing the administrative burden?
(c) Should the ARC Act be amended to reference a research quality, engagement and impact assessment function, however conducted?
(d) If so, should that reference include the function of developing new methods in research assessment and keeping up with best practice and global insights?
A retrospective assessment of excellence and impact could be useful for reducing research waste and encouraging high-quality research.
Data-driven approaches are relatively cheap and could replicate the previous ERA benchmarks at a lower cost [1]. However, counts of publications and citations are a poor proxy for research quality. These easy-to-measure numbers do not capture what really matters, which is the benefits that research creates for people, society, and the economy. Citations have also been gamed by researchers and journals. Counts of publications and citations are a perfect example of Goodhart's Law: when a measure becomes a target, it ceases to be a good measure.
Measuring excellence in research is complex and can only be achieved by expert reviewers.
An ERA system with a low administrative burden that would still collect valuable data would be to randomly audit a small percentage of funded grants and/or publications for each Australian university and research institute. Grants and/or publications that were randomly selected would be assigned expert reviewers from the relevant field. These experts would review the outcomes of the funded work and examine the impact it made across a wide range of areas, including industry, the economy, society, and knowledge. This audit would also reveal projects that had failed to deliver, such as those that did not produce any reports or publications.
Random samples cannot be gamed by universities, as they do not select which research is put forward. Rather, universities would need to consider the quality of all the research they produce. A large enough sample would be a fair and accurate reflection of a university's research quality. This approach would identify where our universities were truly excellent and where they need to improve.
By examining the impact of the work, this audit would focus on research quality over quantity. Reviewers would not consider poor proxies for quality, such as the journal impact factor. Previous systems of research excellence have either relied on poor proxies (such as citations and journal rankings) or reviewed too much information, leading to reviewer burn-out and unreliable decisions. A random sample allows a relatively small amount of research to be reviewed in detail.
Audits require funding for the expert reviewers, but have no administrative burden for universities and researchers. Australia currently spends almost nothing on the quality control of research. Random audits are a widely used quality control measure in multiple other industries, including agriculture, food production and retail, as they are a proven method for ensuring high standards.
Our group’s research has estimated that auditing just 2% of research outputs would be sufficient to encourage good research practice across the entire system [2].
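The intuition behind a low audit rate being sufficient can be shown with a back-of-envelope probability calculation. This is an illustrative sketch only, not the simulation model from reference [2], and the figure of 10 outputs per year is a hypothetical assumption:

```python
# Illustrative sketch (not the Barnett et al. 2018 model): the chance a
# research group is audited at least once when 2% of outputs are
# randomly sampled each year.
audit_rate = 0.02      # assumed annual sampling fraction (2%)
outputs_per_year = 10  # hypothetical group output, for illustration

# Probability of at least one audit in a single year
p_one_year = 1 - (1 - audit_rate) ** outputs_per_year

# Probability of at least one audit over five years
p_five_years = 1 - (1 - audit_rate) ** (outputs_per_year * 5)

print(f"one year:   {p_one_year:.1%}")    # roughly 18%
print(f"five years: {p_five_years:.1%}")  # roughly 64%
```

Even at a 2% sampling rate, a moderately productive group faces a substantial chance of being audited within a few years, which is the deterrence mechanism the audit relies on.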
References:
1. COKI (2022) Automating ERA Benchmarks System Technical Report.
2. Barnett AG, Zardo P, Graves N (2018) Randomly auditing research labs could be an affordable way to improve research quality: A simulation study. PLoS ONE 13(4): e0195613.
Submission received
14 December 2022
Publishing statement
Yes, I would like my submission to be published and my name and/or the name of the organisation to be published alongside the submission. Your submission will need to meet government accessibility requirements.