Name (Individual/Organisation)
Anonymous #35
Responses
Q5. Please provide suggestions on how the ARC, researchers and universities can better preserve and strengthen the social licence for public funding of research?
The purpose of the National Interest Test (NIT) statement is not clear. If it is intended to demonstrate the value of the research beyond academia, it should be expanded into a full “pathways to impact” section and impact plan. As it stands, many successful NIT statements (particularly within the Discovery scheme) contain unsubstantiated claims, and there is no mechanism for evaluating them.
If, on the other hand, the NIT is intended to supply the equivalent of a press release, it should not inform any consideration of whether or not the research itself is worth funding. If a plain-English summary is required for communications and promotional purposes, this requirement should apply only to projects that have already been selected for funding, perhaps as a condition of releasing the funds.
Q8. With respect to ERA and EI:
(a) Do you believe there is a need for a highly rigorous, retrospective excellence and impact assessment exercise, particularly in the absence of a link to funding?
(b) What other evaluation measures or approaches (e.g. data driven approaches) could be deployed to inform research standards and future academic capability that are relevant to all disciplines, without increasing the administrative burden?
(c) Should the ARC Act be amended to reference a research quality, engagement and impact assessment function, however conducted?
(d) If so, should that reference include the function of developing new methods in research assessment and keeping up with best practice and global insights?
a) We believe that the assessment should be linked to funding, either as a determinant of block grant distribution (as in the UK), or administered via the National Competitive Grants Program (NCGP). It is appropriate for the ARC to require a demonstrated return on investment, but to do so it must first establish a direct link to the investment. This would build in evaluation and accountability without the need for a separate “assessment exercise” as such.
b) There is no reliable quantitative measure of relative “quality”. Citation counts are problematic,* even when heavily moderated, and do not apply equally across all disciplines. Rather than relying on bibliometrics, we might start from the premise that peer review already serves the function of quality control within the academy. It could in fact be argued that easing the quantitative pressure (numbers of outputs, numbers of citations) would result in publications, and indeed in research, of higher quality.
The success of a given project or program should be evaluated on its own terms. That is, researchers determine the relevant success measures, whether they relate to impact (money saved, injuries prevented, policy reformed, product improved) or the pure pursuit of knowledge (hypothesis proved, text deciphered, galaxy mapped). Measures should be achievable and sustainable, and the project's success assessed against the extent to which it realised its intended benefits. This could be administered via the NCGP (see Q9 below) using a combination of impact/knowledge-transfer plans and systematic, longitudinal post-award evaluation.
c) Yes, the ARC Act should be amended to reference a research quality, engagement and impact function, provided it will be linked in some way to the distribution of Federal funding for research.
d) Yes, the development of research standards should be informed by international practice.
*References:
- Collyer FM. 2018. 'Global patterns in the publishing of academic knowledge: global North, global South'. Current Sociology 66(1), 56-73.
- Davies SW et al. 2021. 'Promoting inclusive metrics of success and impact to dismantle a discriminatory reward system in science'. PLoS Biology 19(6), e3001282.
- Rowlands I. 2018. 'What are we measuring? Refocusing on some fundamentals in the age of desktop bibliometrics'. FEMS Microbiology Letters 365(8), fny059.
Q9. With respect to the ARC’s capability to evaluate research excellence and impact:
(a) How can the ARC best use its expertise and capability in evaluating the outcomes and benefits of research to demonstrate the ongoing value and excellence of Australian research in different disciplines and/or in response to perceived problems?
(b) What elements would be important so that such a capability could inform potential collaborators and end-users, share best practice, and identify national gaps and opportunities?
(c) Would a data-driven methodology assist in fulfilling this purpose?
a)
RESEARCH EXCELLENCE
Impact can provide one measure of research excellence. If the current ARC grant structure remains in place, the best way to evaluate outcomes is on the basis of individual projects. Proposals could be required to provide a detailed “pathways to impact” plan (Linkage) or knowledge transfer (KT) plan (Discovery), including proposed success indicators and factoring in the time and cost of engagement and evaluation. Open Access costs should also be permitted within the project budget. Interim and final reports should include progress towards KT or impact, for example the engagement activities that have been undertaken. Compulsory follow-up evaluation should be scheduled for 12, 24 and 36 months post-award, with optional follow-up after five and ten years.
The ARC need not take responsibility for evaluating research funded via other programs (NHMRC, Category 2 and 3, or international funding). However, universities and/or other funding bodies may be encouraged to follow suit if the ARC pilots an end-to-end return-on-investment evaluation process.
IMPACT
We propose the creation of a public, searchable, structured database detailing the impact and/or KT outcomes of all projects reported by Australian universities. While comparable to EI, the proposal differs in several significant ways (an illustrative sketch of a possible entry structure follows the list):
• Entries would not be formally scored, ranked, or assessed. One option would be to allow end-users to “vote up” entries and to search by “favourites”.
• Submission would be annual, providing a snapshot of each project’s impact, whether realised or in progress. It would be compulsory to report on research funded by the ARC, and optional to report on research supported by other funders. There would be no limit on the number of entries per institution.
• Users could decide at any point to make their project/s publicly available, or available only to the ARC.
• Contextual narrative should be structured under standardised headings with an emphasis on concise information and supporting evidence. Infographics, images and links can also be included.
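Purely as an illustration of how such entries could be structured, a minimal sketch in Python follows. Every field name and heading here is a hypothetical placeholder, not a proposed standard:

```python
from dataclasses import dataclass, field
from enum import Enum


class Visibility(Enum):
    """Who can see an entry (see the visibility point above)."""
    PUBLIC = "public"      # searchable by anyone
    ARC_ONLY = "arc_only"  # available only to the ARC


@dataclass
class ImpactEntry:
    """One annual snapshot of a project's impact and/or KT outcomes."""
    project_id: str        # e.g. a grant number
    institution: str
    reporting_year: int    # annual submission; no per-institution limit
    funder: str            # ARC-funded entries compulsory, others optional
    visibility: Visibility = Visibility.PUBLIC
    upvotes: int = 0       # end-user "vote up" count, searchable as favourites
    # Contextual narrative under standardised headings (labels hypothetical):
    summary: str = ""      # concise account of the project and its outcomes
    evidence: str = ""     # supporting evidence for claimed impact
    media_links: list = field(default_factory=list)  # infographics, images, links
```

Because entries would never be scored or ranked, the structure only needs to support search, filtering and display, which keeps the reporting burden low.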
b)
The impact and KT database outlined above would prioritise the following elements:
• Public accountability without the need for assessment or ranking.
• A collaborative rather than competitive approach.
• A rich data source for universities to conduct their own sector-wide analysis and extrapolate best practice.
• A resource that enables potential partners to identify institutional strengths, enables universities and researchers to identify opportunities, and enables the ARC to identify gaps requiring priority funding.
c)
The key word here is “assist”. We cannot rely on citations to measure research quality, as outlined above (Q8). Rich, qualitative, longitudinal data is preferable. Moreover, it is not meaningful to compare metrics across different projects, let alone across different disciplines. Each project should be evaluated on its own terms.
However, systematic impact evaluation might be supported by an AI-based business intelligence tool capable of sifting the public domain for relevant mentions of the research. Although not a substitute for self-reporting, this kind of tool could supplement and inform the impact narrative (a simple sketch of the idea follows).
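To make the idea concrete, here is a deliberately simplified sketch of keyword-based mention detection. The corpus, keywords and matching rule are all placeholders; a real tool would crawl news, policy documents and social media, and would likely use far more sophisticated matching:

```python
import re


def find_mentions(documents: dict, keywords: list) -> dict:
    """Map each source to the project keywords it mentions.

    documents: {source label (e.g. a URL): plain text}
    keywords:  project-specific terms chosen by the researchers
    """
    hits = {}
    for source, text in documents.items():
        matched = [kw for kw in keywords
                   if re.search(rf"\b{re.escape(kw)}\b", text, re.IGNORECASE)]
        if matched:
            hits[source] = matched
    return hits


# Fictitious public-domain snippets, purely for demonstration
corpus = {
    "news.example/reef-story": "New coral mapping techniques informed reef policy.",
    "blog.example/unrelated": "A post about something else entirely.",
}
print(find_mentions(corpus, ["coral mapping", "reef policy"]))
# -> {'news.example/reef-story': ['coral mapping', 'reef policy']}
```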
The Times Higher Education (THE) and QS rankings already provide internationally recognised evaluation on a range of measures, including the UN Sustainable Development Goals (SDGs), so these resources could also be utilised.
Submission received
13 December 2022
Publishing statement
Yes, I would like my submission to be published but my and/or the organisation's details kept anonymous. Your submission will need to meet government accessibility requirements.