Australian Catholic University

Related consultation
Submission received

Name (Individual/Organisation)

Australian Catholic University

Responses

Q1. How could the purpose in the ARC Act be revised to reflect the current and future role of the ARC?

For example, should the ARC Act be amended to specify in legislation:
(a) the scope of research funding supported by the ARC
(b) the balance of Discovery and Linkage research programs
(c) the role of the ARC in actively shaping the research landscape in Australia
(d) any other functions?

If so, what scope, functions and role?

If not, please suggest alternative ways to clarify and define these functions.

Australian Catholic University (ACU) acknowledges that the Australian Research Council (ARC) plays an important role in shaping research in Australia. ACU supports the draft recommendations of Universities Australia including a framework for quality underpinning the national research ecosystem. ACU endorses a data-driven approach within a multidisciplinary framework to ensure quality is measured for success across all priorities.

ACU supports recommendation 1 of the Universities Australia draft submission, but makes an addition below:

“1. That the Government amend the ARC Act to clarify the scope of the research that may be funded by the ARC. The scope should be restricted to non-medical research undertaken by universities and their partner organisations; and have regard to the optimal distribution between basic and applied research”, _as well as regard to the optimal distribution between the STEM and HASS disciplines, and the multidisciplinary field of Aboriginal and Torres Strait Islander research_.

The ARC research priorities generally exclude the fields of Humanities and Social Sciences (HASS). The 2016 renaming of the government’s ‘Strategic Research Priorities’ as ‘Science and Research Priorities’ devalued HASS research, even though the creative industries, social research and other fields such as education make up significant portions of the Australian economy. Comparable funding bodies internationally all have strategic research aims or programs aligned with the Humanities, Social Sciences and Creative Arts. For instance, the UK Arts and Humanities Research Council has six funding focus themes: Connected Communities, Creative industries cluster program, Design research, Heritage research, Hidden histories and Humanities in the European Research Area.
(see https://ahrc.ukri.org/research/fundedthemesandprogrammes/themes/)

Revisions to the scope of research funding should not exclude research activities that have direct benefit to Australians now and in the future. This particularly includes the multidisciplinary research field of Indigenous Studies, which encompasses a holistic approach to health and other research priorities. Pilot alternatives to seed larger grants may be of particular assistance to qualitative researchers. (ACU’s response to Q7 provides more detail on reintroducing the small grants scheme for this purpose.)

Q2. Do you consider the current ARC governance model is adequate for the ARC to perform its functions?

If not, how could governance of the ARC be improved? For example, should the ARC Act be amended to incorporate a new governance model that establishes a Board on the model outlined in the consultation paper, or another model?

Please expand on your reasoning and/or provide alternative suggestions to enhance the governance, if you consider this to be important.

ACU supports recommendations 4 and 5 of the Universities Australia draft submission.

“4. That the ARC Act be amended to introduce an ARC Board as outlined in the consultation paper.
5. That the ARC Act be amended to:
a. Strengthen the role of the CEO by bringing it in line with the NHMRC in relation to the protections relating to performance of the CEO.
b. Consider a strong research track record as a criteria for the position of CEO.”

ACU further recommends that the Board be led by an Australian researcher of international standing. The Board should also include expertise in Indigenous knowledges, HASS and STEM disciplines, research policy, research administration and leadership, and the wider R&D community.

Q3. How could the Act be improved to ensure academic and research expertise is obtained and maintained to support the ARC?

How could this be done without the Act becoming overly prescriptive?

ACU supports recommendation 6 of the Universities Australia draft submission.

“6. That the Government:
a. Undertake a review of the balance of policy, program, administrative and research expertise within the ARC management and leadership to strengthen the level of research expertise.
b. Examine the appropriate organisational structure and funding levels to support the rebalanced workforce.”

Further, ACU recommends that the Act set parameters for the Colleges of Experts. These need not be prescriptive: the Act could specify that the Colleges will consist of appropriately qualified researchers of high standing, represent relevant fields of research, reflect a balance of gender and career stage, and be appointed by the Board following a transparent application process.

The Colleges and the Board also need to consider diverse representation of Aboriginal and Torres Strait Islander researchers. Aboriginal and Torres Strait Islander researchers are currently disproportionately at early- and mid-career stages, competing against senior researchers. Nominating one Indigenous member to the ARC Board is not representative of Indigenous diversities or voices.

Q4. Should the ARC Act be amended to consolidate the pre-eminence or importance of peer review?

Please provide any specific suggestions you may have for amendment of the Act, and/or for non-legislative measures.

ACU supports recommendation 2 of the Universities Australia draft submission that the ARC Act be amended to consolidate the pre-eminence of peer review, in line with the Haldane Principle.

“2. That the Government considers amending the Act to include a form of the Haldane Principle as a guiding principle.”

The Universities Australia draft submission notes that ‘the basic premise of the Haldane principle is that it is in the national interest to fund the highest quality research [and that] it is researchers that are best placed to assess research quality…. The power of a Minister to veto individual research grants does not align with the internationally accepted, merit-based peer review mechanism known as the Haldane Principle.’

Q5. Please provide suggestions on how the ARC, researchers and universities can better preserve and strengthen the social licence for public funding of research?

ACU supports recommendations 7 and 8 of the Universities Australia draft submission.

“7. That the ARC replace the National Interest Test (NIT) with the current peer review process that covers the national benefit.
8. The Government support, through the ARC Act, the Haldane principle and remove the provision for Ministerial veto of individual project grants.
a. Should the Government decide to retain the Ministerial veto power, UA recommends that the ARC Act be amended so that in the case of the exercise of the Ministerial veto power, the Government is to set out its reasons in Parliament for the veto of the grant(s).”

Further, ACU recommends that the ARC adopt an integrated approach across schemes to outlining national benefit and the social licence for research. The national benefits of research are not always apparent and may only emerge later through the intersection of new approaches, methods, and subject specialisations. The ARC should be resourced to communicate to diverse audiences how their schemes sustain a rich ecology that creates deep and lasting social and economic benefits for Australia and the international community.

ACU therefore considers it important to widen the notion of national benefit to capture different arenas including academic, social, cultural, economic, education, political, scientific, and environmental benefits.

Outlining these benefits to the wider community in plain English is good practice, but national benefit should be assessed throughout the whole application, not just in a summary.

Q6. What elements of ARC processes or practices create administrative burdens and/or duplication of effort for researchers, research offices and research partners?

ACU supports recommendation 10 of the Universities Australia draft submission.

“10. That the ARC in its review of grant process improvements develop an application process that significantly reduces the amount of time spent by researchers on unsuccessful grant applications.”

ACU also makes the following additional recommendations on applications:

• Move to a two-stage application process, with a blind expression of interest (EOI) at the first stage and a more detailed application at the second stage for shortlisted applicants. The highest-rated proposals then proceed to stage 2 for a full assessment including Chief Investigator (CI) details. This is in line with established practice at the Canadian Social Sciences and Humanities Research Council and the New Zealand Marsden Fund, among other schemes.

• Institute a consistent, transparent and fixed calendar for the release of rules, application deadlines and funding announcements, where changes to these dates can only happen in exceptional circumstances. Funding announcements should be made no later than three months before the grant begins to provide certainty to researchers.

• Lessen the requirements on international partners who are not receiving any direct funding from the grant upon award; these requirements are currently onerous and disproportionate. The same applies to industry research partners, many of whom do not understand the complexities of the ARC research management system (RMS); the compliance required of them deters industry participation. Relatedly, requiring all CIs and industry partners to log into RMS is inefficient; the lead CI alone could do so.

• Delete the Investigator/Capability section of current grant proposals, which replicates the Research Opportunity and Performance Evidence (ROPE) Statement.

• Remove duplication in selection criteria and instructions to applicants.

• Allow one Word document for uploading additional material rather than separate attachments; this would let researchers work with their teams on a single document instead of multiple documents.

• Make the individual researcher sections more concise; they currently contain too many subsections and too much detail.

• Use a simple Excel spreadsheet for the administrative requirements of ARC grants rather than the current overly complicated template, and employ a one-line budget to reduce burden.

Q7. What improvements could be made:

(a) to ARC processes to promote excellence, improve agility, and better facilitate globally collaborative research and partnerships while maintaining rigour, excellence and peer review at an international standard?

(b) to the ARC Act to give effect to these process improvements, or do you suggest other means?

Please include examples of success or best practice from other countries or communities if you have direct experience of these.

ACU supports recommendation 9 of the Universities Australia draft submission.

“9. That the Government review the approach to managing national security risk in research grants throughout the grant application and assessment process.
This review should consider using the current University Foreign Interference Taskforce (UFIT) mechanisms and the expertise of the national security agencies. It should not rely on the ARC to make judgment calls on issues that are reasonably outside of its scope of expertise.”

In addition to national security, ACU suggests the following improvements to ARC processes.

Schemes:

• Reintroduce the Small Grants Scheme to provide funding for smaller projects that can be completed within one or two years, which may include seed funding for proof-of-concept applications.

• Establish a separate grants scheme for the Humanities, Creative Arts and Social Sciences, reflecting similar opportunities at equivalent bodies such as Canada’s SSHRC, New Zealand’s Marsden Fund and the European Research Council.

• Provide more rounds per year for both Linkage and Discovery to be more responsive to needs.

• The Discovery Early Career Researcher Award (DECRA) is no longer serving the important purpose of providing a pathway for postdoctoral researchers into the research ecosystem. Support for postdocs is vital and a more targeted scheme should be considered. Alternatively, some funding schemes could be required to include postdoctoral positions within their projects.

• A network scheme would be useful to foster collaboration across institutions.

• Give the ARC Board the authority to introduce new schemes in line with the objectives of the organisation.

Assessment:

• Increase the transparency of assessments. Many other countries pay international reviewers (e.g., Austria, Switzerland, the Netherlands) and offer a more transparent peer review process (e.g., the Austrian Science Fund): no opaque rating is hidden in the background, and researchers can see reviewers’ comments and assessments.

• Remunerate a smaller number of assessors who are leading scholars as is done elsewhere in the world. Assessors who are not leading researchers or who do not have an ARC track record should not be utilised to assess grants as they lack appropriate expertise.

• Make training for assessors mandatory, to help ensure greater consistency in the quality of reviews.

• Limit assessments to a maximum of three assessors, because responding to more than three is inefficient for all stakeholders.

• Decrease the assessment period, aiming for an assessment period of three to four months.

• Acknowledge diverse forms of research outputs in different disciplines. Non-traditional research outputs are key for emerging disciplines, Indigenous knowledges, and several established fields in the humanities. Assessment guidelines should be clear about their equivalency to other forms of research output.

• For fields where the monograph is prized (e.g., many fields in the Humanities), significant research monographs should be weighted at least 12 points. In disciplines such as philosophy where journal articles, not books, are the gold standard, this should be weighted accordingly. The point ACU makes here is that the ARC should weigh appropriately what is considered the gold standard research output in different fields.

Grant management:

• Create a similar role to the Director of Major Investments for Linkage and Discovery schemes that would allow for continued post-award input from the ARC.

• Embed requirements for high-quality project management, as occurs in ARC Major Investments, into other ARC programs.

• Allow grants to follow researchers automatically between universities (provided the new university has a strong research ecosystem relevant to the awarded grant), so that university interests do not interfere with the ARC’s interest in promoting top-quality research in an agile fashion.

• Introduce a list of routine situations where variations can be made via the RMS without ARC approval, such as a change of institution for a CI/PI, or discretion to adjust budget items within the grant budget envelope (for example, where budget allocated for overseas travel could not be spent because of Covid, the unused funds could instead be used to hire a research assistant).

• Employ greater use of Persistent Identifiers (PIDs) for institutions, people, outputs, and grants, and continue investment in PIDs, noting that Australian universities spend $24 million curating research metadata.

Q8. With respect to ERA and EI:

(a) Do you believe there is a need for a highly rigorous, retrospective excellence and impact assessment exercise, particularly in the absence of a link to funding?

(b) What other evaluation measures or approaches (e.g. data driven approaches) could be deployed to inform research standards and future academic capability that are relevant to all disciplines, without increasing the administrative burden?

(c) Should the ARC Act be amended to reference a research quality, engagement and impact assessment function, however conducted?

(d) If so, should that reference include the function of developing new methods in research assessment and keeping up with best practice and global insights?

ACU is broadly supportive of the notion that the ERA in _its current form_ has served its purpose. Instead of replicating the current ERA exercise, it is recommended that the Government conduct a modified quality assessment process, in consultation with the sector and other experts. The aim would be to calibrate other quality assurance options aimed at reducing the administrative burden on universities.

Regarding points a) and b) of question 8, ACU makes the following suggestions:

• The costs of producing research output and research income data for ERA 2023 have already been incurred, and many universities have already completed the collection of ERA metadata for this exercise. It therefore makes practical sense to conduct a modified quality assessment in parallel with any replacement data collection exercise, to explore and address any issues that may disadvantage peer review disciplines in subsequent assessments using a data-driven approach.

• The best forms of research assessment involve a combination of approaches, typically involving metrics (i.e. multiple data driven parameters) coupled with informed peer review. (See, for example, the Leiden manifesto for research metrics and the Wilsdon review, The Metric Tide).

• Informed review by expert peers should be a cornerstone of any meaningful research assessment process. Coupling data-driven approaches with informed peer review may well mean that the peer review component is more “light touch” in nature, but it remains an important aspect of ensuring rigour and credibility.

• Other ‘smart approaches’ to research assessment that reduce administrative burden could include simple steps to streamline the evaluation process itself. For instance:
o Don’t measure everything – lift the ERA Low Volume Threshold so we are not assessing Units of Evaluation with only 50 outputs produced within a six-year reference period (particularly at the broad field of research level, i.e., 2-digit FOR). If there are pockets of excellence that institutions want assessed that have such a low volume, then provide a mechanism to opt in for assessment, rather than assessing such small areas of research by default.
o For fields of research that are traditionally considered “peer review disciplines”, don’t require submission of every output. You may not even need a 30% sample of outputs for a panel of expert peer reviewers to make an informed assessment of a unit of evaluation.

• There would be considerable administrative savings from these simple streamlining measures: less effort sourcing copies of outputs and uploading them into repositories (which can be quite labour intensive), less effort deciding what makes it into the 30% peer review sample, less curation and transfer of metadata to the ARC, and less expectation that REC panel members assess all submitted content and digest the full 30% sample of nominated outputs.

• The focus on world-leading work and world benchmarks puts emphasis on international journals and may not be effective for the new Indigenous Field of Research (FOR) code 45. Researchers will be reluctant to publish anywhere other than high-citation outlets, which disadvantages Indigenous journals and non-traditional outputs, and universities may strategically avoid coding outputs as FOR 45.

• There are indications that universities have learned how to score well on assessment exercises. This is particularly a problem in the citation disciplines, where the ability to game the metrics is easier than in the peer review case. Further assessment is unlikely to produce step-changes in research quality, even if it continues to report (in the citation disciplines at least) better numbers. This is to say that while there were some initial benefits of assessment, including shifting the focus on quantity (given funding drivers) to quality, further benefits are unlikely from more of the same, and inequities built into assessment mechanisms have deleterious effects on disciplines where assessed standards are harder to manipulate.

• There are multiple international ranking and assessment schemes that have, now over nearly a generation, provided a consistent picture of the quality of Australian research. These exercises are becoming more fine-grained, including in the area of impact. So, there are already assessments that show that Australian research is meeting international benchmarks and these assessments will continue on an annual basis, providing reassurance that research quality processes are operating well.

• Other regulatory agencies and processes already require assessment of research quality as part of accreditation reviews.

• Universities have robust internal quality-assurance mechanisms, including through research policies, competitive internal grants, regular performance review, workload management, and promotions processes.

• Other systems (e.g. the UK) can be shown to be very similar to Australia’s in key ways. The evidence from their assessment exercises could therefore be mined for further evidence of the utility and quality of research from universities with similar characteristics.

• Automatic harvesting systems such as Scopus or Clarivate could be used for TEQSA purposes, but assurances of data quality are needed. Data quality seems high for Scopus but very questionable for Clarivate.

• Some data collection can be automated (proportions of journal articles, book chapters and monographs; some citation data, even for peer review disciplines; national and international collaborations; Category 1–4 research income; MOUs between universities and external groups; HDR completions and completion timelines; progress towards the SDGs; international rankings by subject; proportion of publications in certain indices; and so on).

• It may also be useful to showcase research in particular parts of the sector from time to time through publicising the results of assessment reviews of selections from groups of disciplines (i.e. take a sampling approach rather than a universal assessment approach, thus minimising the overall sector burden in any one year).

• Most of the information obtained from the ERA process could now be obtained at a fraction of the cost from existing information in Scopus, Web of Science, or other platforms; for example, the annual Scopus-based ranking prepared by Ioannidis and colleagues (2022).

• A significant expenditure of time and effort is devoted to collecting evidence and generating repository links to support the peer review of research outputs. It would reduce costs to universities if these links were required to be submitted as permanent links and stored in a dark repository accessible to Australian universities, the ARC, the NHMRC, etc. These could then be used as a resource to support future quality assessment exercises (including the ARC’s grant allocation processes), and would be particularly useful for curtailing the costs of recollecting and curating evidence if and when researchers move between universities.

• There is now a mass of data showing that Australian research produces strong social, economic, and environmental benefits and expands the frontiers of knowledge. This is a great resource that could be better utilised, but it is not necessary to keep adding to it at a sector level.

• The ARC’s Engagement and Impact Assessment (EI) is of questionable value in its current form. Engagement case studies encourage universities to focus on projects around which they can create a good story. These engagement case studies are more like marketing exercises than research evaluation. They do not facilitate comparison between universities or growth of disciplinary knowledge.

• Neither ERA nor EI influences industry, government or community expectations or understandings around the excellence of Australian research. Stakeholders like the government or end users do not tend to refer to ERA or EI in their public statements or in their choice of institutions for research partnerships.

• There is also the question of how data-driven systems become culturally responsive and analyse traditional knowledges. The concept of research quality needs to take into consideration the central relationship between Indigenous studies, community, and impact/benefit. We need Indigenous leadership in this space to ensure good practices in the sector.

Regarding point c) of question 8, ACU believes the ARC Act does not need to include reference to a research assessment exercise. Making the Act prescriptive in this case could ossify assessment and cause assessment to continue when it is demonstrably not fit for purpose.

However (and partly in response to point d) of question 8), ACU believes that if a low cost but reliable and valid system of research assessment could be developed it would be useful.

Q9. With respect to the ARC’s capability to evaluate research excellence and impact:

(a) How can the ARC best use its expertise and capability in evaluating the outcomes and benefits of research to demonstrate the ongoing value and excellence of Australian research in different disciplines and/or in response to perceived problems?

(b) What elements would be important so that such a capability could inform potential collaborators and end-users, share best practice, and identify national gaps and opportunities?

(c) Would a data-driven methodology assist in fulfilling this purpose?

ACU recommends that the Government continue a modified version of the ERA initiative and consider, in consultation with the sector and other experts, exploring other options to reduce the administrative burden, in order to provide continued assurance of the high-quality research performed by Australian universities.

Regarding point a) of question 9, ACU suggests that the ARC has considerable experience and expertise in the evaluation of research across the research life cycle, from the ideas it funds via various grant programs through to outcomes and impact assessment via the ERA and EI processes. The ARC should continue to draw on its college of experts and mechanisms to convene Research Evaluation Committees as pools of expertise to support these processes. These expert pools could also include representatives from industry and other end-user groups to provide different perspectives on the evaluation process.

Data sets exist which could be mined for data about collaborations (between universities and between universities and other organisations) to promote the quality of Australia’s research ecology through targeted case studies.

There is a risk that impact can become fetishised in the process of assessment to the detriment of basic research. The ARC, in its evaluation of research, should consider that pure/basic research is important and of high quality even when it does not lead to obvious, short-term, impacts (see ACU’s response to Q5 above).

Regarding point b) of question 9, ACU notes that a crucial aspect of making use of the outcomes of any research evaluations undertaken by the ARC would be to involve relevant stakeholders early in the process. This includes giving end-users of research the opportunity to be consulted on the design of evaluation frameworks before they are implemented, as well as potentially involving them in the subsequent assessment process. There should be a clear separation of concerns, though, in determining what is being assessed and by whom when it comes to evaluating research excellence versus research impact. This will be challenging, as there will no doubt be competing interests among stakeholders. An end-user of university research might not be that interested in measures of academic excellence, but they will have an interest – and expertise – in assessing impact. The ARC should be confident in utilising different approaches to the different types of assessment.

Regarding point c) of question 9, ACU argues that a “data-driven” methodology could assist in the evaluation of research, but it should not replace entirely the vital role that informed peer review plays in determining research excellence and impact.

It is also unclear what a ‘data-driven methodology’ is. All evaluation worthy of the name is data driven. If the phrase refers to eliminating academic judgement on the basis of crude metrics that are rarely informative and often inequitable in the humanities, then it could only do damage. If the phrase refers to the judicious use of numerical data within a larger interpretive context, then it is essential.

There are significant questions about the data quality of products previously used by the government. Both Scopus and Clarivate’s products have very serious data quality concerns, and while Scopus is better, we have still identified multiple problems for ACU researchers (see also ACU’s response to Q8 above).

As indicated in Q8, the ARC can use existing data that each university has already collected to demonstrate research quality and excellence in Australian universities. The university-based dataset includes important data related to research publications, external collaborations, funding sources, industry partnerships, and HDR enrolment and completion. However, a data-driven methodology only makes sense if there is widespread adoption of high quality PIDs for institutions, researchers, grants, and outputs.

Q10. Having regard to the Review’s Terms of Reference, the ARC Act itself, the function, structure and operation of the ARC, and the current and potential role of the ARC in fostering excellent Australian research of global significance, do you have any other comments or suggestions?

Definition of impact:

• The ARC should develop a concept of impact and engagement beyond the narrow focus on commercialisation or economic benefits (which currently fits better with disciplines such as the Sciences and Engineering). In the Humanities and Social Sciences, impact and engagement often take the form of building up knowledge foundations, stimulating new thinking, and informing new practices and policy development. (See also ACU’s response to Q5 above.)

• Further, the ARC’s definition of impact should also be brought into line with that of the NHMRC. For the NHMRC, a piece of research can have impact if it is used or applied within another piece of research; thus, research might be impactful without going beyond the research ecosystem. This is not so for the ARC, which demands impact beyond the research ecosystem. This discrepancy between the ARC and the NHMRC does not seem justifiable. What is more, it makes it very difficult to demonstrate the impacts of pure/basic research, which tends to have impact first on other research and only later, and indirectly, beyond the academy. In potentially undermining pure/basic research, the ARC’s definition of impact weakens its core business.


Cost of research:

• It needs to be appreciated that conducting research is not a level playing field. ARC grants cost more to implement than the amount awarded, so universities with more resources can more readily conduct research. The costs of infrastructure should be included in research grants. Further, the large reductions in funds awarded need to be reviewed.


Harmonise government approaches to support research and research translation:

• There are disparate expectations in government about the relationship between basic research, development, and industry.

• ACU would like to see a greater congruence – a substantial joined-up thinking across all areas of government – to influence research translation in universities. This would avoid disparate requirements emerging from various government agencies downstream of fundamental, basic research.

Submission received

10 December 2022

Publishing statement

Yes, I would like my submission to be published and my name and/or the name of the organisation to be published alongside the submission. Your submission will need to meet government accessibility requirements.