Name (Individual/Organisation)
Bart Anderson
Responses
Q1. How could the purpose in the ARC Act be revised to reflect the current and future role of the ARC?
For example, should the ARC Act be amended to specify in legislation:
(a) the scope of research funding supported by the ARC
(b) the balance of Discovery and Linkage research programs
(c) the role of the ARC in actively shaping the research landscape in Australia
(d) any other functions?
If so, what scope, functions and role?
If not, please suggest alternative ways to clarify and define these functions.
There is much lip service given to the support of basic or fundamental research at the ARC, but applications are now assessed heavily on 'benefit' that extends beyond the intended goals of such research. I believe that one possible solution to this problem is to drop the expectation that researchers conjure possible future applications of fundamental research, which are almost always entirely speculative anyway. There is clear, demonstrable evidence that some of the greatest advances in applications have come from the work of people invested in solving foundational problems in science; an explicit provision articulating a true commitment to fundamental research seems direly needed.
Q2. Do you consider the current ARC governance model is adequate for the ARC to perform its functions?
If not, how could governance of the ARC be improved? For example, should the ARC Act be amended to incorporate a new governance model that establishes a Board on the model outlined in the consultation paper, or another model?
Please expand on your reasoning and/or provide alternative suggestions to enhance the governance, if you consider this to be important.
I think there is a strong need for a board that contains an appropriate mix (50/50) of people whose work is dedicated to fundamental research and people involved in the applied research critical to the Linkage program.
Q3. How could the Act be improved to ensure academic and research expertise is obtained and maintained to support the ARC?
How could this be done without the Act becoming overly prescriptive?
There needs to be clear language about the composition of the different subcommittees and the range of expertise represented within them. As one example, this year there was a major shortfall in the number of people appointed in the field of psychology, which the ARC is 'aware of' but has failed to remedy. This has led to a 56% reduction in funding to psychology, presumably because there was insufficient expertise on the panels to evaluate the projects that were submitted (many by historically very successful applicants). This will have spillover effects for years to come, and it comes at a particularly bad time, when understanding human behaviour, decision making, and cognitive and perceptual processes is critical to developing public trust in science. The College of Experts has to contain sufficient expertise during each cycle of evaluation; a failure to do so can have devastating results for many fields.
Q4. Should the ARC Act be amended to consolidate the pre-eminence or importance of peer review?
Please provide any specific suggestions you may have for amendment of the Act, and/or for non-legislative measures.
The current system greatly overweights the contribution of the two panel members charged with reviewing applications. It is often (and presumably typically) the case that the panel members reviewing a grant have no domain-specific expertise, yet their scores account for 50% of the initial evaluation. This leads to very evident swings in the kinds of research projects that are funded, which can reflect the research biases of particular panel members (similar issues are widely discussed in peer review for publication as well). There is no justification for granting this much weight to the panel's assessment: given the workload, it is extremely unlikely that panel members read all applications as carefully as the expert reviewers do. It is mathematically trivial to give the expert reviewers a higher weight (e.g., 75% or 80%), which would seem to be a less biased system.
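To illustrate with purely hypothetical numbers: suppose an application receives an average score of 85 from the expert reviewers and 60 from the panel. Under the current 50/50 weighting the composite is 0.5 × 85 + 0.5 × 60 = 72.5, so the non-expert panel can sink an application the experts rated highly; under an 80/20 weighting it would be 0.8 × 85 + 0.2 × 60 = 80, and domain expertise would dominate the outcome.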
The ARC has also moved increasingly away from transparency: it used to be possible to know who the panel members were at the time of submission, but this is no longer true. There has never been a justification for the fact that the scores reviewers give are not reported to the authors of a submission, or for why the panel scores are not revealed either. The Freedom of Information Act allows us to retrieve this information retrospectively, but this shouldn't be necessary. Freely available data would also make it evident if/when there are large and systematic discrepancies between panel scores and reviewer scores. There does not appear to be any good justification for withholding this information.
Q5. Please provide suggestions on how the ARC, researchers and universities can better preserve and strengthen the social licence for public funding of research?
The "national interest test" seems to directly contradict the purported commitment of the ARC to funding fundamental research. There is clear historical evidence that can be used to demonstrate the importance of such work; it seems like very little is accomplished by attaching NIT statements or aspirational statements about some immediate 'benefit' to the 'community' that attempt to spin every project as having an immediate translational goal. This 'tension' is already described in your consultation paper, which is more than simply a tension, it's internally inconsistent.
Q6. What elements of ARC processes or practices create administrative burdens and/or duplication of effort for researchers, research offices and research partners?
The "ROPE" sections of DP grants are somewhat ludicrous in terms of their time investment. They typically involve lengthy attempts to hype the contribution of the people on the application. This is particularly cumbersome for international PIs, who receive no actual funding for their effort.
The grant application should be streamlined to focus on the project; there is a huge time cost in putting applications together, and with current success rates of ~18%, the effort behind 82% of applications (often months of work) is effectively lost. I know from experience that this negatively impacts research output, because it is a zero-sum game: the time spent writing grants could have been spent writing up the outputs of my research.
Q7. What improvements could be made:
(a) to ARC processes to promote excellence, improve agility, and better facilitate globally collaborative research and partnerships while maintaining rigour, excellence and peer review at an international standard?
(b) to the ARC Act to give effect to these process improvements, or do you suggest other means?
Please include examples of success or best practice from other countries or communities if you have direct experience of these.
I used to submit to the NIH and NSF in the United States. Their applications are much more streamlined.
Q8. With respect to ERA and EI:
(a) Do you believe there is a need for a highly rigorous, retrospective excellence and impact assessment exercise, particularly in the absence of a link to funding?
(b) What other evaluation measures or approaches (e.g. data driven approaches) could be deployed to inform research standards and future academic capability that are relevant to all disciplines, without increasing the administrative burden?
(c) Should the ARC Act be amended to reference a research quality, engagement and impact assessment function, however conducted?
(d) If so, should that reference include the function of developing new methods in research assessment and keeping up with best practice and global insights?
As a former ERA panel member, I think this is largely a waste of money and resources that should be directed into science. In the end, all we did was use citations to assess 'quality', which is a pathological way to assess a contribution to knowledge: the 'best' research becomes that which is cited the most within 'the field', independent of how it is cited (positively or negatively), and normalized over a ridiculously broad estimate of what constitutes a field of research. I could write pages about this, but most of the problems with citation metrics are well known. As a former panel member, I firmly believe that the vast majority of the work done by the committee would have been relatively trivial to automate as a data analysis exercise. The cost to universities, taxpayers, etc. is not justified.
It is well established that Australia punches well above its weight internationally. Unless there is some serious attempt to actually evaluate the CONTENT of the work, the money is better spent elsewhere; this hasn't been done because the task is too onerous, and it's unclear why any of it matters. I find it ironic that so much money is spent on this exercise, yet no 'National Interest Test' is applied to it; I doubt the public would be satisfied by so much of their money going to an ultimately pointless exercise (pointless because it has had no palpable consequences to date).
Q9. With respect to the ARC’s capability to evaluate research excellence and impact:
(a) How can the ARC best use its expertise and capability in evaluating the outcomes and benefits of research to demonstrate the ongoing value and excellence of Australian research in different disciplines and/or in response to perceived problems?
(b) What elements would be important so that such a capability could inform potential collaborators and end-users, share best practice, and identify national gaps and opportunities?
(c) Would a data-driven methodology assist in fulfilling this purpose?
I think the money is better spent on funding research; stop obsessing about how "Australia" is doing relative to the rest of the world. It's not sport.
Submission received
02 December 2022
Publishing statement
Yes, I would like my submission to be published and my name and/or the name of the organisation to be published alongside the submission.