Assessing impact

One of the ironies of working in research policy is how little research evidence there is to inform policy-making decisions. And this is especially true in the relatively new area of assessing the wider impacts of research. An excellent new contribution to this sparse evidence base has recently been published by Holbrook and Hrotic, and it’s available [pdf] open access too.

The paper is nicely written, clear and accessible. It reports the outcomes of a survey looking at attitudes towards peer review of research proposals, the relationship between research and its wider impacts, and the extent to which those wider impacts can be assessed. The survey approach raises some questions (acknowledged by the authors), which means that the conclusions can’t necessarily be generalised, but it nonetheless makes a big contribution in my view. As the authors point out, perhaps a little tongue-in-cheek:

we decided the methodological risk was worth the intellectual (and potentially societal) reward: our aim was to have a broader impact rather than simply adhering to standards of intrinsic worth.

The survey group included academics, people from within funding agencies, and those without affiliation, although the latter group were self-selecting and were drawn from people who had expressed an interest in peer review of research.

The findings are interesting and are broadly consistent across the different groups:

  • A significant degree of conservatism was apparent, with participants showing a strong preference for limiting peer review to experts in the immediate discipline of the proposed research.
  • There was also a clear preference for narratives of research that emphasised autonomy and didn’t regard direction towards wider societal benefits as appropriate.
  • But despite both of these points, there was strong confidence that researchers in the discipline concerned were well placed to assess the wider impacts of proposed research.

It is the final point that is especially telling, as one of the regular objections to the inclusion of ‘impact’ criteria in the assessment of research is that researchers are not best placed to make that assessment. But the participants in this survey didn’t reflect that view, and, as Holbrook and Hrotic point out, if researchers don’t assess wider impacts themselves, then who will?

Scientists who resist including impacts as part of peer review ought to consider the possibility that if scientists do not determine impacts, someone else will determine impacts for them. Determination to resist impacts may well undermine scientific autonomy.

The central point is that resistance to impact is more about disagreement with the notion of impact itself, and less about the practicalities of assessing it:

the scientific community’s resistance to including societal impact considerations as part of peer review seems not to be based on a perceived inability to judge societal impacts. Instead, it seems, scientists simply do not want to address societal impacts considerations. They would prefer that proposals be judged on their intrinsic merit by disciplinary peers, trusting the fact that funding good science will somehow lead to societal benefits.

This paper is interesting in the context of the retrospective assessment of impact in the current Research Excellence Framework (REF). In this case the group of peers that will assess impact will be made up of both researchers with a disciplinary focus and users of that research. There are some interesting questions around how the interactions between these two groups will play out in practice. Will they come to the same conclusions? If not, how will differences be resolved? I hope we will be able to explore these and other questions as part of the wider evaluation of the REF.

Finally, my attention was also captured by another point made in the paper, concerning the resistance of funding agencies to research into their assessment processes:

there are structural reasons that tend to discourage research on peer review, especially peer review of grant proposals. Public science funding agencies are generally committed to confidentiality regarding reviews and proposals. Such agencies are also budget-sensitive, meaning that they do not want to open themselves up to studies that may endanger their budgets. [emphasis added]

I can certainly recognise the second point from my own experience. I think this is one of the factors leading to the dearth of good evidence to inform research policy that I mentioned at the start of this post. If we are going to make research policy decisions that are informed by evidence, then we need to open up to effective research and evaluation, and take the associated risks for the greater good.