Evidencing the REF

I am speaking at a conference next week, Research Impact: Evidencing the REF, and in advance wrote a blog post for the Open Forum Events website, which was also cross-posted on the HEFCE blog. It is reproduced in full below.

In December 2014 the results of the most recent national assessment of university research in the UK, the Research Excellence Framework (REF), were published. Representing the culmination of a huge amount of work for universities and for the members of the assessment panels, the REF provides a comprehensive assessment of research performance, including, for the first time, an assessment of the wider impact of research on and for society. And the picture painted is a positive one: world-leading research, across a broad discipline base, that delivers huge benefits to society in the UK and beyond.

It’s important to celebrate this success, and to use the wealth of data contained in the REF results and submissions to enhance our understanding of the research base. Analysis of the impact case studies is a particularly rich source of information. At the same time we have also been carrying out detailed evaluations of the process in order to learn lessons for the development of future exercises. Though the ink is barely dry on REF 2014, we are already thinking about how we can further enhance the process and its effectiveness.

There are many issues to explore. How can we reduce the burden? Should the selective nature of the exercise remain? How can peer review judgements be supported by meaningful quantitative data? And many more besides. Two questions are particularly challenging and important.

As we consider how to frame the second national assessment of the broader impact of research, we need to decide how to handle impact case studies that were submitted to the previous exercise and are continuing to deliver additional benefits. On the one hand, if we were to exclude such cases we would create an incentive for universities to turn away from long-term benefit generation in favour of developing new areas. We want universities to balance these choices, so we shouldn't apply pressure in one direction. On the other hand, if developing impacts are eligible, how will we decide how much additional benefit merits the resubmission of an impact case study? And do we risk creating an incentive that discourages the delivery of impact in new areas?

A second important area for consideration is the handling of multi- and interdisciplinary research (MIR) in a future exercise. Evidence from the analysis of the impact case studies from REF 2014 demonstrates that the combination of knowledge and expertise from different disciplines plays a central role in the delivery of impact from research. At the same time, there remain concerns within the research community that MIR is not always assessed fairly, or that the exercise encourages researchers to focus on the 'core' of disciplines rather than the interfaces between them. We are seeking to enhance the evidence base around these questions, and are also exploring ways of further reducing perceived or actual barriers to MIR. For example, should researchers be allowed to submit their research outputs to multiple assessment panels? Or is there potential to appoint explicitly identified assessment panel members who bring experience of working across disciplinary boundaries and would act as 'champions' of interdisciplinary research?

None of these questions has an easy answer, and we in the Higher Education Funding Bodies won't develop the solutions on our own. We need everyone to contribute ideas, to challenge the ideas of others, and to engage in a debate about future research assessment exercises. Over the coming months there will be workshops, informal discussions and a written consultation. The best future process will be shaped with input from the whole research community.