The end of journal impact factors?

Last week it was great to see the publication of the San Francisco Declaration on Research Assessment (DORA), which sets out some important principles for the assessment of research. Advising on whether the Higher Education Funding Council for England should sign the declaration was one of my first tasks in my role there, and I am really pleased that we are a founding signatory. Central to the declaration is the notion that journal impact factors should not be used as proxies for the quality of research outputs. Quite right – this excellent post by Stephen Curry explains why.

In the major research assessment that we run, the Research Excellence Framework (REF), it is already clearly stated that journal impact factors will not be used (just search for ‘impact factor’ in the assessment panel working methods [pdf]). But there is still a problem: the point is repeatedly made that universities are using journal impact factors in their internal decision-making and staff management processes relating to the REF.

Why does the use of impact factors persist in the face of clear policy guidance that they do not form part of the assessment? Answering this question is central to tackling the problem. I can think of two possible explanations:

  • Despite the clarity of the panel working methods, there is a belief that panels will, in fact, use impact factors in their assessment of research outputs.
  • Within universities, senior staff and managers feel that they lack the time or the expertise to come to a judgement on the quality of research outputs, so they reach for the easy proxy of the journal impact factor.

Maybe there are other explanations that I am missing. Either way, I would be interested in hearing what people think. Why do journal impact factors continue to be used inappropriately? All thoughts in the comments thread would be most welcome.