The fifth A of assessing impact

Last month I spoke at a conference on the Strategy of Impact organised by Researchfish. This post is a summary of the argument I made in the talk (see also the slides from the presentation).

In the last decade huge strides have been made in collecting and making use of evidence of the societal impact of research. Largely driven by requirements imposed from outside, researchers and academic institutions have become more adept at understanding the difference that their research makes. This information is largely used to provide evidence for summative assessments (like the impact element of the Research Excellence Framework), but I think there are opportunities to think more broadly about the purposes of impact evidence: making different uses of the collected data, and also collecting different data.

Collecting evidence about impact can have many different purposes. The clearest approach to thinking about the purposes of research impact assessment is the 4As framework proposed by Molly Morgan-Jones and Jonathan Grant. They propose that impact assessment has four key purposes:

  • Advocacy: collecting impact evidence allows the case to be made for the importance of investment in research.
  • Analysis: we can analyse impact evidence to get a better understanding of how research and impact work, providing the inputs for advocacy, and leading to improved research policy.
  • Accountability: we need evidence of impact to demonstrate that research funding is delivering against funders' objectives.
  • Allocation: the outcomes of impact assessment can be used in the distribution of funding, ensuring that funding goes to places that deliver impact from research, and providing an incentive for institutions and researchers.

These are key purposes of research impact assessment, especially in the context of fixed-point, national-scale assessments. But there is another reason for collecting data and evidence on research impact, one that is directly related to enhancing the processes of research and knowledge exchange themselves. This is the fifth A of research impact assessment: Adjustment.

Adjustment is about gathering evidence in near real-time during the processes of research and knowledge exchange, in order to help steer the work and maximise impact. It is about the signals of emerging impact: the places and people with whom the research findings are having resonance. It is about noticing where potential is not being realised, and asking 'why not?'. It is about experimenting with different approaches to knowledge exchange, and determining which works best.

In contrast with the other four As, adjustment needs data that can be collected quickly and with minimal effort, and will often focus on intermediate signals rather than evidence of impact that has already occurred. It will involve data sources that might not be sufficient for after-the-fact assessment. For example, signals of dissemination or interest can be extremely valuable in real-time analysis because of their immediacy: when an academic paper is published, they can give early indications of interest from perhaps unexpected stakeholder groups. This information, in turn, can inform new and targeted interventions that deliver research findings to places where they can be used.

Realising the potential of Adjustment will need some changes in the practices of researchers, but also improvements in the collection of data about emerging impact. Altmetrics are potentially helpful in providing real-time insight into how and when research outputs are being used, but their use is limited to tracking outputs, and specifically those with a Digital Object Identifier (DOI). Most of these outputs are scholarly journal articles, but we know that the generation of impact can happen in parallel to, or even in advance of, scholarly outputs.
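To make this concrete, here is a minimal sketch of the kind of lightweight, near real-time lookup that Adjustment calls for, using the free tier of Altmetric's Details Page API. The endpoint and field names follow Altmetric's public documentation as I understand it, so treat them as assumptions to verify rather than a definitive integration:

```python
import requests

def attention_snapshot(doi):
    """Fetch a quick attention snapshot for one DOI from the free tier
    of the Altmetric Details Page API (api.altmetric.com/v1/doi/<doi>).
    Returns None if the output is not being tracked."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
    if resp.status_code == 404:  # DOI not tracked by Altmetric
        return None
    resp.raise_for_status()
    data = resp.json()
    # A few counts that hint at *who* is paying attention: news and
    # policy mentions point to different stakeholder groups than purely
    # scholarly readership does. Field names per Altmetric's public
    # docs; verify before relying on them.
    return {
        "score": data.get("score"),
        "news_mentions": data.get("cited_by_msm_count", 0),
        "policy_mentions": data.get("cited_by_policies_count", 0),
        "twitter_mentions": data.get("cited_by_tweeters_count", 0),
    }

if __name__ == "__main__":
    print(attention_snapshot("10.1038/nature12373"))  # any tracked DOI
```

A snapshot like this costs seconds rather than months to collect, which is exactly the trade-off Adjustment makes: fast, imperfect intermediate signals in exchange for the chance to act on them while the work is still in progress.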

What we need are more sophisticated tools that can detect signals of research use, and this will potentially involve different sources of data. For example, the widely used event management tool Eventbrite provides data on what is being talked about, and by whom, that is potentially useful in detecting signals of impact, or new audiences for research findings. The Open Data Institute has been exploring using this information as part of an approach to identifying geographical clusters of activity aligned to specific digital technologies.
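As a toy illustration of that idea, the sketch below groups event records by city to surface geographical clusters of activity around a topic keyword. It assumes the records have already been exported; the 'name' and 'city' fields are hypothetical, chosen for illustration rather than taken from Eventbrite's actual schema:

```python
from collections import Counter

def geographic_clusters(events, keyword, top_n=5):
    """Count keyword-matching events per city to surface geographical
    clusters of activity. `events` is a list of dicts with 'name' and
    'city' keys -- hypothetical field names standing in for records
    exported from a platform like Eventbrite."""
    matches = [e for e in events if keyword.lower() in e["name"].lower()]
    return Counter(e["city"] for e in matches).most_common(top_n)

# Toy records in place of a real export.
events = [
    {"name": "Open Data Meetup", "city": "Leeds"},
    {"name": "Open Data Hack Day", "city": "Leeds"},
    {"name": "Machine Learning in Health", "city": "Manchester"},
]
print(geographic_clusters(events, "open data"))  # -> [('Leeds', 2)]
```

Even something this simple shows the shape of the approach: counting where conversations about a topic are happening is a crude but immediate proxy for where research findings might find an audience.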

The development of ways of collecting new data about research and knowledge exchange in action is ripe for innovation. While investment in research clearly delivers significant benefits to society already, I think we can do better. Using evidence for adjustment to maximise impact offers the potential to do just that.