Thinking on research impact, meta research, and whatever else is on my mind.

Measuring disruption in research articles

In 1962, Thomas Kuhn's book, The Structure of Scientific Revolutions, set down a central notion in the study of research: the idea that knowledge accumulation does not occur at a steady rate. Based on historical analysis, Kuhn argued that while much research adds incrementally to existing knowledge, building on current theories and understanding, there are also points where revolutionary ideas, theories or observations lead to radical changes of direction. Kuhn's ideas have been much debated, but the general principle remains that some research is more disruptive to the accumulation of knowledge, while some is more developmental. This is relevant for policy-makers, as an effective research system needs to support both disruptive and developmental research in an appropriate balance.

Earlier this year a paper¹ was published that looks at this question from an empirical perspective, and at a large scale². The advances and insights from this work have significant implications for the study of research systems, which in turn may have practical implications for the design and deployment of policy. The paper looks at the question of disruption vs development for patents and software, as well as journal articles, but I am focussing on the latter in this post.

The first contribution that the paper makes is the development and validation of a measure of the extent to which individual research articles are disruptive or developmental in their contribution to the accumulation of knowledge. The measure builds on previous work studying patents, and uses the citation network of articles (there is an author-archived copy of this article available). An article scores highly on disruption if the articles that cite it tend not to cite articles from the reference list of the article under consideration. On the other hand, an article that scores highly as developmental tends to get cited alongside articles from its reference list. The index varies between -1 (completely developmental) and +1 (completely disruptive).
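
To make the definition concrete, here is a minimal sketch in Python of how such an index can be computed from a citation network. The dictionary-based graph representation and the function itself are my own illustration based on the description above, not the authors' code.

```python
def disruption_index(focal, cites):
    """Disruption index for a focal article.

    `cites` maps each article ID to the set of article IDs it cites.
    Among articles citing the focal article or its references:
      n_i = cite the focal article but none of its references (disruptive),
      n_j = cite both the focal article and its references (developmental),
      n_k = cite only the references, not the focal article.
    """
    references = cites.get(focal, set())
    n_i = n_j = n_k = 0
    for article, refs in cites.items():
        if article == focal:
            continue
        cites_focal = focal in refs
        cites_references = bool(refs & references)
        if cites_focal and not cites_references:
            n_i += 1
        elif cites_focal and cites_references:
            n_j += 1
        elif cites_references:
            n_k += 1
    total = n_i + n_j + n_k
    return (n_i - n_j) / total if total else 0.0

# Toy network: B cites only the focal article A, C cites A and its
# reference R, D cites only R, so the index is (1 - 1) / 3 = 0.0.
cites = {"A": {"R"}, "B": {"A"}, "C": {"A", "R"}, "D": {"R"}}
print(disruption_index("A", cites))
```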

As well as making intuitive sense, an important strength of the paper is that the measure of disruption has been validated. The authors surveyed researchers, and asked them for examples of both disruptive and developmental articles. These researcher-identified articles scored appropriately on the disruption/development measure. The authors also examined the language used in the titles of articles, with those scoring highly for disruption tending to use words consistent with disruption, and the converse applying to articles scoring highly as developmental.
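
As a toy illustration of the title-language check, one could score titles against word lists suggestive of disruption or development. The word lists below are my own invention for the sketch, not the lexicon used in the paper.

```python
# Hypothetical word lists; the paper's actual lexicon is not reproduced here.
DISRUPTIVE = {"novel", "unprecedented", "paradigm", "discovery"}
DEVELOPMENTAL = {"improved", "extension", "refined", "incremental"}

def title_leaning(title: str) -> int:
    """Positive scores lean disruptive, negative lean developmental."""
    words = set(title.lower().split())
    return len(words & DISRUPTIVE) - len(words & DEVELOPMENTAL)

print(title_leaning("A novel paradigm for X"))    #  2 -> disruptive-leaning
print(title_leaning("An improved method for X"))  # -1 -> developmental-leaning
```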

While there is plenty of evidence that the measure used is robust, like any research-related metric there are some limitations. The subject coverage is limited to those disciplines where journal articles are the primary route for research dissemination. Not only are other forms of scholarly output missing from the index, but journal articles whose disruptive influence is felt in fields where different forms of communication are the norm will also be missed by the analysis.

It is also important to be clear what is being measured here - disruptive vs developmental contribution - which does not necessarily equate directly with ideas of quality or excellence. A healthy research system needs a mix of disruptive and developmental work, so this is definitely not an index to be maximised in either direction.

Even bearing these limitations in mind, the findings of the work are significant and important. The headline finding from the paper is that team size correlates negatively with disruption, or put another way, small teams are much more likely to publish disruptive articles than large ones. The work of large teams tends to be more developmental.

The difference related to team size is less marked for articles that are highly cited. Articles in the top 10% of citations (field adjusted) tend to score highly for disruption, and at similar rates regardless of team size. In contrast, the team size effect is very marked for articles in the bottom 5% of citations, and articles in this group with small team sizes are the most likely to be disruptive. It follows that traditional citation counts are limited predictors of disruption: while, in general, highly cited articles are more likely to be disruptive, some of the most disruptive work is cited at much lower rates.
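
A sketch of how one might reproduce this stratified comparison, given a table of articles with precomputed disruption scores. The data below is random stand-in data, and the small-team cut-off of three authors is an arbitrary illustration, not a threshold taken from the paper.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000
# Synthetic stand-in rows; in practice each row would carry a real
# article's disruption score, author count, and field-adjusted
# citation percentile.
df = pd.DataFrame({
    "disruption": rng.uniform(-1, 1, n),
    "team_size": rng.integers(1, 15, n),
    "citation_pct": rng.uniform(0, 100, n),
})

# Stratify by citation band, then compare mean disruption for
# small vs large teams within each band.
df["band"] = pd.cut(df["citation_pct"], bins=[0, 5, 90, 100],
                    labels=["bottom 5%", "middle", "top 10%"])
df["small_team"] = df["team_size"] <= 3
print(df.groupby(["band", "small_team"], observed=True)["disruption"].mean())
```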

The authors go through a lot of steps to test the robustness of this finding, and it survives all the tests. There is one caveat, in that team size is defined in relation to the authorship of the article. While long author lists probably reflect large team sizes, it is possible that some articles with shorter author lists originate in larger teams. However, if this is the case, it would tend to attenuate the observed effect, so the true team size effect is, if anything, stronger than the paper reports.

There are some disciplinary differences in this effect, most notably that it does not apply to computer science or engineering, which the authors suggest might be related to the importance of conference proceedings rather than journal articles in these disciplines. But otherwise smaller teams are more likely to publish disruptive work. The implications for policy are clear: mechanisms are needed that allow small teams to flourish in order to generate disruptive research.

A second fascinating finding relates to the relationship between the source of funding for research and the tendency to lead to articles that are disruptive or developmental. This was studied by looking at articles which acknowledge funding from one of a number of national funding agencies, and comparing those articles to a matched control group of articles that did not acknowledge funding. This methodology, of course, has some limitations: some authors may omit funder acknowledgements, so the 'unfunded' set may contain articles that were in fact grant funded, and the differences seen will therefore be underestimated. Even so, there is a clear effect. Funded articles are similar to 'unfunded' articles in their tendency towards disruption for large team sizes, but for small teams there are marked differences. 'Unfunded' articles from small teams show a higher propensity to disruption, in line with the broader pattern described, but funded articles from small teams are less likely to be disruptive than those from larger teams. Overall, the funded articles have a higher proportion of developmental than disruptive research.
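
To illustrate how a control group might be constructed, here is a minimal sketch of exact matching on a few covariates. The choice of covariates and the one-to-one exact matching are simplified assumptions for illustration, not the paper's exact protocol.

```python
import pandas as pd

def match_controls(funded: pd.DataFrame, unfunded: pd.DataFrame) -> pd.DataFrame:
    """For each funded article, draw one 'unfunded' control article that
    matches exactly on field, year, and team size. Funded articles with
    no exact match are simply dropped in this simplified version."""
    keys = ["field", "year", "team_size"]
    pools = unfunded.groupby(keys)
    controls = []
    for _, row in funded.iterrows():
        key = tuple(row[k] for k in keys)
        if key in pools.groups:
            controls.append(pools.get_group(key).sample(1, random_state=0))
    return pd.concat(controls, ignore_index=True)
```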

One plausible explanation for this effect is that the extensive peer review process associated with funded research tends to be less favourable for disruptive research. In terms of policy, this suggests that a diversity of funding approaches, some of which are not dependent on strong review of research proposals in advance, is needed to support a balance of disruptive and developmental research.

From a UK perspective, the paper doesn't analyse articles that acknowledge funding from UK funders, but there are some interesting datasets that would enable this. For UKRI grant funding, the Gateway to Research database contains a large set of articles that are attributed to grant funding, which could be compared to a matched set of UK publications. It might also be interesting to look at the articles listed as related research in the REF 2014 Impact Case Study database, to investigate whether societal and economic impact tends to be linked to disruptive or developmental research.
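
A sketch of what the first step of such a comparison might look like, assuming one has a Gateway to Research export of grant-linked publication DOIs and a separate table of UK articles with disruption scores already computed. The file names and column names are placeholders, not real datasets.

```python
import pandas as pd

# Placeholder file names: a Gateway to Research export of publication
# DOIs linked to UKRI grants, and a table of UK articles with
# precomputed disruption scores.
gtr = pd.read_csv("gtr_publications.csv")   # column: doi
uk = pd.read_csv("uk_articles_scored.csv")  # columns: doi, disruption, team_size, ...

uk["funded"] = uk["doi"].isin(gtr["doi"])
small_team = (uk["team_size"] <= 3).rename("small_team")

# Mean disruption for grant-linked vs other articles, split by team
# size; building a matched control set, as in the paper, would be the
# natural next step.
print(uk.groupby(["funded", small_team])["disruption"].mean())
```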

Overall, this paper makes an important contribution, both in developing an approach to looking at the question of disruptive vs developmental research, and in shedding light on how this issue plays out in practice. There is already much to think about for policy-makers, and it would be good to see the method extended to tackle further policy questions relating to research funding.

  1. The full reference is: Wu, Lingfei, Dashun Wang, and James A. Evans. 2019. “Large Teams Develop and Small Teams Disrupt Science and Technology.” Nature 566 (7744): 378–82. https://doi.org/10.1038/s41586-019-0941-9

  2. Thanks to Steve Wooding who drew the paper to my attention.