Last week I spoke at the annual conference of the UK Research Integrity Office. The slides I used have been published, and this post sets out the argument that I made.
Research misconduct happens. There is less certainty about how much of it occurs, but there is no doubt that we need a policy response to address the issue. In formulating this response it is important to acknowledge that the term ‘misconduct’ covers a range of activities, with different motivations and levels of seriousness, and that this variation will influence the policy response.
At one end of the spectrum are deliberate acts by researchers to falsify or misrepresent research. I believe this is rare, although it is often the type of misconduct that gets the most publicity. Further along the range are small acts of misconduct that, while deliberate, aren’t motivated by a desire to falsify the conclusions of the research. In this category I would include leaving out inconvenient data points, or perhaps even inconvenient experiments, that challenge conclusions the researcher strongly believes, often on the basis of other evidence, to be true. At the other end of the spectrum is less deliberate misconduct, stemming from a lack of knowledge or expertise on the part of the researcher, or possibly from cutting corners to speed up the research process. This is accidental misconduct, where the researcher doesn’t realise that anything is wrong.
Regulation – rules, processes to enforce them, and sanctions applied when they are broken – clearly has an important part to play across this spectrum, and is probably the only policy option for tackling deliberate and large-scale fraud. However, in tackling most misconduct there is an essential role for ensuring that we have the right culture within our universities and research organizations. We need to focus on stopping misconduct from happening: catching misconduct after the event is not only a very costly approach, it is also less effective, because it allows poorly conducted or fabricated research to be disseminated in the meantime.
If we accept that tackling misconduct is about changes to the whole culture within which research is conducted, then rather than being a specific ‘policy issue’ it becomes a cross-cutting theme that underpins the whole research policy agenda. We need to consider the implications for the research culture of all the policy interventions we make: from the design of funding processes, to the mechanisms of research assessment. The current investigation by the Nuffield Council on Bioethics will, no doubt, provide some useful insights into the relationship between policy interventions and the research culture that emerges.
There is one aspect of the research culture that stands out for me – the extent to which research is conducted openly, and especially the extent to which research data is made available for scrutiny and further analysis. As has been eloquently argued by Andrew Rawnsley there is, or ought to be, an intimate link between research integrity and research openness.
This is not just because the sharing of data allows other researchers to check the findings of research. There is also reason to believe that a presumption of openness will influence research cultures in a positive way. For example, a study carried out in 2011 by Wicherts, Bakker and Molenaar concluded that:
the reluctance to share data [was] associated with weaker evidence (against the null hypothesis of no effect) and a higher prevalence of apparent errors in the reporting of statistical results. The unwillingness to share data was particularly clear when reporting errors had a bearing on statistical significance.
As these authors rightly point out, the study describes a correlation, and it is not possible to draw a causal inference. They propose a number of possible explanations, but one is particularly compelling:
statistically rigorous researchers may archive their data better […and, so…] will more promptly share their data
There could be a deep cultural linkage between the rigour with which data is analysed, the care with which it is curated, and the willingness to share it. Or maybe the act of sharing itself encourages a further review of the data and its analysis, a point recently well made by Dorothy Bishop. Either way, data openness should be part of the research culture that supports and fosters research integrity.
As I have written previously, opening up research data is not without its challenges. But I would argue that the benefits that more openness will bring for research integrity are significant, and provide part of the case for universities, funders and journals intervening to foster the culture of open research.