How many research mavericks do we need?

The topic of ‘scientific mavericks’ is receiving some attention at the moment. There was a letter, signed by a number of Nobel laureates and FRSs among others, published this week in the Guardian suggesting that the current structures inhibit mavericks, and that this is a problem. And this follows on the heels of comments by Peter Higgs last year. The arguments in the Guardian letter are flawed in my view (there is an excellent post by Jenny Rohn that sets out why), and based on a very particular reading of history.

But the question of providing space within the research policy landscape for the exploration of radical ideas and approaches is an important one.

Of course, in the policy world we more often talk about ‘high risk/high reward’ research rather than mavericks, but the basic concept is the same. The question that is regularly debated by research policy people is how to ensure that the structures and environment we create around research don’t produce barriers that prevent researchers from trying out ‘crazy ideas’. In thinking about how to tackle this question there are a number of challenges. A big one is that we don’t actually know how much ‘high risk/high reward’ research is going on. As Jenny indicates in her post, researchers find ways to do this type of work, and, by definition, most of it will fail, so we will never hear about it. The successful projects – the extraordinary findings supported by extraordinary evidence – can potentially look very mainstream quite quickly, so the visibility of successful ‘high risk/high reward’ research can be low too. In the absence of good evidence on the volume of activity it’s hard to know whether the current policy environment is really encouraging or inhibiting this kind of work.

By far the biggest challenge in formulating policy for ‘high risk/high reward’ research, however, is deciding how much research of this type is needed. I think there definitely should be space for some, but it is also self-evident that we don’t want all research to be like this. That would result in too much public money being spent on research that leads nowhere, not even advancing its discipline, let alone bringing wider benefits to society. As in many areas of research policy, we need to find the appropriate balance point.

Part of the responsibility for addressing this issue lies with researchers and universities themselves. Through the UK’s dual funding system, universities have access to funding, provided by HEFCE and the other UK funding bodies, that could be used to support ‘high risk/high reward’ research. Whether the block grant funding is used in this way, and, if it isn’t, why not, are interesting questions worth exploring. Researchers and universities are, after all, well placed to decide how much ‘high risk/high reward’ research should take place. Perhaps increasing the proportion of research funding that is distributed through this route might shift the balance, and allow a few more mavericks to be nurtured?


Open access and academic freedom

There is always a point in any policy-making process where a clash of objectives or values takes place. Sometimes different aspects of what you are trying to achieve turn out, on analysis, to be in conflict with one another. In other cases, it emerges that objectives are in opposition to principles which were held to be important at the start of the process. These dilemmas are part of what makes policy-making both challenging and enjoyable, but they often result in some stakeholders, sometimes all stakeholders, ending up unhappy with the outcome.

There is one of these dilemmas at the heart of making policy about open access to research publications. The whole point of an open access policy is to maximise openness. But this is in conflict with a core value that underpins research and higher education policy-making – the principle of academic freedom. If we take this to include the freedom to publish research in whatever journal researchers wish to, then it follows that an open access policy must allow publication in journals that restrict access in order to respect academic freedom. But will a policy with such a loophole ever achieve its objective of more openness?

It would be easy to think that confronting dilemmas like this is nothing but a problem, but in fact it is incredibly helpful. It forces you to think deeply, and challenge the assumptions that underpin principles and objectives. In this example, is it really the case that academic freedom demands absolute control over the vehicle of publication? On reflection, I think the core of academic freedom is the ability to publish ideas at all, and that, especially now, doesn’t rest on using a particular academic journal, or even an academic journal at all. It could even be argued that the peer review process is itself a barrier to academic freedom, imposing censorship, restricting the dissemination of ideas to those approved of by peers. I am not sure I would go that far, but the more I consider this issue, the more I am convinced that the prize of maximum dissemination is more important than maintaining an excessive interpretation of the notion of academic freedom.

 


Open data reflections

I am spending a lot of time thinking and reading about open research data at the moment, in preparation for some policy development work in the New Year. I am personally positive about the open research data agenda, seeing lots of benefits to more sharing of data, both for research itself and more widely. There are some strong public-good arguments for making data available, including, but not limited to, potential economic benefit.

These positive arguments are commonly articulated. But as I have argued recently in a post on the Sciencewise blog, any policy intervention around open research data needs to balance the benefits against the costs, which are not so well understood. There are also complex issues around the politics of open data that have been discussed in an excellent blog post by Rob Kitchin. The post expands on four critiques of open data, arguing that it:

  • lacks a sustainable financial model;
  • promotes a politics of the benign and empowers the empowered;
  • lacks utility and usability; and
  • facilitates the neoliberalisation and marketisation of public services.

The post concludes that

we lack detailed case studies of open data projects in action, the assemblages surrounding and shaping them, and the messy, contingent and relational ways in which they unfold. It is only through such studies that a more complete picture of open data will emerge, one that reveals both the positive and negatives of such projects, and which will provide answers to more normative questions concerning how they should be implemented and to what ends.

Real food for thought there, and a welcome stimulus to reflect on the assumption that open data is inevitably positive.

 


On impact

Innovation is a complex business, but Seth Godin has some thought-provoking words in his post today:

Inventing isn’t the hard part. The ideas that change the world are changing the world because someone cared enough to stick it out, to cajole and lead and evolve.

It seems to me he has captured something important there about impact, and what needs to be done to achieve it.


Woodpeckers

It’s been a tough year for Great Spotted Woodpeckers. I know this because I was lucky enough to speak recently to a former work colleague who now publishes about woodpeckers in her retirement. The problem this year has been the cold spring, which dramatically impacted the caterpillars that the woodpeckers rely on.

This is also evident in my garden. In previous years woodpeckers have been just occasional visitors to the garden feeders, and always in the coldest winter weather. But this year they have been much more regular visitors in the late spring and summer when they will have been rearing young. A couple of weeks ago this was confirmed when not only the adult, but also a juvenile woodpecker could be seen. The juvenile is the one with the completely red head in this photo.

The good news is that this family seems to have fledged at least one chick. Across the country I suspect garden feeders have been important this year.


Open data: technology and culture

Last week I attended an informal dinner at the Royal Society of Chemistry to discuss their plans for enabling better data management and sharing in the chemical sciences. It was a private discussion, so I won't share the details, but a thread ran through the conversation that crops up regularly in the context of open data discussions. What are the barriers for researchers to better manage and share their data?

One barrier is a technological one. Researchers lack the systems they need to share data in ways that are discoverable and usable by other researchers. The technological barrier has many forms. It could be about the infrastructure, or standards for interoperability, or a lack of money to operate the systems.

The second barrier is cultural. Even if the technology were perfect, would researchers in sufficient numbers actually use the systems and share data? This often depends on norms within disciplines, and on the relative value that is placed on collecting or interpreting data. Policy instruments – incentives and sanctions – are often cited as the solution to this barrier.

The reality is that both of these barriers are real, and need to be tackled. The biggest challenge is that these two barriers are linked to one another. It's too easy to develop wonderful tools that don't fit with the culture of a discipline, or to believe that a policy instrument will bring about change in the absence of appropriate infrastructure. Debating which barrier is more important, or expecting action on one alone to bring about change, is unlikely to achieve the open data revolution.

 


Assessing impact

One of the ironies of working in research policy is how little research evidence there is to inform policy-making decisions. And this is especially true in the relatively new area of assessing the wider impacts of research. An excellent new contribution to this sparse evidence base has recently been published by Holbrook and Hrotic, and it’s available [pdf] open access too.

The paper is nicely written, clear and accessible. It reports the outcomes of a survey looking at attitudes towards peer review of research proposals, the relationship between research and its wider impacts, and the extent to which those wider impacts can be assessed. The survey approach raises some questions (acknowledged by the authors), which means that the conclusions can’t necessarily be generalised, but it nonetheless makes a big contribution in my view. As the authors point out, perhaps a little tongue-in-cheek:

we decided the methodological risk was worth the intellectual (and potentially societal) reward: our aim was to have a broader impact rather than simply adhering to standards of intrinsic worth.

The survey group included academics, people from within funding agencies, and those without affiliation, although the latter group were self-selecting and were drawn from people who had expressed an interest in peer review of research.

The findings are interesting and are broadly consistent across the different groups:

  • A significant degree of conservatism was apparent, with participants showing a strong preference for limiting peer review to experts in the immediate discipline of the proposed research.
  • There was also a strong preference for narratives of research that emphasised autonomy, and did not see direction towards wider societal benefits as appropriate.
  • But despite both of the former points, there was a strong confidence that researchers in the discipline concerned were well placed to assess the wider impacts of proposed research.

It is the final point which is especially telling, as one of the regular objections made to the inclusion of ‘impact’ criteria in the assessment of research is that researchers are not best placed to make the assessment. But the participants of this survey didn’t reflect that, and, as Holbrook and Hrotic point out, if researchers don’t assess wider impacts themselves, then who will?

Scientists who resist including impacts as part of peer review ought to consider the possibility that if scientists do not determine impacts, someone else will determine impacts for them. Determination to resist impacts may well undermine scientific autonomy.

The central point is that resistance to impact is more about disagreement with the notion of impact, and less about its practical application:

the scientific community’s resistance to including societal impact considerations as part of peer review seems not to be based on a perceived inability to judge societal impacts. Instead, it seems, scientists simply do not want to address societal impacts considerations. They would prefer that proposals be judged on their intrinsic merit by disciplinary peers, trusting the fact that funding good science will somehow lead to societal benefits.

This paper is interesting in the context of the assessment of impact retrospectively in the current Research Excellence Framework (REF). In this case the group of peers that will assess impact will be made up of both researchers with a disciplinary focus and users of that research. There are some interesting questions around how the interactions between these two groups will play out in practice. Will they come to the same conclusions? If not, how will differences be resolved? I hope we will be able to explore these and other questions as part of the wider evaluation of the REF.

Finally, my attention was also captured by another point made in the paper, concerning the resistance of funding agencies to research into their assessment processes:

there are structural reasons that tend to discourage research on peer review, especially peer review of grant proposals. Public science funding agencies are generally committed to confidentiality regarding reviews and proposals. Such agencies are also budget-sensitive, meaning that they do not want to open themselves up to studies that may endanger their budgets. [emphasis added]

I can certainly recognise the second point from my own experience. I think this is one of the factors leading to the dearth of good evidence to inform research policy that I mentioned at the start of this post. If we are going to make research policy decisions that are informed by evidence, then we need to open up to effective research and evaluation, and take the associated risks for the greater good.
