AI and systematic reviews

Out of date before it’s published

Jon Brock, writing in Nature Index last month, discusses how the use of artificial intelligence is revolutionising the generation of systematic reviews.

Looking at a fast-moving field, research into the Zika virus, he reports that:

by the time their review was published in January 2017, it was already out of date. In the eight months it had taken to synthesise the evidence and pass peer review, another 1,400 new Zika-related papers had been added to the scientific literature.

A solution to this is the idea of the living (i.e. constantly updated) systematic review, and a new approach is being pioneered by Cochrane:

Central to Cochrane’s technological revolution is a semi-automated ‘evidence pipeline’ that combines machine learning with a crowd-sourcing platform.
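The article doesn't detail how the pipeline works internally, but the general triage idea — a classifier handles the confident cases and people handle the uncertain ones — can be sketched as follows. The scoring function, thresholds, and keyword lists here are illustrative assumptions, not Cochrane's actual model:

```python
def relevance_score(abstract: str, keywords: list[str]) -> float:
    """Toy stand-in for a trained classifier: fraction of keywords present."""
    text = abstract.lower()
    hits = sum(1 for kw in keywords if kw in text)
    return hits / len(keywords)

def triage(abstract: str, keywords: list[str],
           include_at: float = 0.75, exclude_at: float = 0.25) -> str:
    """Route a paper based on classifier confidence."""
    score = relevance_score(abstract, keywords)
    if score >= include_at:
        return "auto-include"   # confident: add to the review's candidate set
    if score <= exclude_at:
        return "auto-exclude"   # confident: discard
    return "crowd-screen"       # uncertain: send to human screeners

keywords = ["zika", "randomised", "trial"]
print(triage("A randomised trial of a Zika vaccine.", keywords))   # auto-include
print(triage("A survey of mosquito habitats.", keywords))          # auto-exclude
```

The appeal of this division of labour is that the machine never has to be right about everything: anything it is unsure of falls through to the crowd-sourcing platform.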

Brock also reports that researchers working with Cochrane are experimenting with further automating the process of including new articles into reviews:

A demo, now live on the team’s website, allows users to drag and drop a PDF and within a few seconds receive a report with key information from the paper and a preliminary risk of bias rating. It’s not ready for use in the production of systematic reviews, but Mavergames is excited by its potential. “If it gets good enough that we can let a machine do one review and a human do the other, that’s huge,” he says. “It’s a 50% efficiency saving.”
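Brock doesn't describe how the demo arrives at its preliminary risk-of-bias rating, but one simple way to picture such a report is phrase-spotting across standard bias domains. The domains and signal phrases below are illustrative assumptions, not the demo's actual method:

```python
# Hypothetical signal phrases for two common risk-of-bias domains.
BIAS_SIGNALS = {
    "random sequence generation": ["randomly assigned", "computer-generated"],
    "blinding": ["double-blind", "blinded assessors"],
}

def preliminary_risk_of_bias(paper_text: str) -> dict[str, str]:
    """Flag each domain 'low' if a signal phrase appears, else 'unclear'."""
    text = paper_text.lower()
    return {
        domain: "low" if any(p in text for p in phrases) else "unclear"
        for domain, phrases in BIAS_SIGNALS.items()
    }

report = preliminary_risk_of_bias(
    "Participants were randomly assigned in this double-blind study."
)
print(report)  # both domains flagged 'low'
```

A real system would work from the full PDF and a trained model rather than a phrase list, but the output shape — a per-domain rating a human can then verify — is the same.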

The potential of technologies like this is indeed huge: their applications extend well beyond the production of systematic reviews, and they are set to revolutionise many aspects of research and its management.


Written on August 30, 2019

© 2019 Steven Hill. Unless otherwise stated, this work is licensed under a Creative Commons Attribution 4.0 International License.