Link to original content: https://www.ncbi.nlm.nih.gov/pubmed/26073974
Syst Rev. 2015 Jun 15;4:80. doi: 10.1186/s13643-015-0067-6.

Faster title and abstract screening? Evaluating Abstrackr, a semi-automated online screening program for systematic reviewers

John Rathbone et al. Syst Rev.

Abstract

Background: Citation screening is time-consuming and inefficient. We sought to evaluate the performance of Abstrackr, a semi-automated online tool for predictive title and abstract screening.

Methods: Four systematic reviews (aHUS, dietary fibre, ECHO, rituximab) were used to evaluate Abstrackr. Citations from electronic searches of biomedical databases were imported into Abstrackr, and titles and abstracts were screened and included or excluded according to the entry criteria. This process continued until Abstrackr predicted and classified the remaining unscreened citations as relevant or irrelevant. These classification predictions were checked for accuracy against the original review decisions. Sensitivity analyses were performed to assess the effects of including case reports in the aHUS dataset whilst screening and the effects of using larger, imbalanced datasets with the ECHO dataset. The performance of Abstrackr was calculated according to the number of relevant studies missed, the workload saving, the false negative rate, and the precision of the algorithm in correctly predicting relevant studies for inclusion (i.e. for further full text inspection).
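The four performance measures named above reduce to confusion-matrix arithmetic over Abstrackr's predictions on the unscreened citations. A minimal sketch, using hypothetical counts (not figures from this study) and one common definition of the false negative rate:

```python
# Hedged sketch of the screening metrics described in the Methods.
# tp/fp/tn/fn are hypothetical counts of Abstrackr's predictions on
# unscreened citations, checked against the original review decisions;
# they are illustrative placeholders, not the study's data.

def screening_metrics(tp, fp, tn, fn, total_citations):
    """Return (precision, false negative rate, workload saving %, studies missed)."""
    # Precision: share of predicted-relevant citations that were truly relevant,
    # i.e. how much of the further full text inspection is warranted.
    precision = tp / (tp + fp)
    # False negative rate (one common definition): truly relevant citations
    # incorrectly predicted as irrelevant, as a share of all relevant citations.
    false_negative_rate = fn / (fn + tp)
    # Workload saving: citations predicted not relevant (and so never screened
    # manually), as a percentage of the total citations in the review.
    workload_saving_pct = 100 * (tn + fn) / total_citations
    # Relevant studies missed: citations wrongly excluded by the prediction.
    studies_missed = fn
    return precision, false_negative_rate, workload_saving_pct, studies_missed

# Hypothetical example: 1000 citations in total, 485 left to Abstrackr's predictor.
p, fnr, ws, missed = screening_metrics(tp=45, fp=55, tn=380, fn=5,
                                       total_citations=1000)
print(p, fnr, ws, missed)  # 0.45 0.1 38.5 5
```

The trade-off the Results describe falls out of these definitions directly: pushing the predictor to exclude more citations raises the workload saving but also raises the false negative rate and the number of relevant studies missed.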

Results: Of the unscreened citations, Abstrackr's prediction algorithm correctly identified all relevant citations for the rituximab and dietary fibre reviews. However, one relevant citation in each of the aHUS and ECHO reviews was incorrectly predicted as not relevant. The workload saving achieved with Abstrackr varied with the complexity and size of the reviews (9 % rituximab, 40 % dietary fibre, 67 % aHUS, and 57 % ECHO). The proportion of citations predicted as relevant and therefore warranting further full text inspection (i.e. the precision of the prediction) ranged from 16 % (aHUS) to 45 % (rituximab) and was affected by the complexity of the reviews. The false negative rate ranged from 2.4 to 21.7 %. Sensitivity analysis performed on the aHUS dataset increased the precision from 16 to 25 % and increased the workload saving by 10 %, but increased the number of relevant studies missed. Sensitivity analysis performed with the larger ECHO dataset increased the workload saving (80 %) but reduced the precision (6.8 %) and increased the number of missed citations.

Conclusions: Semi-automated title and abstract screening with Abstrackr has the potential to save time and reduce research waste.


Figures

Fig. 1: Percentage of citations predicted by Abstrackr that were relevant for further full text inspection. *Raw numbers of the proportion of citations selected for inspection
Fig. 2: False negative rate. *Raw numbers of the proportion of citations incorrectly predicted by Abstrackr to be irrelevant for further inspection
Fig. 3: Percentage of studies missed by Abstrackr but included in the reviews. *Raw numbers of the proportion of citations missed (predicted not relevant)
Fig. 4: Workload saving (%) when using Abstrackr in each of the four datasets. *Raw numbers of the proportion of citations predicted not relevant from the total


