Faster title and abstract screening? Evaluating Abstrackr, a semi-automated online screening program for systematic reviewers
- PMID: 26073974
- PMCID: PMC4472176
- DOI: 10.1186/s13643-015-0067-6
Abstract
Background: Citation screening is time-consuming and inefficient. We sought to evaluate the performance of Abstrackr, a semi-automated online tool for predictive title and abstract screening.
Methods: Four systematic reviews (aHUS, dietary fibre, ECHO, rituximab) were used to evaluate Abstrackr. Citations from electronic searches of biomedical databases were imported into Abstrackr, and titles and abstracts were screened and included or excluded according to each review's entry criteria. Screening continued until Abstrackr predicted and classified the remaining unscreened citations as relevant or irrelevant. These predictions were checked for accuracy against the original review decisions. Sensitivity analyses assessed the effects of including case reports whilst screening the aHUS dataset and of using a larger, imbalanced version of the ECHO dataset. Abstrackr's performance was calculated according to the number of relevant studies missed, the workload saving, the false negative rate, and the precision of the algorithm in correctly predicting relevant studies for inclusion (i.e. those warranting further full-text inspection).
Results: Of the unscreened citations, Abstrackr's prediction algorithm correctly identified all relevant citations for the rituximab and dietary fibre reviews. However, one relevant citation in each of the aHUS and ECHO reviews was incorrectly predicted as not relevant. The workload saving achieved with Abstrackr varied with the complexity and size of the reviews (9% rituximab, 40% dietary fibre, 67% aHUS, and 57% ECHO). The proportion of citations predicted as relevant, and therefore warranting further full-text inspection (i.e. the precision of the prediction), ranged from 16% (aHUS) to 45% (rituximab) and was affected by the complexity of the reviews. The false negative rate ranged from 2.4% to 21.7%. Sensitivity analysis on the aHUS dataset increased the precision from 16% to 25% and increased the workload saving by 10%, but also increased the number of relevant studies missed. Sensitivity analysis with the larger ECHO dataset increased the workload saving (80%) but reduced the precision (6.8%) and increased the number of missed citations.
Conclusions: Semi-automated title and abstract screening with Abstrackr has the potential to save time and reduce research waste.
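The metrics reported above (workload saving, precision, and false negative rate) can be derived from a screening confusion matrix. The sketch below is an illustrative reconstruction, not the study's code: the function name and the counts in the usage example are hypothetical, and the exact operationalization of workload saving in the paper may differ.

```python
def screening_metrics(tp, fp, fn, tn):
    """Evaluation metrics for a semi-automated screening tool.

    tp: relevant citations correctly predicted relevant
    fp: irrelevant citations predicted relevant
    fn: relevant citations predicted irrelevant (missed studies)
    tn: irrelevant citations correctly predicted irrelevant
    """
    total = tp + fp + fn + tn
    # Workload saving (one common definition): fraction of citations the
    # reviewer never has to inspect because the tool excluded them.
    workload_saving = (tn + fn) / total
    # Precision: of the citations flagged for full-text inspection,
    # the fraction that are actually relevant.
    precision = tp / (tp + fp)
    # False negative rate: proportion of relevant citations missed.
    false_negative_rate = fn / (tp + fn)
    return workload_saving, precision, false_negative_rate

# Hypothetical counts for illustration only (not data from the study):
ws, prec, fnr = screening_metrics(tp=30, fp=70, fn=2, tn=898)
# -> workload saving 0.9, precision 0.3, false negative rate 0.0625
```

Note the trade-off this makes explicit, and which the Results illustrate: a larger workload saving typically comes at the cost of more missed relevant studies (a higher false negative rate).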
Similar articles
- Technology-assisted title and abstract screening for systematic reviews: a retrospective evaluation of the Abstrackr machine learning tool. Syst Rev. 2018 Mar 12;7(1):45. doi: 10.1186/s13643-018-0707-8. PMID: 29530097. Free PMC article.
- A text-mining tool generated title-abstract screening workload savings: performance evaluation versus single-human screening. J Clin Epidemiol. 2022 Sep;149:53-59. doi: 10.1016/j.jclinepi.2022.05.017. Epub 2022 May 30. PMID: 35654270.
- Performance and usability of machine learning for screening in systematic reviews: a comparative evaluation of three tools. Syst Rev. 2019 Nov 15;8(1):278. doi: 10.1186/s13643-019-1222-2. PMID: 31727150. Free PMC article.
- Comparison of a traditional systematic review approach with review-of-reviews and semi-automation as strategies to update the evidence. Syst Rev. 2020 Oct 19;9(1):243. doi: 10.1186/s13643-020-01450-2. PMID: 33076975. Free PMC article. Review.
- Expediting citation screening using PICo-based title-only screening for identifying studies in scoping searches and rapid reviews. Syst Rev. 2017 Nov 25;6(1):233. doi: 10.1186/s13643-017-0629-x. PMID: 29178925. Free PMC article. Review.
Cited by
- An exploration of available methods and tools to improve the efficiency of systematic review production: a scoping review. BMC Med Res Methodol. 2024 Sep 18;24(1):210. doi: 10.1186/s12874-024-02320-4. PMID: 39294580. Free PMC article. Review.
- Cutting-Edge Methodological Guidance for Authors in Conducting the Systematic Review and Meta-Analysis. J Lifestyle Med. 2024 Aug 31;14(2):57-68. doi: 10.15280/jlm.2024.14.2.57. PMID: 39280938. Free PMC article. Review.
- Human-Comparable Sensitivity of Large Language Models in Identifying Eligible Studies Through Title and Abstract Screening: 3-Layer Strategy Using GPT-3.5 and GPT-4 for Systematic Reviews. J Med Internet Res. 2024 Aug 16;26:e52758. doi: 10.2196/52758. PMID: 39151163. Free PMC article.
- Nine quick tips for open meta-analyses. PLoS Comput Biol. 2024 Jul 25;20(7):e1012252. doi: 10.1371/journal.pcbi.1012252. eCollection 2024 Jul. PMID: 39052540. Free PMC article.
- Broccoli Consumption and Risk of Cancer: An Updated Systematic Review and Meta-Analysis of Observational Studies. Nutrients. 2024 May 23;16(11):1583. doi: 10.3390/nu16111583. PMID: 38892516. Free PMC article. Review.