Automatic Eye-Tracking-Assisted Chest Radiography Pathology Screening

  • Conference paper
Pattern Recognition and Image Analysis (IbPRIA 2023)

Abstract

Chest radiography is increasingly used worldwide to diagnose a range of illnesses affecting the lungs and heart. The high volume of examinations places a severe burden on radiologists, who would benefit from the introduction of artificial intelligence tools, such as deep learning classification models, into clinical practice. Nevertheless, these models are seeing limited adoption due to the lack of trustworthy explanations that provide insight into their reasoning. To increase explainability, the deep learning approaches developed in this work incorporate eye-tracking data collected from experts into their decision process. More specifically, eye-tracking data is used in the form of heatmaps that modify the input to the selected classifier, an EfficientNet-b0, and guide its focus towards relevant parts of the images. Prior to the classification task, UNet-based models perform heatmap reconstruction, making this framework independent of eye-tracking data during inference. The two proposed approaches are applied to all public eye-tracking datasets for chest X-ray screening known to us, namely EGD, REFLACX and CXR-P. For these datasets, the reconstructed heatmaps highlight important anatomical/pathological regions, and the area-under-the-curve results are comparable to the state of the art and to the considered baseline. Furthermore, the quality of the explanations derived from the classifier is superior for one of the approaches, which can be attributed to the use of eye-tracking data.

This work was funded by the ERDF - European Regional Development Fund, through the Programa Operacional Regional do Norte (NORTE 2020) and by National Funds through the FCT - Portuguese Foundation for Science and Technology, I.P. within the scope of the CMU Portugal Program (NORTE-01-0247-FEDER-045905) and LA/P/0063/2020.


References

  1. DeGrave, A.J., Janizek, J.D., Lee, S.I.: AI for radiographic COVID-19 detection selects shortcuts over signal. Nat. Mach. Intell. 3(7), 610–619 (2021)


  2. Aresta, G., et al.: Automatic lung nodule detection combined with gaze information improves radiologists’ screening performance. IEEE J. Biomed. Health Inform. 24, 2894–2901 (2020)


  3. Bhattacharya, M., Jain, S., Prasanna, P.: RadioTransformer: a cascaded global-focal transformer for visual attention–guided disease classification. In: Avidan, S., Brostow, G., Cisse, M., Farinella, G.M., Hassner, T. (eds.) Computer Vision – ECCV 2022. ECCV 2022. LNCS, vol. 13681, pp. 679–698. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-19803-8_40

  4. Huang, G., Liu, Z., van der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2261–2269 (2017)


  5. Iakubovskii, P.: Segmentation Models PyTorch (2019)


  6. Jiang, P.T., Zhang, C.B., Hou, Q., Cheng, M.M., Wei, Y.: LayerCAM: exploring hierarchical class activation maps for localization. IEEE Trans. Image Process. 30, 5875–5888 (2021)


  7. Jocher, G., et al.: ultralytics/yolov5: v7.0 - YOLOv5 SOTA Realtime Instance Segmentation, November 2022


  8. Karargyris, A., et al.: Creation and validation of a chest X-ray dataset with eye-tracking and report dictation for AI development. Sci. Data 8(1), 1–18 (2021)


  9. Lanfredi, R.B., et al.: REFLACX, a dataset of reports and eye-tracking data for localization of abnormalities in chest X-rays. Sci. Data 9(1), 1–15 (2022)


  10. Moreira, C., Nobre, I.B., Sousa, S.C., Pereira, J.M., Jorge, J.: Improving X-ray diagnostics through eye-tracking and XR. In: Proceedings - 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops, VRW 2022, pp. 450–453 (2022)


  11. Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (eds.) MICCAI 2015. LNCS, vol. 9351, pp. 234–241. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24574-4_28


  12. Saab, K., et al.: Observational supervision for medical image classification using gaze data. In: de Bruijne, M., et al. (eds.) MICCAI 2021. LNCS, vol. 12902, pp. 603–614. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-87196-3_56


  13. Saporta, A., et al.: Benchmarking saliency methods for chest x-ray interpretation. Nat. Mach. Intell. 4(10), 867–878 (2022)


  14. Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad-CAM: visual explanations from deep networks via gradient-based localization. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 618–626 (2017)


  15. Tan, M., Le, Q.: EfficientNet: rethinking model scaling for convolutional neural networks. In: Chaudhuri, K., Salakhutdinov, R. (eds.) Proceedings of the 36th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 97, pp. 6105–6114. PMLR, 09–15 June 2019



Author information

Correspondence to Rui Santos.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Santos, R., Pedrosa, J., Mendonça, A.M., Campilho, A. (2023). Automatic Eye-Tracking-Assisted Chest Radiography Pathology Screening. In: Pertusa, A., Gallego, A.J., Sánchez, J.A., Domingues, I. (eds) Pattern Recognition and Image Analysis. IbPRIA 2023. Lecture Notes in Computer Science, vol 14062. Springer, Cham. https://doi.org/10.1007/978-3-031-36616-1_41


  • DOI: https://doi.org/10.1007/978-3-031-36616-1_41


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-36615-4

  • Online ISBN: 978-3-031-36616-1

  • eBook Packages: Computer Science (R0)
