
Multimodal PET/CT Tumour Segmentation and Prediction of Progression-Free Survival Using a Full-Scale UNet with Attention

  • Conference paper
  • First Online:
Head and Neck Tumor Segmentation and Outcome Prediction (HECKTOR 2021)

Abstract

Segmentation of head and neck (H&N) tumours and prediction of patient outcome are crucial for disease diagnosis and treatment monitoring. The development of robust deep learning models is hindered by the lack of large multi-centre, multi-modal datasets with quality annotations. The MICCAI 2021 HEad and neCK TumOR (HECKTOR) segmentation and outcome prediction challenge provides a platform for comparing methods for segmenting the primary gross target volume on fluorodeoxyglucose (FDG)-PET and computed tomography (CT) images, and for predicting progression-free survival in H&N oropharyngeal cancer. For the segmentation task, we propose a new network based on an encoder-decoder architecture with full inter- and intra-skip connections that exploits low-level and high-level semantics at full scales. Additionally, we use Conditional Random Fields as a post-processing step to refine the predicted segmentation maps. We trained multiple neural networks for tumour volume segmentation and ensembled their predictions, achieving an average Dice Similarity Coefficient of 0.75 in cross-validation and 0.76 on the challenge test set. For the progression-free survival prediction task, we propose a Cox proportional hazards regression combining clinical, radiomic, and deep learning features. Our survival prediction model achieved a concordance index of 0.82 in cross-validation and 0.62 on the challenge test set.
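The two evaluation metrics quoted in the abstract, the Dice Similarity Coefficient for segmentation and the concordance index for survival prediction, can be illustrated with a short sketch. This is a minimal illustration with hypothetical function names and toy inputs, not the authors' implementation; Dice here is the standard overlap formula, and the concordance index follows Harrell's pairwise definition for right-censored data.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice Similarity Coefficient between two binary masks:
    2*|P ∩ T| / (|P| + |T|). eps guards against empty masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def concordance_index(times, events, risk_scores):
    """Harrell's C-index: fraction of comparable pairs whose predicted
    risk ordering agrees with the observed event ordering. A pair (i, j)
    is comparable when subject i had an observed event before time j;
    concordant when the model assigns i the higher risk; ties count 0.5."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy masks: 1 overlapping voxel, 2 predicted and 1 true voxel in total.
print(dice_coefficient([[1, 1], [0, 0]], [[1, 0], [0, 0]]))  # ~0.667

# Perfectly anti-ranked risks give C = 1.0 (higher risk, earlier event).
print(concordance_index([1, 2, 3], [1, 1, 1], [3, 2, 1]))  # 1.0
```

A C-index of 0.5 corresponds to random ranking, which is why the drop from 0.82 in cross-validation to 0.62 on the test set reported above indicates substantial optimism in the cross-validated estimate.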



References

  1. Oreiller, V., et al.: Head and neck tumor segmentation in PET/CT: the HECKTOR challenge. Med. Image Anal. (2021). (under revision)


  2. Andrearczyk, V., et al.: Overview of the HECKTOR challenge at MICCAI 2021: automatic head and neck tumor segmentation and outcome prediction in PET/CT images. In: Andrearczyk, V., Oreiller, V., Hatt, M., Depeursinge, A. (eds.) HECKTOR 2021. LNCS, vol. 13209, pp. 1–37. Springer, Cham (2022)


  3. Huang, B., et al.: Fully automated delineation of gross tumor volume for head and neck cancer on PET-CT using deep learning: a dual-center study. Contrast Media Molec. Imaging 2018, Article ID 8923028, 12 pages (2018)


  4. Andrearczyk, V., et al.: Automatic segmentation of head and neck tumors and nodal metastases in PET-CT scans. In: Proceedings of the Third Conference on Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research, vol. 121, pp. 33–43 (2020)


  5. Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (eds.) MICCAI 2015. LNCS, vol. 9351, pp. 234–241. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24574-4_28


  6. Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 2015, pp. 3431–3440 (2015). https://doi.org/10.1109/CVPR.2015.7298965

  7. Andrearczyk, V., et al.: Overview of the HECKTOR challenge at MICCAI 2020: automatic head and neck tumor segmentation in PET/CT. In: Andrearczyk, V., Oreiller, V., Depeursinge, A. (eds.) HECKTOR 2020. LNCS, vol. 12603, pp. 1–21. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-67194-5_1


  8. Iantsen, A., Visvikis, D., Hatt, M.: Squeeze-and-excitation normalization for automated delineation of head and neck primary tumors in combined PET and CT images. In: Andrearczyk, V., Oreiller, V., Depeursinge, A. (eds.) HECKTOR 2020. LNCS, vol. 12603, pp. 37–43. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-67194-5_4


  9. Huang, H., et al.: UNet 3+: a full-scale connected UNet for medical image segmentation. In: Proceedings of the ICASSP 2020–2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020, pp. 1055–1059 (2020)


  10. Jadon, S.: A survey of loss functions for semantic segmentation. In: IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB) 2020, pp. 1–7 (2020)


  11. Lin, T.Y., Goyal, P., Girshick, R., He, K., Dollar, P.: Focal loss for dense object detection. arXiv preprint arXiv:1708.02002 (2017)

  12. Boykov, Y., Kolmogorov, V.: An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. IEEE TPAMI 26, 1124–1137 (2004)


  13. Kamnitsas, K., et al.: Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation (2017). https://doi.org/10.17863/CAM.6936

  14. Baek, S., He, Y., Allen, B.G., et al.: Deep segmentation networks predict survival of non-small cell lung cancer. Sci. Rep. 9(1), 17286 (2019). Accessed 21 Nov 2019


  15. Afshar, P., Mohammadi, A., Plataniotis, K.N., Oikonomou, A., Benali, H.: From handcrafted to deep-learning-based cancer radiomics: challenges and opportunities. IEEE Signal Process. Mag. 36(4), 132–160 (2019)


  16. Zhou, Z., Rahman Siddiquee, M.M., Tajbakhsh, N., Liang, J.: UNet++: a nested U-Net architecture for medical image segmentation. In: Stoyanov, D., et al. (eds.) DLMIA/ML-CDS 2018. LNCS, vol. 11045, pp. 3–11. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-00889-5_1


  17. Hu, J., Shen, L., Sun, G.: Squeeze-and-excitation networks. CoRR, vol. abs/1709.01507 (2017)


  18. Chen, L.-C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L.: Semantic image segmentation with deep convolutional nets and fully connected CRFs. In: Proceedings of the International Conference on Learning Representations (ICLR) (2015)


  19. Akai, H., et al.: Predicting prognosis of resected hepatocellular carcinoma by radiomics analysis with random survival forest. Diagn. Interv. Imag. 99(10), 643–651 (2018). Epub 2018 Jun 14 PMID: 29910166


  20. Qiu, X., Gao, J., Yang, J., et al.: A comparison study of machine learning (random survival forest) and classic statistic (Cox proportional hazards) for predicting progression in high-grade glioma after proton and carbon ion radiotherapy. Front Oncol. 10, 551420 (2020). Accessed 30 Oct 2020


  21. Katzman, J.L., Shaham, U., Cloninger, A., et al.: DeepSurv: personalized treatment recommender system using a Cox proportional hazards deep neural network. BMC Med. Res. Methodol. 18, 24 (2018)


  22. Kim, D.W., Lee, S., Kwon, S., et al.: Deep learning-based survival prediction of oral cancer patients. Sci. Rep. 9, 6994 (2019)


  23. Kang, S.R., et al.: Survival prediction of non-small cell lung cancer by deep learning model integrating clinical and positron emission tomography data [abstract]. In: Proceedings of the AACR Virtual Special Conference on Artificial Intelligence, Diagnosis, and Imaging, 13–14 January 2021. AACR; Clin. Cancer Res. 27(5 Suppl), Abstract nr PO-029 (2021)


  24. Nadeau, C., Bengio, Y.: Inference for the generalization error. Mach. Learn. 52, 239–281 (2003)


  25. Abraham, N., Khan, N.: A novel focal tversky loss function with improved attention u-net for lesion segmentation. In: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), pp. 683–687 (2019)


  26. Swierczynski, P., et al.: A level-set approach to joint image segmentation and registration with application to CT lung imaging. Comput. Med. Imaging Graph. 65, 58–68 (2018)


  27. Irving, B., et al.: Pieces-of-parts for supervoxel segmentation with global context: Application to DCE-MRI tumour delineation. Med. Image Anal. 32, 69–83 (2016)


  28. Zhong, Z., et al.: 3D fully convolutional networks for co-segmentation of tumors on PET-CT images. In: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), pp. 228–231 (2018)



Acknowledgment

This work was supported by the EPSRC grant number EP/S024093/1 and the Centre for Doctoral Training in Sustainable Approaches to Biomedical Science: Responsible and Reproducible Research (SABS: R³) Doctoral Training Centre, University of Oxford. The authors acknowledge the HECKTOR 2021 challenge for the freely and publicly available PET/CT images and clinical data used in this study [1].

Author information


Corresponding author

Correspondence to Emmanuelle Bourigault.



Copyright information

© 2022 Springer Nature Switzerland AG

About this paper


Cite this paper

Bourigault, E., McGowan, D.R., Mehranian, A., Papież, B.W. (2022). Multimodal PET/CT Tumour Segmentation and Prediction of Progression-Free Survival Using a Full-Scale UNet with Attention. In: Andrearczyk, V., Oreiller, V., Hatt, M., Depeursinge, A. (eds) Head and Neck Tumor Segmentation and Outcome Prediction. HECKTOR 2021. Lecture Notes in Computer Science, vol 13209. Springer, Cham. https://doi.org/10.1007/978-3-030-98253-9_18


  • DOI: https://doi.org/10.1007/978-3-030-98253-9_18

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-98252-2

  • Online ISBN: 978-3-030-98253-9

  • eBook Packages: Computer Science (R0)
