
Cross-Modality Multi-atlas Segmentation Using Deep Neural Networks

  • Conference paper
  • First Online:
Medical Image Computing and Computer Assisted Intervention – MICCAI 2020 (MICCAI 2020)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12263)

Abstract

Both image registration and label fusion in multi-atlas segmentation (MAS) rely on intensity similarity between target and atlas images. However, such similarity can be unreliable when target and atlas images are acquired with different imaging protocols. High-level structural information, extracted by deep neural networks (DNNs), can provide a reliable similarity measure for cross-modality images. This work presents a new MAS framework for cross-modality images in which both image registration and label fusion are performed by DNNs. For image registration, we propose a consistent registration network that jointly estimates forward and backward dense displacement fields (DDFs); an invertibility constraint is further imposed to reduce the correspondence ambiguity of the estimated DDFs. For label fusion, we adapt a few-shot learning network to measure the similarity between atlas and target patches, and this network integrates seamlessly into patch-based label fusion. The proposed framework is evaluated on the MM-WHS dataset of MICCAI 2017, and the results show that it is effective for both cross-modality registration and segmentation.

X. Zhuang and L. Huang are co-senior authors. This work was funded by the National Natural Science Foundation of China (Grant No. 61971142) and the Shanghai Municipal Science and Technology Major Project (Grant No. 2017SHZDZX01).
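
The abstract describes two learned components: a registration network that jointly predicts forward and backward dense displacement fields (DDFs) under an invertibility constraint, and a few-shot similarity network plugged into patch-based label fusion. As a rough illustration of those two ideas only, the following is a minimal NumPy sketch, not the authors' implementation: an inverse-consistency residual that such a registration network could penalise, and prototype-based fusion weights in the spirit of prototypical few-shot learning. All function names, tensor shapes, and the nearest-neighbour sampling are illustrative assumptions.

# Hypothetical sketch (NumPy only), illustrating two ideas from the abstract:
# (1) an inverse-consistency residual for a pair of forward/backward dense
#     displacement fields (DDFs), which a consistent registration network
#     could penalise during training, and
# (2) prototype-based fusion weights for patch-based label fusion, in the
#     spirit of prototypical few-shot learning.
# Names, shapes, and the nearest-neighbour warping are illustrative choices,
# not the authors' implementation.

import numpy as np


def inverse_consistency_residual(fwd_ddf, bwd_ddf):
    """Residual r(x) = u(x) + v(x + u(x)) for 2-D DDFs of shape (H, W, 2).

    u = fwd_ddf maps target coordinates towards the atlas; v = bwd_ddf maps
    atlas coordinates back. For a perfectly invertible pair the residual is
    zero everywhere, so its norm can be averaged into a training loss.
    Nearest-neighbour sampling keeps the sketch short; a differentiable
    implementation would interpolate instead.
    """
    h, w, _ = fwd_ddf.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Grid locations reached by the forward field, clipped to the image.
    yq = np.clip(np.rint(ys + fwd_ddf[..., 0]).astype(int), 0, h - 1)
    xq = np.clip(np.rint(xs + fwd_ddf[..., 1]).astype(int), 0, w - 1)
    # Backward displacement sampled at the warped locations.
    v_at_warped = bwd_ddf[yq, xq]
    return fwd_ddf + v_at_warped  # (H, W, 2) residual field


def prototype_fusion_weights(target_feat, atlas_feats, atlas_labels):
    """Soft label-fusion weights for one target patch.

    target_feat : (D,) embedding of the target patch.
    atlas_feats : (N, D) embeddings of N atlas patches.
    atlas_labels: (N,) integer label of each atlas patch's centre voxel.
    Returns the labels present and a matching probability vector, obtained by
    a softmax over negative squared distances to per-label prototypes.
    """
    labels = np.unique(atlas_labels)
    # Prototype = mean embedding of the atlas patches carrying each label.
    protos = np.stack([atlas_feats[atlas_labels == l].mean(axis=0) for l in labels])
    logits = -np.sum((protos - target_feat) ** 2, axis=1)
    logits -= logits.max()                        # numerical stability
    weights = np.exp(logits) / np.exp(logits).sum()
    return labels, weights


if __name__ == "__main__":
    # Constant translation and its negation form an exactly invertible pair.
    u = np.full((8, 8, 2), 1.5)
    res = inverse_consistency_residual(u, -u)
    print("max |residual|:", np.abs(res).max())   # 0.0 for this pair

    # Random embeddings just to exercise the fusion-weight computation.
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(12, 16))
    labs = rng.integers(0, 3, size=12)
    labels, w = prototype_fusion_weights(rng.normal(size=16), feats, labs)
    print("labels:", labels, "weights:", np.round(w, 3))

In a training setting, the mean norm of such a residual field would be one way to encode the invertible constraint, and the fusion weights would be computed per target patch and combined with the warped atlas labels; the actual networks and losses are described in the full text at the DOI below.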


Author information

Corresponding authors

Correspondence to Xiahai Zhuang or Liqin Huang.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Ding, W., Li, L., Zhuang, X., Huang, L. (2020). Cross-Modality Multi-atlas Segmentation Using Deep Neural Networks. In: Martel, A.L., et al. (eds.) Medical Image Computing and Computer Assisted Intervention – MICCAI 2020. Lecture Notes in Computer Science, vol. 12263. Springer, Cham. https://doi.org/10.1007/978-3-030-59716-0_23

  • DOI: https://doi.org/10.1007/978-3-030-59716-0_23

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-59715-3

  • Online ISBN: 978-3-030-59716-0

  • eBook Packages: Computer Science, Computer Science (R0)
