Abstract
Abdominal organ segmentation gives clinicians a more intuitive view of abdominal organ structure and tissue lesions, thereby improving the accuracy of disease diagnosis. Accurate segmentation results provide valuable information for clinical diagnosis and follow-up, such as organ size, location, boundary status, and the spatial relationships among multiple organs. Because manual labels are costly and difficult to obtain in medical segmentation, the use of pseudo-labels has become an inevitable trend. In this paper, we demonstrate that pseudo-labels enrich the training samples and enhance the model's ability to learn features of abdominal organs and tumors. We propose a semi-supervised parallel segmentation model that simultaneously aggregates local and global information using parallel CNN and Transformer modules at high scales. A two-stage strategy and a lightweight network make our model highly efficient. Our method achieved average DSC scores of 89.75% and 3.78% for organs and tumors, respectively, on the testing set; the average NSD scores were 93.51% and 1.82%. The average running time and area under the GPU memory-time curve were 14.85 s and 15963 MB, respectively.
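The DSC values reported above are Dice similarity coefficients between predicted and reference masks. As a minimal illustration (not the challenge's evaluation code), the Dice score on binary NumPy masks can be sketched as:

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient (DSC) between two binary masks:
    2 * |pred ∩ gt| / (|pred| + |gt|)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return float(2.0 * intersection / (pred.sum() + gt.sum() + eps))

# Toy 2D example: two overlapping 4x4 square "organ" masks.
a = np.zeros((8, 8)); a[2:6, 2:6] = 1  # 16 foreground pixels
b = np.zeros((8, 8)); b[3:7, 3:7] = 1  # 16 foreground pixels, 9 overlapping
print(round(dice_score(a, b), 4))      # 2*9 / (16+16) = 0.5625
```

The challenge additionally reports NSD (normalized surface distance), which compares mask boundaries within a tolerance rather than overlapping volumes.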
References
Tang, Y., et al.: High-resolution 3D abdominal segmentation with random patch network fusion. Med. Image Anal. 69, 101894 (2021)
Liu, W., Xu, W., Yan, S., Wang, L., Li, H., Yang, H.: Combining self-training and hybrid architecture for semi-supervised abdominal organ segmentation. In: Ma, J., Wang, B. (eds.) FLARE 2022. LNCS, vol. 13816, pp. 281–292. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-23911-3_25
Liu, W., et al.: PHTrans: parallelly aggregating global and local representations for medical image segmentation. In: Wang, L., Dou, Q., Fletcher, P.T., Speidel, S., Li, S. (eds.) MICCAI 2022. LNCS, vol. 13435, pp. 235–244. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-16443-9_23
Cao, H., et al.: Swin-Unet: Unet-like pure transformer for medical image segmentation. In: Karlinsky, L., Michaeli, T., Nishino, K. (eds.) ECCV 2022. LNCS, vol. 13803, pp. 205–218. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-25066-8_9
Liu, Z., et al.: Swin transformer: hierarchical vision transformer using shifted windows. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 10012–10022 (2021)
Huang, Z., et al.: Revisiting nnU-net for iterative pseudo labeling and efficient sliding window inference. In: Ma, J., Wang, B. (eds.) FLARE 2022. LNCS, vol. 13816, pp. 178–189. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-23911-3_16
Wang, E., Zhao, Y., Wu, Y.: Cascade dual-decoders network for abdominal organs segmentation. In: Ma, J., Wang, B. (eds.) FLARE 2022. LNCS, vol. 13816, pp. 202–213. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-23911-3_18
Ma, J., et al.: Fast and low-GPU-memory abdomen CT organ segmentation: the FLARE challenge. Med. Image Anal. 82, 102616 (2022)
Ma, J., et al.: Unleashing the strengths of unlabeled data in pan-cancer abdominal organ quantification: the FLARE22 challenge. arXiv preprint arXiv:2308.05862 (2023)
Clark, K., et al.: The cancer imaging archive (TCIA): maintaining and operating a public information repository. J. Digit. Imaging 26(6), 1045–1057 (2013)
Bilic, P., et al.: The liver tumor segmentation benchmark (LiTS). Med. Image Anal. 84, 102680 (2023)
Simpson, A.L., et al.: A large annotated medical image dataset for the development and evaluation of segmentation algorithms. arXiv preprint arXiv:1902.09063 (2019)
Heller, N., et al.: The state of the art in kidney and kidney tumor segmentation in contrast-enhanced CT imaging: results of the KiTS19 challenge. Med. Image Anal. 67, 101821 (2021)
Heller, N., et al.: An international challenge to use artificial intelligence to define the state-of-the-art in kidney and kidney tumor segmentation in CT imaging. Am. Soc. Clin. Oncol. 38(6), 626 (2020)
Gatidis, S., et al.: A whole-body FDG-PET/CT dataset with manually annotated tumor lesions. Sci. Data 9(1), 601 (2022)
Gatidis, S., et al.: The autopet challenge: towards fully automated lesion segmentation in oncologic PET/CT imaging. Preprint at Research Square (Nature Portfolio) (2023)
Wasserthal, J., et al.: TotalSegmentator: robust segmentation of 104 anatomic structures in CT images. Radiol. Artif. Intell. 5(5), e230024 (2023)
Ma, J., et al.: AbdomenCT-1K: is abdominal organ segmentation a solved problem? IEEE Trans. Pattern Anal. Mach. Intell. 44(10), 6695–6714 (2022)
Yushkevich, P.A., Gao, Y., Gerig, G.: ITK-SNAP: an interactive tool for semi-automatic segmentation of multi-modality biomedical images. In: Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 3342–3345 (2016)
Isensee, F., Jaeger, P.F., Kohl, S.A.A., Petersen, J., Maier-Hein, K.H.: nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat. Methods 18(2), 203–211 (2021)
Ma, J., He, Y., Li, F., Han, L., You, C., Wang, B.: Segment anything in medical images. Nat. Commun. 15, 654 (2024)
Pavao, A., et al.: CodaLab competitions: an open source platform to organize scientific challenges. J. Mach. Learn. Res. 24(198), 1–6 (2023)
Acknowledgements
The authors of this paper declare that the segmentation method they implemented for participation in the FLARE 2023 challenge has not used any pre-trained models nor additional datasets other than those provided by the organizers. The proposed solution is fully automatic without any manual intervention. We thank all the data owners for making the CT scans publicly available and CodaLab [22] for hosting the challenge platform.
This work was supported by the National Natural Science Foundation of China (62271149) and the Fujian Provincial Natural Science Foundation (2021J02019, 2021J01578).
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Chen, Y., Wu, Z., Chen, H., Yang, M. (2024). Conformer: A Parallel Segmentation Network Combining Swin Transformer and Convolutional Neural Network. In: Ma, J., Wang, B. (eds) Fast, Low-resource, and Accurate Organ and Pan-cancer Segmentation in Abdomen CT. FLARE 2023. Lecture Notes in Computer Science, vol 14544. Springer, Cham. https://doi.org/10.1007/978-3-031-58776-4_20
DOI: https://doi.org/10.1007/978-3-031-58776-4_20
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-58775-7
Online ISBN: 978-3-031-58776-4
eBook Packages: Computer Science (R0)