Abstract
Optical coherence tomography angiography (OCTA) is a new imaging modality that can fully characterize retinal blood flow. Because OCTA images are inconvenient to acquire and prone to mechanical artifacts, we introduce deep learning to generate OCTA from OCT. In this paper, we propose a texture-guided down- and up-sampling model based on U-Net for OCT-to-OCTA generation. A novel texture-guided sampling block is proposed that combines extracted texture features with content-adaptive convolutions; the corresponding down-sampling and up-sampling operations preserve more textural detail during convolution and deconvolution, respectively. A deeply supervised texture-guided U-Net is then constructed by substituting the texture-guided sampling blocks for the traditional convolutions. Moreover, the image Euclidean distance is used to construct the loss function, which is more robust to noise and better captures the similarities between OCT and OCTA images. A dataset of paired OCT and OCTA images from 489 eyes diagnosed with various retinal diseases is used to evaluate the proposed network. Cross-validation results demonstrate the stability and superior performance of the proposed model compared with state-of-the-art semantic segmentation models and GANs.
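The texture-guided sampling blocks build on content-adaptive (pixel-adaptive) convolutions, in which each spatial kernel weight is modulated by the similarity of a guidance (texture) feature between the centre pixel and its neighbour. The following single-channel NumPy sketch illustrates the general idea only; the function name, the Gaussian adaptation term, and the `sigma` parameter are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def pixel_adaptive_conv(x, guide, weights, sigma=1.0):
    """Content-adaptive convolution sketch: the fixed kernel
    `weights` is modulated at every position by a Gaussian of the
    guidance-feature difference between the centre pixel and its
    neighbour, so averaging across texture boundaries is suppressed."""
    H, W = x.shape
    k = weights.shape[0] // 2
    xp = np.pad(x, k)                    # zero-pad the content image
    gp = np.pad(guide, k, mode="edge")   # edge-pad the guidance image
    out = np.zeros((H, W), dtype=np.float64)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            nx = xp[k + dy:k + dy + H, k + dx:k + dx + W]  # shifted content
            ng = gp[k + dy:k + dy + H, k + dx:k + dx + W]  # shifted guidance
            adapt = np.exp(-0.5 * ((guide - ng) / sigma) ** 2)
            out += weights[k + dy, k + dx] * adapt * nx
    return out
```

With a constant guidance image the adaptation term is 1 everywhere and the block reduces to an ordinary convolution; with a textured guide, kernel weights that straddle strong texture edges are attenuated, which is the detail-preserving behaviour the sampling blocks rely on.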
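The loss replaces the plain pixel-wise Euclidean distance with the image Euclidean distance (IMED) of Wang et al. (2005), which weights pairs of pixel differences by their spatial proximity. A compact sketch of the squared distance follows; the function name and `sigma` are illustrative assumptions, and a real training loop would use a differentiable framework rather than NumPy:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def imed_sq(x, y, sigma=1.0):
    """Squared image Euclidean distance: d^2 = (x - y)^T G (x - y),
    where G_ij is a Gaussian of the spatial distance between pixels
    i and j. Since applying G is just Gaussian filtering, the
    quadratic form reduces to a pointwise product of the difference
    image with its blurred copy, making the metric tolerant of small
    spatial perturbations and noise."""
    diff = x.astype(np.float64) - y.astype(np.float64)
    smoothed = gaussian_filter(diff, sigma=sigma)   # G (x - y)
    return float(np.sum(diff * smoothed))           # (x - y)^T G (x - y)
```

Unlike the plain L2 distance, which scores a one-pixel shift and a far displacement identically, IMED assigns a smaller distance to the nearby shift, which is the robustness property exploited in the loss.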
The authors declare no conflicts of interest. This work was supported in part by the National Natural Science Foundation of China under Grant No. 62072241, in part by the Natural Science Foundation of Jiangsu Province under Grant No. BK20180069, in part by the Six Talent Peaks Project in Jiangsu Province under Grant No. SWYY-056, and in part by National Institutes of Health Grant No. P30-EY026877.
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Zhang, Z., Ji, Z., Chen, Q., Yuan, S., Fan, W. (2021). Texture-Guided U-Net for OCT-to-OCTA Generation. In: Ma, H., et al. (eds.) Pattern Recognition and Computer Vision. PRCV 2021. Lecture Notes in Computer Science, vol. 13022. Springer, Cham. https://doi.org/10.1007/978-3-030-88013-2_4
DOI: https://doi.org/10.1007/978-3-030-88013-2_4
Print ISBN: 978-3-030-88012-5
Online ISBN: 978-3-030-88013-2