Link to original content: https://doi.org/10.1007/978-3-030-88013-2_4

Texture-Guided U-Net for OCT-to-OCTA Generation

  • Conference paper
  • Pattern Recognition and Computer Vision (PRCV 2021)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 13022)

Abstract

As a new imaging modality, optical coherence tomography angiography (OCTA) can fully explore the characteristics of retinal blood flow. Considering the inconvenience of acquiring OCTA images and the inevitable mechanical artifacts, we introduce deep learning to generate OCTA from OCT. In this paper, we propose a texture-guided down- and up-sampling model based on U-Net for OCT-to-OCTA generation. A novel texture-guided sampling block is proposed by combining extracted texture features with content-adaptive convolutions; the corresponding down-sampling and up-sampling operations preserve more textural detail during convolution and deconvolution, respectively. A deeply supervised texture-guided U-Net is then constructed by substituting the texture-guided sampling blocks for the traditional convolutions. Moreover, the image Euclidean distance is used to construct the loss function, which is more robust to noise and can exploit more of the useful similarities between OCT and OCTA images. A dataset containing paired OCT and OCTA images from 489 eyes diagnosed with various retinal diseases is used to evaluate the performance of the proposed network. Cross-validation results demonstrate the stability and superior performance of the proposed model compared with state-of-the-art semantic segmentation models and GANs.
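The image Euclidean distance (IMED) mentioned in the abstract has a standard closed form (Wang et al., IEEE TPAMI 2005): instead of the pixel-wise L2 distance, pixel differences are weighted by a Gaussian metric over pixel coordinates, so spatially scattered noise is penalized less than coherent structural error. A minimal NumPy sketch of that definition follows; the σ value and image sizes are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def imed_matrix(h, w, sigma=1.0):
    # Gaussian metric G over pixel coordinates (Wang et al., 2005):
    # G_ij = exp(-|P_i - P_j|^2 / (2 sigma^2)) / (2 pi sigma^2),
    # where P_i, P_j are the 2-D coordinates of pixels i and j.
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)

def imed(x, y, sigma=1.0):
    # Image Euclidean distance: sqrt((x - y)^T G (x - y)) on flattened images.
    g = imed_matrix(*x.shape, sigma=sigma)
    d = (x - y).ravel()
    return float(np.sqrt(d @ g @ d))
```

In a training loss this quadratic form would be applied to the difference between the generated and the ground-truth OCTA image; because G is dense, practical implementations typically exploit its separable row/column structure rather than building the full matrix as done here.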

The authors declare no conflicts of interest. This work was supported in part by the National Natural Science Foundation of China under Grant No. 62072241, in part by the Natural Science Foundation of Jiangsu Province under Grant No. BK20180069, in part by the Six Talent Peaks Project in Jiangsu Province under Grant No. SWYY-056, and in part by National Institutes of Health Grant No. P30-EY026877.
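The texture-guided sampling block itself is not specified beyond the abstract, but its core idea, weighting the sampling operation by local texture so that high-texture (vessel/edge) pixels are preserved, can be illustrated. The sketch below is a hypothetical stand-in, not the authors' block: it uses local standard deviation as a simple texture descriptor and performs 2x content-adaptive average pooling in which each pixel's contribution is scaled by its texture response.

```python
import numpy as np

def local_std(img, k=3):
    # Simple texture descriptor: standard deviation in a k x k window.
    h, w = img.shape
    pad = k // 2
    p = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + k, j:j + k].std()
    return out

def texture_guided_downsample(img, k=3):
    # Hypothetical 2x down-sampling: a texture-weighted average over each
    # k x k neighbourhood, so strongly textured pixels dominate the output
    # and fine detail is better preserved than with plain average pooling.
    t = local_std(img, k)
    weights = np.exp(t)          # content-adaptive, strictly positive weights
    num = weights * img
    h, w = img.shape
    pad = k // 2
    pn = np.pad(num, pad, mode="reflect")
    pw = np.pad(weights, pad, mode="reflect")
    out = np.empty((h // 2, w // 2))
    for i in range(0, h - 1, 2):
        for j in range(0, w - 1, 2):
            n = pn[i:i + k, j:j + k].sum()
            d = pw[i:i + k, j:j + k].sum()
            out[i // 2, j // 2] = n / d
    return out
```

On a constant image the weights are uniform and the result reduces to ordinary average pooling; the paper's block additionally learns the convolution kernels, in the spirit of pixel-adaptive convolutions.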


References

  1. de Carlo, T.E., Romano, A., Waheed, N.K., Duker, J.S.: A review of optical coherence tomography angiography (octa). Int. J. Retina Vitreous 1(1), 5 (2015). https://doi.org/10.1186/s40942-015-0005-8

  2. Dsw, T., Gsw, T., Agrawal, R., et al.: Optical coherence tomographic angiography in type 2 diabetes and diabetic retinopathy, JAMA Ophthalmol. 135(4), 306–312 (2017). https://doi.org/10.1001/jamaophthalmol.2016.5877

  3. Lee, C.S., et al.: Generating retinal flow maps from structural optical coherence tomography with artificial intelligence, CoRR abs/1802.08925 (2018). arXiv:1802.08925

  4. Rabiolo, A., et al.: Macular perfusion parameters in different angiocube sizes: does the size matter in quantitative optical coherence tomography angiography? Invest. Opthalmol. Vis. Sci. 59, 231 (2018). https://doi.org/10.1167/iovs.17-22359

  5. Kadomoto, S., Uji, A., Muraoka, Y., Akagi, T., Tsujikawa, A.: Enhanced visualization of retinal microvasculature in optical coherence tomography angiography imaging via deep learning. J. Clin. Med. 9, 1322 (2020). https://doi.org/10.3390/jcm9051322

  6. Zhang, Q., et al.: Wide-field optical coherence tomography based microangiography for retinal imaging. Sci. Rep. 6, 22017 (2016). https://doi.org/10.1038/srep22017

  7. Jiang, Z., et al.: Comparative study of deep learning models for optical coherence tomography angiography. Biomed. Opt. Express 11(3), 1580–1597 (2020). https://doi.org/10.1364/BOE.387807

  8. Ting, D.: Artificial intelligence and deep learning in ophthalmology, Br. J. Ophthalmol. 103 (2018) bjophthalmol-2018. https://doi.org/10.1136/bjophthalmol-2018-313173

  9. Xi, L.: A deep learning based pipeline for optical coherence tomography angiography. J. Biophotonics 12 (2019). https://doi.org/10.1002/jbio.201900008

  10. Goodfellow, I.J.: Generative adversarial networks (2014). arXiv:1406.2661

  11. Arjovsky, M., Chintala, S., Bottou, L.: Wasserstein GAN (2017). arXiv:1701.07875

  12. Radford, M., Metz, L., Chintala, S.: Unsupervised representation learning with deep convolutional generative adversarial networks (2015). arXiv:1511.06434

  13. Zhu, J.-Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks (2017). arXiv:1703.10593

  14. Zhu, J.-Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks (2017). arXiv:1703.10593

  15. Isola, P., Zhu, J.-Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks (2016). arXiv:1611.07004

  16. Li, P.L., et al.: Deep learning algorithm for generating optical coherence tomography angiography (OCTA) maps of the retinal vasculature. In: Zelinski, M.E., Taha, T.M., Howe, J., Awwal, A.A.S., Iftekharuddin, K.M. (eds.), Applications of Machine Learning 2020, vol. 11511, International Society for Optics and Photonics, SPIE, 2020, pp. 39–49. https://doi.org/10.1117/12.2568629

  17. Wang, S.-Y., Wang, O., Zhang, R., Owens, A., Efros, A.A.: CNN-generated images are surprisingly easy to spot... for now (2019). arXiv:1912.11035

  18. Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation, CoRR abs/1505.04597 (2015). arXiv:1505.04597

  19. Yang, J., Liu, P., Duan, L., Hu, Y., Liu, J.: Deep learning enables extraction of capillary-level angiograms from single oct volume (2019). arXiv:1906.07091

  20. Saeedan, F., Weber, N., Goesele, M., Roth, S.: Detail-preserving pooling in deep networks, CoRR abs/1804.04076 (2018). arXiv:1804.04076

  21. Weber, N., Waechter, M., Amend, S.C., Guthe, S., Goesele, M.: Rapid, detail-preserving image downscaling, ACM Trans. Graph. 35 (6) (2016). https://doi.org/10.1145/2980179.2980239

  22. Su, H., Jampani, V., Sun, D., Gallo, O., Learned-Miller, E.G., Kautz, J.: Pixel-adaptive convolutional neural networks, CoRR abs/1904.05373 (2019). arXiv:1904.05373

  23. Mostayed, A., Wee, W., Zhou, X.: Content-adaptive u-net architecture for medical image segmentation. In: International Conference on Computational Science and Computational Intelligence (CSCI), pp. 698–702 (2019)

    Google Scholar 

  24. Wang, L., Zhang, Y., Feng, J.: On the Euclidean distance of images. IEEE Trans. Pattern Anal. Mach. Intell. 27(8), 1334–1339 (2005)

    Article  Google Scholar 

  25. Nailon, W.H.: Texture analysis methods for medical image characterisation. Biomed. Imaging 75, 100 (2010)

    Google Scholar 

  26. Humeau-Heurtier, A.: Texture feature extraction methods: a survey. IEEE Access 7, 8975–9000 (2019)

    Article  Google Scholar 

  27. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization (2014). arXiv:1412.6980

  28. Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13(4), 600–612 (2004)

    Article  Google Scholar 

  29. Zhang, L., Zhang, L., Mou, X., Zhang, D.: FSIM: a feature similarity index for image quality assessment. IEEE Trans. Image Process. 20(8), 2378–2386 (2011)

    Article  MathSciNet  Google Scholar 

  30. Lin, G., Milan, A., Shen, C., Reid, I.: Refinenet: Multi-path refinement networks for high-resolution semantic segmentation (2016). arXiv:1611.06612

  31. Chen, L.-C., Zhu, Y., Papandreou, G., Schroff, F., Adam, H.: Encoder-decoder with atrous separable convolution for semantic image segmentation (2018). arXiv:1802.02611

  32. Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation (2014). arXiv:1411.4038

Download references

Author information

Correspondence to Zexuan Ji.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Zhang, Z., Ji, Z., Chen, Q., Yuan, S., Fan, W. (2021). Texture-Guided U-Net for OCT-to-OCTA Generation. In: Ma, H., et al. (eds.) Pattern Recognition and Computer Vision. PRCV 2021. Lecture Notes in Computer Science, vol. 13022. Springer, Cham. https://doi.org/10.1007/978-3-030-88013-2_4


  • DOI: https://doi.org/10.1007/978-3-030-88013-2_4


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-88012-5

  • Online ISBN: 978-3-030-88013-2

  • eBook Packages: Computer Science, Computer Science (R0)
