An Adversarial and Densely Dilated Network for Connectomes Segmentation
Abstract
1. Introduction
- To the best of our knowledge, this is the first time an adversarial neural network has been applied to connectomes segmentation with EM images. The adversarial training approach enhances performance without adding any complexity to the model used at test time.
- For the connectomes segmentation problem, we combine the U-Net architecture with a dilated dense block, which takes advantage of both dense connections and dilated convolutions (see the sketch after this list). Compared with other U-Net-based models, it enlarges the receptive field and saves computation.
- In contrast to the classic GAN with a single loss function, we combine the GAN objective with the dice loss to alleviate blurry effects (a combined-loss sketch also follows this list). The segmentor therefore has to fool the discriminator as well as generate more accurate segmentations.
- The ADDN is an end-to-end architecture that achieves favorable results without further smoothing or post-processing. We demonstrate that ADDN performs well by comparing it with state-of-the-art EM segmentation methods on two benchmark datasets.
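To make the dilated dense block concrete, the following is a minimal Keras sketch, assuming a growth rate of 12 and dilation rates that double per layer; the layer count and rates are illustrative and not necessarily the exact configuration used in ADDN.

```python
# Illustrative sketch of a dilated dense block (assumed configuration): each layer
# receives the concatenation of all preceding feature maps (dense connectivity) and
# applies a dilated 3x3 convolution, enlarging the receptive field without pooling.
from tensorflow.keras import layers

def dilated_dense_block(x, num_layers=4, growth_rate=12):
    features = [x]
    for i in range(num_layers):
        h = layers.Concatenate()(features) if len(features) > 1 else features[0]
        h = layers.BatchNormalization()(h)
        h = layers.Activation('relu')(h)
        # dilation doubles per layer (1, 2, 4, 8), so the receptive field grows
        # exponentially while the parameter count grows only linearly
        h = layers.Conv2D(growth_rate, 3, padding='same', dilation_rate=2 ** i)(h)
        features.append(h)
    return layers.Concatenate()(features)
```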
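For the combined objective, the sketch below shows one way a dice term can be added to a conditional-GAN term for the segmentor; the weighting factor `lambda_dice` and the use of binary cross-entropy for the adversarial term are assumptions for illustration, not values taken from the paper.

```python
# Sketch of a segmentor objective that mixes an adversarial term (fool the
# discriminator) with a soft dice term (overlap with the ground truth).
import tensorflow as tf

def dice_loss(y_true, y_pred, eps=1e-6):
    # soft dice: 1 - 2|A ∩ B| / (|A| + |B|), computed over the whole batch
    inter = tf.reduce_sum(y_true * y_pred)
    union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
    return 1.0 - (2.0 * inter + eps) / (union + eps)

def segmentor_loss(d_on_fake, y_true, y_pred, lambda_dice=10.0):
    # adversarial term: the segmentor wants the discriminator to label its
    # predictions as real, i.e., cross-entropy against an all-ones target
    adv = tf.reduce_mean(
        tf.keras.losses.binary_crossentropy(tf.ones_like(d_on_fake), d_on_fake))
    return adv + lambda_dice * dice_loss(y_true, y_pred)
```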
2. Related Work
3. Proposed Method
3.1. Overview
3.2. Training Objectives
3.3. Segmentation Network
3.4. Discriminator Network
3.5. Evaluation Metric
3.6. Implementation Detail
4. Experiment
4.1. Datasets
4.2. Data Augmentation
4.3. Ablation Study
4.3.1. The Effectiveness of Adversarial Training
4.3.2. The Effectiveness of Proposed Segmentor Network
4.3.3. Hyperparameter Study
4.4. Performance Comparison
5. Discussion
6. Conclusions
Author Contributions
Acknowledgments
Conflicts of Interest
References
- Sporns, O.; Tononi, G.; Kötter, R. The human connectome: A structural description of the human brain. PLoS Comput. Biol. 2005, 1, e42. [Google Scholar] [CrossRef] [PubMed]
- Cardona, A.; Saalfeld, S.; Preibisch, S.; Schmid, B.; Cheng, A.; Pulokas, J.; Tomancak, P.; Hartenstein, V. An integrated micro-and macroarchitectural analysis of the Drosophila brain by computer-assisted serial section electron microscopy. PLoS Biol. 2010, 8, e1000502. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Jurrus, E.; Whitaker, R.; Jones, B.W.; Marc, R.; Tasdizen, T. An optimal-path approach for neural circuit reconstruction. In Proceedings of the 2008 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Paris, France, 14–17 May 2008; pp. 1609–1612. [Google Scholar]
- Harris, K.M.; Perry, E.; Bourne, J.; Feinberg, M.; Ostroff, L.; Hurlburt, J. Uniform serial sectioning for transmission electron microscopy. J. Neurosci. 2006, 26, 12101–12103. [Google Scholar] [CrossRef] [PubMed]
- Jurrus, E.; Paiva, A.R.; Watanabe, S.; Anderson, J.R.; Jones, B.W.; Whitaker, R.T.; Jorgensen, E.M.; Marc, R.E.; Tasdizen, T. Detection of neuron membranes in electron microscopy images using a serial neural network architecture. Med. Image Anal. 2010, 14, 770–783. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Seyedhosseini, M.; Kumar, R.; Jurrus, E.; Giuly, R.; Ellisman, M.; Pfister, H.; Tasdizen, T. Detection of neuron membranes in electron microscopy images using multi-scale context and radon-like features. In Proceedings of the International Conference on Medical Image Computing and Computer-assisted Intervention, Toronto, ON, Canada, 18–22 September 2011; pp. 670–677. [Google Scholar]
- Jain, V.; Murray, J.F.; Roth, F.; Turaga, S.; Zhigulin, V.; Briggman, K.L.; Helmstaedter, M.N.; Denk, W.; Seung, H.S. Supervised learning of image restoration with convolutional networks. In Proceedings of the 2007 IEEE 11th International Conference on Computer Vision, Rio de Janeiro, Brazil, 14–21 October 2007; pp. 1–8. [Google Scholar]
- Ciresan, D.; Giusti, A.; Gambardella, L.M.; Schmidhuber, J. Deep neural networks segment neuronal membranes in electron microscopy images. Adv. Neural Inf. Proc. Syst. 2012. Available online: http://papers.nips.cc/paper/4741-deep-neural-networks (accessed on 10 August 2018).
- Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 8–12 June 2015; pp. 3431–3440. [Google Scholar]
- Chen, H.; Qi, X.; Cheng, J.Z.; Heng, P.A. Deep Contextual Networks for Neuronal Structure Segmentation. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016; pp. 1167–1173. [Google Scholar]
- Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
- Quan, T.M.; Hilderbrand, D.G.; Jeong, W.K. FusionNet: A deep fully residual convolutional neural network for image segmentation in connectomics. arXiv, 2016; arXiv:1612.05360. [Google Scholar]
- Drozdzal, M.; Vorontsov, E.; Chartrand, G.; Kadoury, S.; Pal, C. The importance of skip connections in biomedical image segmentation. In Proceedings of the International Workshop on Deep Learning in Medical Image Analysis, Athens, Greece, 21 October 2016; pp. 179–187. [Google Scholar]
- Fakhry, A.; Zeng, T.; Ji, S. Residual deconvolutional networks for brain electron microscopy image segmentation. IEEE Trans. Med. Imaging 2017, 36, 447–456. [Google Scholar] [CrossRef] [PubMed]
- Zhao, X.; Wu, Y.; Song, G.; Li, Z.; Fan, Y.; Zhang, Y. Brain tumor segmentation using a fully convolutional neural network with conditional random fields. In Proceedings of the International Workshop on Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, Athens, Greece, 17 October 2016; pp. 75–87. [Google Scholar]
- Johnson, J.; Alahi, A.; Fei-Fei, L. Perceptual losses for real-time style transfer and super-resolution. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016; pp. 694–711. [Google Scholar]
- Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-image translation with conditional adversarial networks. arXiv. 2017. Available online: http://openaccess.thecvf.com/content_cvpr_2017/papers/Isola_Image-To-Image_Translation_With_CVPR_2017_paper.pdf (accessed on 10 August 2018).
- Rezaei, M.; Harmuth, K.; Gierke, W.; Kellermeier, T.; Fischer, M.; Yang, H.; Meinel, C. Conditional Adversarial Network for Semantic Segmentation of Brain Tumor. arXiv, 2017; arXiv:1708.05227. [Google Scholar]
- Yu, F.; Koltun, V. Multi-scale context aggregation by dilated convolutions. arXiv, 2015; arXiv:1511.07122. [Google Scholar]
- Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. CVPR 2017, 1, 3. [Google Scholar]
- Jégou, S.; Drozdzal, M.; Vazquez, D.; Romero, A.; Bengio, Y. The one hundred layers tiramisu: Fully convolutional densenets for semantic segmentation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017; pp. 1175–1183. [Google Scholar]
- Osher, S.; Sethian, J.A. Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations. J. Comput. Phys. 1988, 79, 12–49. [Google Scholar] [CrossRef] [Green Version]
- Zhang, K.; Zhang, L.; Lam, K.M.; Zhang, D. A level set approach to image segmentation with intensity inhomogeneity. IEEE Trans. Cybern. 2016, 46, 546–557. [Google Scholar] [CrossRef] [PubMed]
- Min, H.; Jia, W.; Wang, X.F.; Zhao, Y.; Hu, R.X.; Luo, Y.T.; Xue, F.; Lu, J.T. An intensity-texture model based level set method for image segmentation. Pattern Recognit. 2015, 48, 1547–1562. [Google Scholar] [CrossRef]
- Stutz, D.; Hermans, A.; Leibe, B. Superpixels: An evaluation of the state-of-the-art. Comput. Vis. Image Underst. 2018, 166, 1–27. [Google Scholar] [CrossRef] [Green Version]
- Ciecholewski, M. Automated coronal hole segmentation from Solar EUV Images using the watershed transform. J. Vis. Commun. Image Represent. 2015, 33, 203–218. [Google Scholar] [CrossRef]
- Cousty, J.; Bertrand, G.; Najman, L.; Couprie, M. Watershed cuts: Thinnings, shortest path forests, and topological watersheds. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 925–939. [Google Scholar] [CrossRef] [PubMed]
- Levinshtein, A.; Stere, A.; Kutulakos, K.N.; Fleet, D.J.; Dickinson, S.J.; Siddiqi, K. Turbopixels: Fast superpixels using geometric flows. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 2290–2297. [Google Scholar] [CrossRef] [PubMed]
- Vazquez, L.; Sapiro, G.; Randall, G. Segmenting neurons in electronic microscopy via geometric tracing. In Proceedings of the 1998 International Conference on Image Processing (ICIP98), Chicago, IL, USA, 7 October 1998; pp. 814–818. [Google Scholar]
- Vu, N.; Manjunath, B. Graph cut segmentation of neuronal structures from transmission electron micrographs. In Proceedings of the 2008 15th IEEE International Conference on Image Processing, San Diego, CA, USA, 12–15 October 2008; pp. 725–728. [Google Scholar]
- Kaynig, V.; Fuchs, T.J.; Buhmann, J.M. Geometrical consistent 3D tracing of neuronal processes in ssTEM data. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Toronto, ON, Canada, 18–22 September 2010; pp. 209–216. [Google Scholar]
- Nekrasov, V.; Ju, J.; Choi, J. Global deconvolutional networks for semantic segmentation. arXiv, 2016; arXiv:1602.03930. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Drozdzal, M.; Chartrand, G.; Vorontsov, E.; Shakeri, M.; Di Jorio, L.; Tang, A.; Romero, A.; Bengio, Y.; Pal, C.; Kadoury, S. Learning Normalized Inputs for Iterative Estimation in Medical Image Segmentation. Med. Image Anal. 2018, 44, 1–13. [Google Scholar] [CrossRef] [PubMed]
- Zhang, H.; Xu, T.; Li, H.; Zhang, S.; Huang, X.; Wang, X.; Metaxas, D. Stackgan: Text to Photo-Realistic Image Synthesis with Stacked Generative Adversarial Networks. 2017. Available online: http://openaccess.thecvf.com/content_ICCV_2017/papers/Zhang_StackGAN_Text_to_ICCV_2017_paper.pdf (accessed on 10 August 2018).
- Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-realistic single image super-resolution using a generative adversarial network. CVPR 2016, 1, 4. [Google Scholar]
- Luc, P.; Couprie, C.; Chintala, S.; Verbeek, J. Semantic segmentation using adversarial networks. arXiv, 2016; arXiv:1611.08408. [Google Scholar]
- Moeskops, P.; Veta, M.; Lafarge, M.W.; Eppenhof, K.A.; Pluim, J.P. Adversarial training and dilated convolutions for brain MRI segmentation. In Proceedings of the International Workshop on Deep Learning in Medical Image Analysis, Québec, QC, Canada, 14 September 2017; pp. 56–64. [Google Scholar]
- Li, Z.; Wang, Y.; Yu, J. Brain Tumor Segmentation Using an Adversarial Network. In Proceedings of the International MICCAI Brainlesion Workshop, Quebec, QC, Canada, 14 September 2017; pp. 123–132. [Google Scholar]
- Dai, W.; Doyle, J.; Liang, X.; Zhang, H.; Dong, N.; Li, Y.; Xing, E.P. Scan: Structure correcting adversarial network for chest x-rays organ segmentation. arXiv, 2017; arXiv:1703.08770. [Google Scholar]
- Kohl, S.; Bonekamp, D.; Schlemmer, H.P.; Yaqubi, K.; Hohenfellner, M.; Hadaschik, B.; Radtke, J.P.; Maier-Hein, K. Adversarial Networks for the Detection of Aggressive Prostate Cancer. arXiv, 2017; arXiv:1702.08014. [Google Scholar]
- Mirza, M.; Osindero, S. Conditional generative adversarial nets. arXiv, 2014; arXiv:1411.1784. [Google Scholar]
- Arganda-Carreras, I.; Turaga, S.C.; Berger, D.R.; Cireşan, D.; Giusti, A.; Gambardella, L.M.; Schmidhuber, J.; Laptev, D.; Dwivedi, S.; Buhmann, J.M.; et al. Crowdsourcing the creation of image segmentation algorithms for connectomics. Front. Neuroanat. 2015, 9, 142. [Google Scholar] [CrossRef] [PubMed]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 11–18 December 2015; pp. 1026–1034. [Google Scholar]
- Chollet, F. Keras: Deep Learning Library for Theano and Tensorflow. Available online: https://keras.io/ (accessed on 10 August 2018).
- Lee, K.; Zlateski, A.; Ashwin, V.; Seung, H.S. Recursive training of 2d-3d convolutional networks for neuronal boundary prediction. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 7–12 December 2015; pp. 3573–3581. [Google Scholar]
- Simard, P.Y.; Steinkraus, D.; Platt, J.C. Best practices for convolutional neural networks applied to visual document analysis. In Proceedings of the Seventh International Conference on Document Analysis and Recognition, Edinburgh, UK, 3–6 August 2003; p. 958. [Google Scholar]
- Shen, W.; Wang, B.; Jiang, Y.; Wang, Y.; Yuille, A. Multi-stage Multi-recursive-input Fully Convolutional Networks for Neuronal Boundary Detection. arXiv, 2017; arXiv:1703.08493. [Google Scholar] [Green Version]
- Pleiss, G.; Chen, D.; Huang, G.; Li, T.; van der Maaten, L.; Weinberger, K.Q. Memory-efficient implementation of densenets. arXiv, 2017; arXiv:1707.06990. [Google Scholar]
| Author | Year | Approach | Pros (+) and Cons (−) |
|---|---|---|---|
| Vu et al. [30] | 2008 | Graph cut algorithm | (+) The algorithm was direct and execution time was very short.<br>(−) The final result depended too much on experts' editing. |
| Jurrus et al. [5] | 2008 | Optimal-path approach | (+) Such machine learning algorithms helped identify cells automatically while accounting for variability and inconsistency.<br>(−) The priority principle for path finding made it less effective, especially as the number of neurons increased. |
| Kaynig et al. [31] | 2010 | Random forest | (+) Adding geometrical consistency constraints improved the accuracy of the clustering method.<br>(−) Geometries were not easily extracted, and user-driven feature selection greatly impacted the final results. |
| Ciresan et al. [8] | 2012 | Deep neural network | (+) It was the first use of a DNN in this field and improved both speed and accuracy.<br>(−) The filter size was fixed, so only limited information was utilized. |
| Ronneberger et al. [11] | 2015 | Convolutional neural network | (+) The structure was flexible and symmetric, capturing multi-level information.<br>(−) The depth was always limited. |
| Chen et al. [10] | 2016 | Convolutional neural network | (+) Hierarchical features were extracted for discrimination and localization, which improved the segmentation results.<br>(−) The receptive fields were limited by the fixed kernel sizes, and the auxiliary classifiers required careful selection. |
| Quan et al. [12] | 2016 | Convolutional neural network | (+) Summation-based skip connections inspired by residual networks eased the training burden to some extent.<br>(−) The receptive field was not significantly increased due to the kernel size and depth. |
| Drozdzal et al. [34] | 2018 | Convolutional neural network | (+) Using a new FCN as a data-normalization preprocessor and designing bottleneck blocks to increase depth helped improve results.<br>(−) The two FCNs required a large amount of processing power. |
| Fakhry et al. [14] | 2017 | Convolutional neural network | (+) It took advantage of ResNet.<br>(−) The results were not promising and heavily relied on post-processing. |
| Layer Type | No. of Filters | Feature Map Size | Kernel Size | Stride | Padding |
|---|---|---|---|---|---|
| Image input layer | | | | | |
| Concatenation | | | | | |
| Conv-1 | 64 | | | | |
| BN | | | | | |
| Leaky ReLU | | | | | |
| Conv-2 | 128 | | | | |
| BN | | | | | |
| Leaky ReLU | | | | | |
| Conv-3 | 256 | | | | |
| BN | | | | | |
| Leaky ReLU | | | | | |
| Conv-4 | 1 | | | | |
| Output patch | | | | | |
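A hedged Keras sketch of a discriminator matching the table above: the EM image and a segmentation map are concatenated and passed through Conv-BN-LeakyReLU stages with 64, 128, and 256 filters, followed by a single-filter convolution that produces the output patch. The table leaves kernel sizes, strides, and padding blank, so the 4×4 kernels, stride 2, and sigmoid output below are assumptions rather than the authors' exact settings.

```python
# Assumed patch-based conditional discriminator: concatenate image and label map,
# apply three Conv-BN-LeakyReLU stages (64/128/256 filters), then a 1-filter
# convolution whose spatial output scores image patches as real or fake.
from tensorflow.keras import layers, Model, Input

def build_discriminator(img_shape=(512, 512, 1)):
    em_image = Input(shape=img_shape)    # raw EM image
    label_map = Input(shape=img_shape)   # ground-truth or predicted segmentation
    x = layers.Concatenate()([em_image, label_map])
    for filters in (64, 128, 256):
        x = layers.Conv2D(filters, 4, strides=2, padding='same')(x)
        x = layers.BatchNormalization()(x)
        x = layers.LeakyReLU(0.2)(x)
    # final 1-channel convolution: each spatial location scores one patch
    out = layers.Conv2D(1, 4, padding='same', activation='sigmoid')(x)
    return Model([em_image, label_map], out)
```

Scoring patches rather than whole images is a common choice for conditional adversarial segmentation; the exact patch size implied by the blank table cells is not specified here.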
| Dataset | DDN | DDN+cGAN | DDN+cGAN+L1 | ADDN |
|---|---|---|---|---|
| ISBI 2012 | 0.971 | 0.933 | 0.778 | 0.989 |
| Mouse piriform cortex | 0.711 | 0.890 | 0.709 | 0.892 |
| Dataset | ADN | ADDN | AUN | ARN | ADFN |
|---|---|---|---|---|---|
| ISBI 2012 | 0.980 | 0.989 | 0.977 | 0.974 | 0.963 |
| Mouse piriform cortex | 0.880 | 0.892 | 0.744 | 0.800 | 0.720 |
| Method | ADN | ADDN | AUN | ARN | ADFN |
|---|---|---|---|---|---|
| Parameters (M) | 8.9 | 8.9 | 42.0 | 65.6 | 109.3 |
| Growth rate | 8 | 12 | 16 | 24 |
|---|---|---|---|---|
| | 0.975 | 0.979 | 0.986 | - |
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).