High efficiency depth image-based rendering with simplified inpainting-based hole filling

Multidimensional Systems and Signal Processing

Abstract

Hole and crack filling is the most important issue in depth image-based rendering (DIBR) algorithms that generate virtual view images when only one view image and one depth map are available. This paper proposes a priority patch inpainting algorithm for hole filling in DIBR-based synthesis of multiple virtual views. A texture-based interpolation method is first applied for crack filling. An inpainting-based algorithm is then applied patch by patch for hole filling, and a prioritized method for selecting the most critical patch is proposed to reduce computation time. Finally, the proposed method is realized on the compute unified device architecture (CUDA) parallel computing platform, which runs on a graphics processing unit (GPU). Simulation results show that the proposed algorithm is 51-fold faster for virtual view synthesis and achieves better virtual view quality than the traditional DIBR algorithm, which consists of depth preprocessing, warping, and hole filling.
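The abstract outlines the pipeline (crack interpolation, then prioritized patch-by-patch hole inpainting) but not its implementation. The following is a minimal serial Python/NumPy sketch of the general idea of priority-driven patch filling for disocclusion holes, not the authors' CUDA method: the patch size, the confidence-only priority (fraction of known pixels around a hole-boundary pixel), the mean-colour fill, and the function name fill_holes_by_priority are all illustrative assumptions.

```python
import numpy as np


def fill_holes_by_priority(image, hole_mask, patch_size=9):
    """Fill disocclusion holes patch by patch, highest-confidence patch first.

    image:      H x W x 3 float array (the warped virtual view).
    hole_mask:  H x W boolean array, True where pixels are missing.
    patch_size: odd side length of the square patch (assumed value).
    """
    half = patch_size // 2
    image = image.copy()
    hole = hole_mask.copy()
    known = ~hole

    while hole.any():
        # Hole-boundary pixels: missing pixels with at least one known
        # 4-neighbour (np.roll wraps at the border; acceptable for a sketch).
        boundary = hole & (
            np.roll(known, 1, 0) | np.roll(known, -1, 0) |
            np.roll(known, 1, 1) | np.roll(known, -1, 1))
        ys, xs = np.nonzero(boundary)
        if len(ys) == 0:  # isolated hole with no known neighbours
            break

        # Priority = fraction of already-known pixels in the patch centred on
        # each boundary pixel (a simplified confidence term).
        best_priority, best_yx = -1.0, None
        for y, x in zip(ys, xs):
            y0, y1 = max(y - half, 0), min(y + half + 1, image.shape[0])
            x0, x1 = max(x - half, 0), min(x + half + 1, image.shape[1])
            priority = known[y0:y1, x0:x1].mean()
            if priority > best_priority:
                best_priority, best_yx = priority, (y, x)

        # Fill the missing pixels of the winning patch with the mean colour of
        # its known pixels (a stand-in for copying a best-matching patch).
        y, x = best_yx
        y0, y1 = max(y - half, 0), min(y + half + 1, image.shape[0])
        x0, x1 = max(x - half, 0), min(x + half + 1, image.shape[1])
        patch = image[y0:y1, x0:x1]
        patch_known = known[y0:y1, x0:x1]
        patch_hole = hole[y0:y1, x0:x1]
        patch[patch_hole] = patch[patch_known].mean(axis=0)
        hole[y0:y1, x0:x1] = False
        known[y0:y1, x0:x1] = True

    return image
```

In a full Criminisi-style exemplar scheme, the selected patch would instead be completed by copying the best-matching patch from the known (typically background, depth-guided) region, and the priority would combine this confidence term with a structure-propagation data term; both refinements are omitted here to keep the sketch short.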


Acknowledgments

This work was supported in part by the Ministry of Science and Technology under grant NSC 102-2219-E-006-005, the Ministry of Education, and the Ministry of Economic Affairs of Taiwan under grant 103-EC-17-A-02-S1-201.

Author information

Corresponding author

Correspondence to Pin-Chen Kuo.


Cite this article

Kuo, PC., Lin, JM., Liu, BD. et al. High efficiency depth image-based rendering with simplified inpainting-based hole filling. Multidim Syst Sign Process 27, 623–645 (2016). https://doi.org/10.1007/s11045-015-0378-8

