Link to original content: https://doi.org/10.1007/s11265-013-0858-8

Image Blending in a High Frame Rate FPGA-based Multi-Camera System

Published in the Journal of Signal Processing Systems.

Abstract

Panoptic is a custom spherical light field camera used as a polydioptric system in which imagers are distributed over a hemispherical surface, each with its own view of the surroundings and a distinct focal plane. The spherical light field camera records light information arriving from any direction around its center. This paper revisits the previously developed Nearest Neighbor and Linear blending techniques, and presents novel Gaussian blending and Restricted Gaussian blending techniques for reconstructing the view of a virtual observer located inside the spherical geometry. The new blending techniques improve the quality of the reconstructed image compared to ordinary stitching techniques and simpler image blending algorithms. A comparison of the developed blending algorithms is also given. A hardware architecture based on Field Programmable Gate Arrays (FPGA) enabling real-time implementation of the blending algorithms is presented, along with imaging results and a comparison of resource utilization. A recorded omnidirectional video is attached as supplementary material.
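The core idea behind Gaussian blending can be sketched as follows: each camera contributing to an output pixel is weighted by a Gaussian of the angular distance between that camera's ray and the virtual observer's ray, and the Restricted variant additionally discards cameras beyond a cutoff angle. This is a minimal illustrative sketch, not the paper's implementation; the function name, the `sigma` value, and the `max_angle` parameter are assumptions introduced here for illustration.

```python
import numpy as np

def gaussian_blend(pixel_values, angular_distances, sigma=0.2, max_angle=None):
    """Blend per-camera pixel samples with Gaussian weights (illustrative sketch).

    pixel_values: (N,) or (N, C) array, one sample per contributing camera.
    angular_distances: (N,) angles (radians) between each camera's ray and
        the virtual observer's ray for this output pixel.
    max_angle: if set, cameras beyond this angle are excluded entirely
        (the "Restricted" variant); otherwise all N cameras contribute.
    """
    d = np.asarray(angular_distances, dtype=float)
    v = np.asarray(pixel_values, dtype=float)
    # Gaussian weight falls off with angular distance from the observer's ray.
    w = np.exp(-d**2 / (2.0 * sigma**2))
    if max_angle is not None:
        # Restricted variant: zero out cameras outside the cutoff cone.
        w = np.where(d <= max_angle, w, 0.0)
    wsum = w.sum()
    if wsum == 0.0:
        raise ValueError("no camera within max_angle for this pixel")
    if v.ndim > 1:
        w = w[:, None]  # broadcast weights over color channels
    # Normalized weighted average of the contributing samples.
    return (w * v).sum(axis=0) / wsum
```

Two cameras at equal angular distance average their samples, while the restricted variant with a tight cutoff reduces to the nearest camera's value, which hints at how these schemes relate to Nearest Neighbor blending.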


(Figures 1–12 are not reproduced here.)


Acknowledgments

The authors thank S. Hauser, P. Bruehlmeier and E. Erdede for their contribution. The authors gratefully acknowledge the support of Xilinx, Inc., through the Xilinx University Program.

Author information

Corresponding author

Correspondence to Vladan Popovic.

Additional information

This research has been partly conducted with the support of the Swiss NSF under grant number 200021-125651 and Science and Technology Division of the Swiss Federal Competence Center (armasuisse).

Electronic supplementary material

Below is the link to the electronic supplementary material.

Omnidirectional video (MPG, 24.4 MB)

About this article

Cite this article

Popovic, V., Seyid, K., Akin, A. et al. Image Blending in a High Frame Rate FPGA-based Multi-Camera System. J Sign Process Syst 76, 169–184 (2014). https://doi.org/10.1007/s11265-013-0858-8

