Abstract
Panoptic is a custom spherical light field camera built as a polydioptric system: imagers are distributed over a hemispherical surface, each with its own view of the surroundings and a distinct focal plane. The spherical light field camera records light information arriving from any direction around its center. This paper reviews the previously developed Nearest Neighbor and Linear blending techniques, and presents novel Gaussian blending and Restricted Gaussian blending techniques for reconstructing the view of a virtual observer located inside the spherical geometry. These new blending techniques improve the quality of the reconstructed image compared with ordinary stitching techniques and simpler image blending algorithms. A comparison of the developed blending algorithms is also given. A hardware architecture based on Field Programmable Gate Arrays (FPGAs), enabling real-time implementation of the blending algorithms, is presented, together with imaging results and a resource utilization comparison. A recorded omnidirectional video is provided as supplementary material.
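The Gaussian blending idea from the abstract can be illustrated with a minimal sketch. This is an illustrative assumption, not the paper's exact formulation: the hypothetical `gaussian_blend` function, the choice of angular distance from each camera's optical axis as the weighting variable, and the `sigma` value are all placeholders for the scheme the paper develops.

```python
import numpy as np

def gaussian_blend(pixel_values, angular_distances, sigma=0.3):
    """Blend samples of one world point seen by several cameras.

    Each contributing camera is weighted by a Gaussian of the angular
    distance (radians) between the observed point and that camera's
    optical axis, so cameras viewing the point near their image
    center dominate the blended result.
    """
    d = np.asarray(angular_distances, dtype=float)
    w = np.exp(-d ** 2 / (2.0 * sigma ** 2))   # Gaussian weights
    return float(np.dot(w, pixel_values) / np.sum(w))

# Three cameras observe the same point with differing intensities;
# the camera seeing it closest to its optical axis dominates.
samples = np.array([100.0, 120.0, 140.0])
angles = [0.05, 0.40, 0.60]   # off-axis angles, in radians
blended = gaussian_blend(samples, angles)
```

A Restricted Gaussian variant could, under the same assumptions, simply discard cameras whose off-axis angle exceeds a threshold before computing the weights, trading some smoothness for fewer contributing imagers per pixel.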
Acknowledgments
The authors thank S. Hauser, P. Bruehlmeier and E. Erdede for their contribution. The authors gratefully acknowledge the support of Xilinx, Inc., through the Xilinx University Program.
Additional information
This research has been partly conducted with the support of the Swiss NSF under grant number 200021-125651 and Science and Technology Division of the Swiss Federal Competence Center (armasuisse).
Cite this article
Popovic, V., Seyid, K., Akin, A. et al. Image Blending in a High Frame Rate FPGA-based Multi-Camera System. J Sign Process Syst 76, 169–184 (2014). https://doi.org/10.1007/s11265-013-0858-8