Link to original content: https://unpaywall.org/10.1007/S11042-013-1816-Y

Cell-based visual surveillance with active cameras for 3D human gaze computation

Multimedia Tools and Applications


Abstract

Capturing fine-resolution, well-calibrated video images with good visual coverage of objects in a wide space is a challenging task for visual surveillance. Although the use of active cameras is an emerging approach, it suffers from difficulties in online camera calibration, handling of mechanical delay, motion-induced image blurring, and dynamic backgrounds that hinder detection algorithms. This paper proposes a cell-based visual surveillance system using N (N ≥ 2) active cameras. We propose the camera scan speed map (CSSM) to address the practical mechanical delay problem in active camera system design. We formulate the three mutually coupled problems of camera layout, partition of the surveillance space into a cell sequence, and camera parameter control as a single optimization problem that maximizes object resolution while meeting constraints such as system mechanical delay, full visual coverage, and minimum object resolution. The optimization problem is solved with a full (exhaustive) search. A cell-based calibration method is proposed to compute both the intrinsic and extrinsic parameters of the active cameras for the different cells. With the proposed system, the foreground object is detected from motion and appearance features and tracked by dynamically switching the two groups of cameras across cells. The proposed algorithms and system were validated in an indoor surveillance experiment in which the surveillance space was partitioned into four cells. We used two active cameras, one per group. The active cameras were configured with the optimized pan, tilt, and zoom parameters for each cell, and each camera was calibrated with the cell-based calibration method for each configured pan, tilt, and zoom setting. The algorithms and system were applied to monitor freely moving people within the space. The system captures well-resolved, well-calibrated video images with good visual coverage and static backgrounds, in support of automatic object detection and tracking, and it outperforms traditional single or multiple fixed-camera systems in terms of image resolution and surveillance space. We further demonstrate that advanced 3D features, such as 3D gaze, can be computed from the captured high-quality images for intelligent surveillance.
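The abstract describes the full-search optimization only at a high level. The following is a minimal, self-contained sketch of how such an exhaustive search over per-cell pan/zoom settings might look. The geometry, parameter grids, constraint values, and every name used here (CELLS, covers_cell, object_resolution, best_setting, and so on) are illustrative assumptions, not the paper's actual formulation; tilt and the two-camera grouping are omitted for brevity.

import itertools
import math

# Surveillance space partitioned into cells: (x, y) cell centres on the floor, in metres (assumed values).
CELLS = [(1.0, 1.0), (3.0, 1.0), (1.0, 3.0), (3.0, 3.0)]
CELL_HALF_SIZE = 1.0                     # half the side length of a square cell

CAMERA_POS = (2.0, -2.0)                 # assumed camera location on the floor plane
BASE_FOCAL_PX = 800.0                    # focal length in pixels at 1x zoom
BASE_HFOV_DEG = 60.0                     # horizontal field of view at 1x zoom

PAN_GRID_DEG = range(-60, 61, 5)         # candidate pan angles to search
ZOOM_GRID = [1.0, 1.5, 2.0, 3.0]         # candidate zoom factors to search

MIN_RESOLUTION = 150.0                   # assumed minimum object resolution (pixels per metre)
PAN_SPEED_DEG_S = 60.0                   # assumed pan speed, standing in for a CSSM lookup
MAX_SWITCH_TIME_S = 1.0                  # assumed mechanical-delay budget between cell switches

def bearing_deg(point):
    """Pan angle (degrees) from the camera to a floor point, 0 along the +y axis."""
    dx, dy = point[0] - CAMERA_POS[0], point[1] - CAMERA_POS[1]
    return math.degrees(math.atan2(dx, dy))

def covers_cell(cell, pan_deg, zoom):
    """Full-coverage constraint: all four cell corners lie inside the zoomed horizontal FOV."""
    hfov = BASE_HFOV_DEG / zoom
    cx, cy = cell
    corners = [(cx + sx * CELL_HALF_SIZE, cy + sy * CELL_HALF_SIZE)
               for sx in (-1, 1) for sy in (-1, 1)]
    return all(abs(bearing_deg(c) - pan_deg) <= hfov / 2 for c in corners)

def object_resolution(cell, zoom):
    """Crude resolution model: pixels per metre on an object at the cell centre."""
    dist = math.hypot(cell[0] - CAMERA_POS[0], cell[1] - CAMERA_POS[1])
    return BASE_FOCAL_PX * zoom / dist

def best_setting(cell, prev_pan_deg=None):
    """Exhaustively search the pan/zoom grid for one cell, keeping only settings
    that cover the whole cell, respect the mechanical-delay budget relative to the
    previous setting, and meet the minimum resolution; maximize object resolution."""
    best, best_res = None, -1.0
    for pan, zoom in itertools.product(PAN_GRID_DEG, ZOOM_GRID):
        if not covers_cell(cell, pan, zoom):
            continue
        if prev_pan_deg is not None and abs(pan - prev_pan_deg) / PAN_SPEED_DEG_S > MAX_SWITCH_TIME_S:
            continue
        res = object_resolution(cell, zoom)
        if res >= MIN_RESOLUTION and res > best_res:
            best, best_res = (pan, zoom), res
    return best, best_res

if __name__ == "__main__":
    prev_pan = None
    for cell in CELLS:
        setting, res = best_setting(cell, prev_pan)
        print(f"cell {cell}: (pan, zoom) = {setting}, resolution = {res:.0f} px/m")
        if setting:
            prev_pan = setting[0]

In the same spirit, the paper's cell-based calibration would then be run once per chosen (pan, tilt, zoom) configuration, storing intrinsic and extrinsic parameters keyed by cell so they can be recalled when the cameras switch cells at run time; that per-cell lookup is the design choice the sketch above feeds into.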




Acknowledgments

The work presented in this paper was sponsored by grants from the National Natural Science Foundation of China (NSFC) (No. 51208168), the Tianjin Natural Science Foundation (No. 13JCYBJC37700), the Youth Top-Notch Talent Plan of Hebei Province, China, and the Grant-in-Aid for Scientific Research Program (No. 10049) from the Japan Society for the Promotion of Science (JSPS).

Author information


Corresponding author

Correspondence to Zhaozheng Hu.


About this article

Cite this article

Hu, Z., Matsuyama, T. & Nobuhara, S. Cell-based visual surveillance with active cameras for 3D human gaze computation. Multimed Tools Appl 74, 4161–4185 (2015). https://doi.org/10.1007/s11042-013-1816-y
