Abstract
In crime scene analysis, a framework providing interactive visualization and gesture-based manipulation of virtual objects, while keeping the real environment in view, is a promising approach both for the interpretation of cues and for instructional purposes. This paper presents a framework offering a collection of techniques to enhance the reliability, accuracy, and overall effectiveness of gesture-based interaction, applied to the interactive interpretation and evaluation of a crime scene in an augmented reality environment. The interface layout is visualized through a stereoscopic, see-through-capable Head Mounted Display (HMD), which projects graphics into the central region of the user's field of view, floating within a close-at-hand volume. The interaction paradigm exploits both hands concurrently to perform precise manipulation of 3D models of objects possibly present at the crime scene, as well as distance and angular measurements, allowing the user to formulate visual hypotheses with minimal interaction effort. Interaction is adapted to the user's needs in real time by monitoring hand and finger dynamics, enabling both complex actions (such as the manipulation and measurement mentioned above) and conventional keyboard-like operations.
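As an illustrative aside, not drawn from the paper itself, the bimanual measurement feature reduces to simple vector geometry once the tracking layer supplies fingertip positions. The Python sketch below uses hypothetical fingertip coordinates standing in for whatever the HMD tracking system would actually provide, and shows how a distance readout between two tracked points and an angle readout at a vertex could be computed.

    import math

    # Hypothetical sketch: distance and angular measurement between
    # tracked fingertip positions, one per hand. The tracking source
    # (HMD/glove SDK) is assumed; only the geometry is shown.

    def distance(p, q):
        """Euclidean distance between two 3D points (same units as input)."""
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    def angle_deg(p, vertex, q):
        """Angle at 'vertex' formed by points p and q, in degrees."""
        u = [a - b for a, b in zip(p, vertex)]
        v = [a - b for a, b in zip(q, vertex)]
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(a * a for a in v))
        # Clamp to [-1, 1] to guard against floating-point drift.
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))

    # Example: left and right index fingertips in scene coordinates (meters).
    left_tip = (0.10, 1.20, 0.45)
    right_tip = (0.35, 1.18, 0.50)
    anchor = (0.20, 1.00, 0.40)  # e.g. a reference point placed on the scene

    print(f"span:  {distance(left_tip, right_tip):.3f} m")
    print(f"angle: {angle_deg(left_tip, anchor, right_tip):.1f} deg")

In a system like the one described, such readouts would be updated every frame as the tracked fingertips move, giving the continuous distance/angle feedback the abstract mentions.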