Abstract
Navigating in a complex world is challenging: a rich, real environment provides a very large number of sensory states that can immediately precede a collision. Biological organisms such as rodents solve this problem, effortlessly navigating closed spaces by encoding, in neural representations, the distance to walls or obstacles along a given direction. This paper presents a method, usable by virtual (simulated) or robotic agents, that employs states similar to such neural representations to learn collision avoidance. Unlike other approaches, our reinforcement learning approach uses a small number of states defined by discretized distances along three constant directions. These distances are estimated from either optic flow or binocular stereo information: parameterized templates for flow or disparity are compared against the input flow or input disparity to estimate each distance. Simulations in a virtual environment demonstrate learned collision avoidance. Our results show that learning with only stereo information is superior to learning with only optic flow information. This work motivates the use of abstract state descriptions for learning visual navigation. Future work will focus on fusing optic flow and stereo information, and on transferring these models to robotic platforms.
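The abstract's core idea (a small, abstract state space of discretized distances along three fixed directions, learned over with reinforcement learning) can be sketched as tabular Q-learning. The bin thresholds, reward function, learning parameters, and the helper names below are illustrative assumptions, not the paper's actual values; the `depth_from_disparity` helper uses standard stereo geometry (Z = f·B/d) rather than the paper's template-matching estimator.

```python
import random
from collections import defaultdict

# Hypothetical discretization: each of the three viewing directions
# (left, ahead, right) yields a distance bucketed into NEAR/MID/FAR.
NEAR, MID, FAR = 0, 1, 2
ACTIONS = ("turn_left", "go_straight", "turn_right")

def depth_from_disparity(disparity, focal=1.0, baseline=0.1):
    """Standard pinhole-stereo depth: Z = f * B / d (assumed geometry)."""
    return focal * baseline / disparity

def discretize(distance, near=1.0, far=3.0):
    """Map a continuous distance estimate to one of three bins."""
    if distance < near:
        return NEAR
    if distance < far:
        return MID
    return FAR

# Tabular Q-learning over the 3**3 = 27 abstract states.
Q = defaultdict(float)              # (state, action) -> value
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def choose_action(state):
    """Epsilon-greedy action selection over the discrete action set."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One Q-learning backup; e.g. reward = -1 on collision, 0 otherwise."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```

With only 27 states and 3 actions the table stays tiny, which is the point of the abstract state description: learning does not have to generalize over raw images.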
Acknowledgments
All authors are supported in part by CELEST, an NSF Science of Learning Center (SMA-0835976). FR acknowledges support from the Office of Naval Research (ONR N00014-11-1-0535 and ONR MURI N00014-10-1-0936). MV acknowledges support from the National Aeronautics and Space Administration (NASA NNX12AH31G).
Cite this article
Raudies, F., Eldridge, S., Joshi, A. et al. Learning to navigate in a virtual world using optic flow and stereo disparity signals. Artif Life Robotics 19, 157–169 (2014). https://doi.org/10.1007/s10015-014-0153-1