Article title

Improving Self-Localization Efficiency in a Small Mobile Robot by Using a Hybrid Field of View Vision System

Publication language
EN
Abstract
EN
In this article, a self-localization system for small mobile robots based on inexpensive cameras and unobtrusive, passive landmarks is presented and evaluated. The main contribution is the experimental evaluation of a hybrid field of view vision system for self-localization with artificial landmarks. The hybrid vision system consists of an omnidirectional, upward-looking camera with a mirror and a typical front-view camera. This configuration is inspired by the co-operation of peripheral and foveal vision in animals. We demonstrate that the omnidirectional camera enables the robot to quickly detect landmark candidates and to track already known landmarks in the environment, while the front-view camera, guided by the omnidirectional information, enables precise measurements of the landmark position over extended distances. The passive landmarks are based on QR codes, which makes it possible to easily embed additional navigation-relevant information in the landmark pattern. We present an evaluation of the positioning accuracy of the system mounted on a SanBot Mk II mobile robot. The experimental results demonstrate that the hybrid field of view vision system and the QR code landmarks enable a small mobile robot to navigate safely along extended paths in a typical home environment.
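The division of labour described above (coarse, fast detection in the omnidirectional view guiding a precise measurement in the front view) can be sketched in a few lines of code. The following is a minimal illustration in Python with OpenCV (cf. [31]), not the authors' implementation: it assumes a pinhole front-view camera and a QR landmark of known physical size, and the function names, the 10 cm landmark side, and the focal length are illustrative assumptions rather than values from the article.

import cv2
import numpy as np

# Assumed constants (illustrative, not taken from the article):
QR_SIDE_M = 0.10        # physical side length of the QR landmark [m]
FRONT_FOCAL_PX = 700.0  # front-view camera focal length [pixels]

def bearing_from_omni(center_x, center_y, u, v):
    """Coarse bearing [rad] of an image point (u, v) in the omnidirectional
    image, measured around the mirror axis centred at (center_x, center_y)."""
    return np.arctan2(v - center_y, u - center_x)

def measure_landmark(front_frame, detector):
    """Decode a QR landmark in a front-view frame and estimate its range
    from the code's apparent size (pinhole camera model).
    Returns (payload, distance_m) or None when no code is visible."""
    payload, points, _ = detector.detectAndDecode(front_frame)
    if not payload:
        return None
    corners = np.squeeze(points).astype(float)  # 4 x 2 corner coordinates
    # Apparent side length in pixels: mean length of the four edges.
    edges = np.linalg.norm(np.roll(corners, -1, axis=0) - corners, axis=1)
    distance_m = FRONT_FOCAL_PX * QR_SIDE_M / edges.mean()
    return payload, distance_m

if __name__ == "__main__":
    detector = cv2.QRCodeDetector()
    cap = cv2.VideoCapture(0)  # stand-in for the front-view camera
    ok, frame = cap.read()
    cap.release()
    if ok:
        result = measure_landmark(frame, detector)
        if result is not None:
            payload, dist = result
            print(f"Landmark '{payload}' at approximately {dist:.2f} m")

In the system described in the abstract, the coarse bearing from the omnidirectional camera would be used to orient the robot or select a region of interest before the front-view measurement is attempted.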
Authors
  • Poznań University of Technology, Institute of Control and Information Engineering, ul. Piotrowo 3A, 60-965 Poznań, Poland
  • Poznań University of Technology, Institute of Control and Information Engineering, ul. Piotrowo 3A, 60-965 Poznań, Poland
Bibliography
  • [1] Adorni G., Bolognini L., Cagnoni S., Mordonini M., A Non-traditional Omnidirectional Vision System with Stereo Capabilities for Autonomous Robots, LNCS 2175, Springer, Berlin, 2001, 344–355. DOI: 10.1007/3-540-45411-X_36.
  • [2] Bazin J., Catadioptric Vision for Robotic Applications, PhD Dissertation, Korea Advanced Institute of Science and Technology, Daejeon, 2010.
  • [3] Baczyk R., Kasinski A., “Visual simultaneous localisation and map-building supported by structured landmarks”, Int. Journal of Applied Mathematics and Computer Science, vol. 20, no. 2, 2010, 281–293. DOI: 10.2478/amcs-2014-0043.
  • [4] Briggs A., Scharstein D., Braziunas D., Dima C., Wall P., “Mobile Robot Navigation Using Self-Similar Landmarks”. In: Proc. IEEE Int. Conf. on Robotics and Automation, San Francisco, 2000, 1428–1434. DOI: 10.1109/ROBOT.2000.844798.
  • [5] Cagnoni S., Mordonini M., Mussi L., “Hybrid Stereo Sensor with Omnidirectional Vision Capabilities: Overview and Calibration Procedures”. In: Proc. Int. Conf. on Image Analysis and Processing, Modena, 2007, 99–104. DOI: 10.1109/ICIAP.2007.4362764.
  • [6] DeSouza G., Kak A. C., “Vision for Mobile Robot Navigation: A Survey”, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 24, no. 2, 2002, 237–267. DOI: 10.1109/34.982903.
  • [7] Durrant-Whyte H. F., Bailey T., “Simultaneous localization and mapping (Part I)”, IEEE Robotics & Automation Magazine, vol. 13, no. 2, 2006, 99–108. DOI: 10.1109/MRA.2006.1638022.
  • [8] Davison A., Reid I., Molton N., Stasse O., “MonoSLAM: Real-time single camera SLAM”, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, 2007, 1052–1067. DOI: 10.1109/TPAMI.2007.1049.
  • [9] Fiala M., “Designing highly reliable fiducial markers”, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 32, no. 7, 2010, 1317–1324. DOI: 10.1109/TPAMI.2009.146.
  • [10] Figat J., Kasprzak W., “NAO-mark vs. QR-code Recognition by NAO Robot Vision”. In: Progress in Automation, Robotics and Measuring Techniques, vol. 2 Robotics (R. Szewczyk et al., eds.), AISC 351, Springer, Heidelberg, 2015, 55–64. DOI: 10.1007/978-3-319-15847-1_6.
  • [11] Lemaire T., Berger C., Jung I.-K., Lacroix S., “Vision-based SLAM: Stereo and monocular approaches”, Int. Journal of Computer Vision, vol. 74, no. 3, 2007, 343–364. DOI: 10.1007/s11263-007-0042-3.
  • [12] Lin G., Chen X., “A robot indoor position and orientation method based on 2D barcode landmark”, Journal of Computers, vol. 6, no. 6, 2011, 1191–1197. DOI: 10.4304/jcp.6.6.1191-1197.
  • [13] Lu F., Tian G., Zhou F., Xue Y., Song B., “Building an Intelligent Home Space for Service Robot Based on Multi-Pattern Information Model and Wireless Sensor Networks”, Intelligent Control and Automation, vol. 3, no. 1, 2012, 90–97. DOI: 10.4236/ica.2012.31011.
  • [14] McCann E., Medvedev M., Brooks D., Saenko K., “Off the Grid: Self-Contained Landmarks for Improved Indoor Probabilistic Localization”. In: Proc. IEEE Int. Conf. on Technologies for Practical Robot Applications, Woburn, 2013, 1–6. DOI: 10.1109/TePRA.2013.6556349.
  • [15] Martínez-Gómez J., Fernández-Caballero A., García-Varea I., Rodríguez L., Romero-González C., “A Taxonomy of Vision Systems for Ground Mobile Robots”, Int. Journal of Advanced Robotic Systems, vol. 11, 2014. DOI: 10.5772/58900.
  • [16] Menegatti E., Pagello E., “Cooperation between Omnidirectional Vision Agents and Perspective Vision Agents for Mobile Robots”. In: Intelligent Autonomous Systems 7 (M. Gini et al., eds.), IOS Press, Amsterdam, 2002, 231–235.
  • [17] Potůček I., Omni-directional Image Processing for Human Detection and Tracking, PhD Dissertation, Brno University of Technology, Brno, 2006.
  • [18] Rahim N., Ayob M., Ismail A., Jamil S., “A comprehensive study of using 2D barcode for multi robot labelling and communication”, Int. Journal on Advanced Science Engineering Information Technology, vol. 2, no. 1, 2012, 80–84.
  • [19] Rostkowska M., Topolski M., Skrzypczynski P., “A Modular Mobile Robot for Multi-Robot Applications”, Pomiary Automatyka Robotyka, vol. 17, no. 2, 2013, 288–293.
  • [20] Rostkowska M., Topolski M., “Usability of matrix barcodes for mobile robots positioning”, Postępy Robotyki, Prace Naukowe Politechniki Warszawskiej, Elektronika (K. Tchon, C. Zielinski, eds.), vol. 194, no. 2, 2014, 711–720. (in Polish)
  • [21] Rostkowska M., Topolski M., “On the Application of QR Codes for Robust Self-Localization of Mobile Robots in Various Application Scenarios”. In: Progress in Automation, Robotics and Measuring Techniques (R. Szewczyk et al., eds.), AISC 351, Springer, Heidelberg, 2015, 243–252. DOI: 10.1007/978-3-319-15847-1_24.
  • [22] Rusdinar A., Kim J., Lee J., Kim S., “Implementation of real-time positioning system using extended Kalman filter and artificial landmarks on ceiling”, Journal of Mechanical Science and Technology, vol. 26, no. 3, 2012, 949–958. DOI: 10.1007/s12206-011-1251-9.
  • [23] Scaramuzza D., Omnidirectional vision: from calibration to robot motion estimation, PhD Dissertation, ETH Zürich, 2008.
  • [24] Schmidt A., Kraft M., Fularz M., Domagala Z., “The comparison of point feature detectors and descriptors in the context of robot navigation”, Journal of Automation, Mobile Robotics & Intelligent Systems, vol. 7, no. 1, 2013, 11–20.
  • [25] Siagian C., Itti L., “Biologically Inspired Mobile Robot Vision Localization”, IEEE Trans. on Robotics, vol. 25, no. 4, 2009. DOI: 10.1109/TRO.2009.2022424.
  • [26] Scharfenberger Ch. N., Panoramic Vision for Automotive Applications: From Image Rectification to Ambiance Monitoring and Driver Body Height Estimation, PhD Dissertation, Institute for Real-Time Computer Systems at the Munich University of Technology, Munich, 2010.
  • [27] Skrzypczynski P., “Uncertainty Models of the Vision Sensors in Mobile Robot Positioning”, Int. Journal of Applied Mathematics and Computer Science, vol. 15, no. 1, 2005, 73–88.
  • [28] Skrzypczynski P., “Simultaneous Localization and Mapping: A Feature-Based Probabilistic Approach”, Int. Journal of Applied Mathematics and Computer Science, vol. 19, no. 4, 2009, 575–588, DOI: 10.2478/v10006-009-0045-z.
  • [29] Yoon K.-J., Kweon I.-S., “Artificial Landmark Tracking Based on the Color Histogram”. In: Proc. IEEE/RSJ Conf. on Intelligent Robots and Systems, Maui, 2001, 1918–1923. DOI: 10.1109/IROS.2001.976354.
  • [30] EmguCV, http://www.emgu.com/wiki/index.php/Main
  • [31] OpenCV Documentation, http://docs.opencv.org
  • [32] MessagingToolkit, http://platform.twit88.com
Document type
YADDA identifier
bwmeta1.element.baztech-586a5239-f167-4122-9dec-e288ded05fd0