Article title

Mobile robot visual homing by vector pre-assigned mechanism

Authors
Content
Identifiers
Title variants
Languages of publication
EN
Abstracts
EN
In this paper, we present an optimization mechanism, called the vector pre-assigned mechanism (VPM), for two popular landmark-based mobile robot visual homing algorithms (ALV and HiSS). VPM has two branches, both of which effectively improve homing performance. In addition, to make the landmark distribution satisfy the equal-distance assumption, a landmark optimization strategy is proposed based on the imaging principle of panoramic vision. Experiments on both a panoramic image database and a real mobile robot confirm the effectiveness of the proposed methods.
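The abstract names the classical average landmark vector (ALV) method as one of the two baselines that VPM optimizes. For orientation only, below is a minimal Python sketch of the standard ALV homing step, assuming landmark bearings (in radians, in a shared compass frame) have already been extracted from the panoramic images; the VPM branches and the landmark optimization strategy described in the paper are not reproduced here.

import numpy as np

def alv(bearings_rad):
    # Average of the unit vectors pointing from the robot toward each landmark.
    b = np.asarray(bearings_rad, dtype=float)
    return np.stack([np.cos(b), np.sin(b)], axis=1).mean(axis=0)

def alv_homing_vector(bearings_current, bearings_home):
    # Classical ALV rule: the home direction is approximated by the difference
    # between the ALV at the current position and the ALV stored at home,
    # which holds approximately under the equal-landmark-distance assumption.
    return alv(bearings_current) - alv(bearings_home)

# Toy usage with made-up bearings (both sets expressed in the same compass frame).
v = alv_homing_vector([0.2, 1.8, 3.4, 5.0], [0.5, 2.1, 3.0, 4.6])
heading_to_home = np.arctan2(v[1], v[0])

Under the equal-distance assumption mentioned in the abstract, the difference of the two average landmark vectors points approximately from the current position toward home, which is why the paper's landmark optimization strategy aims to make real landmark distributions satisfy that assumption.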
Keywords
Year
Pages
213‒227
Physical description
Bibliography: 28 items, figures, tables, charts
Creators
author
  • College of Automation, Harbin Engineering University, 150001, China
author
  • College of Automation, Harbin Engineering University, 150001, China
author
  • College of Automation, Harbin Engineering University, 150001, China
author
  • College of Automation, Harbin Engineering University, 150001, China
author
  • College of Automation, Harbin Engineering University, 150001, China
Bibliography
  • [1] J.L. Crassidis and F.L. Markley, “Three-axis attitude estimation using rate-integrating gyroscopes”, J. Guid. Control Dyn. 39(7), 1513‒1526 (2016).
  • [2] F. Penizzotto, E. Slawinski, and V. Mut, “Laser radar based autonomous mobile robot guidance system for groves navigation”, IEEE Latin Am. Trans. 13(5), 1303‒1312 (2015).
  • [3] W. Kowalczyk, M. Michałek, and K. Kozłowski, “Trajectory tracking control with obstacle avoidance capability for unicycle-like mobile robot”, Bull. Pol. Ac.: Tech. 60(3), 537‒546 (2012).
  • [4] M. Gupta, G.K. Arunkumar, and L. Vachhani, “Bearing only visual homing: Observer based approach”, in 25th Mediterranean Conf. Control Autom. (MED), pp. 358‒363 (2017).
  • [5] M. Liu, C. Pradalier, and R. Siegwart, “Visual homing from scale with an uncalibrated omnidirectional camera”, IEEE Trans. Robot. 29(6), 1353‒1365 (2013).
  • [6] J.O. Esparza-Jiménez, M. Devy, and J.L. Gordillo, “Visual EKF-SLAM from heterogeneous landmarks”, Sensors, 16(4), 489 (2016).
  • [7] C. Gamallo, M. Mucientes, and C. V. Regueiro, “Omnidirectional visual SLAM under severe occlusions”, Robot. Auton. Syst. 65(C), 76‒87 (2015).
  • [8] E. Garcia-Fidalgo and A. Ortiz, “Vision-based topological mapping and localization methods: A survey”, Robot. Auton. Syst. 64(C), 1‒20 (2015).
  • [9] N. Paramesh and D.M. Lyons, “Homing with stereovision”, Robotica, 34(12), 2741‒2758 (2016).
  • [10] A. Sabnis, G.K. Arunkumar, V. Dwaracherla, and L. Vachhani, “Probabilistic approach for visual homing of a mobile robot in the presence of dynamic obstacles”, IEEE Trans. Ind. Electron. 63(9), 5523–5533 (2016).
  • [11] G.K. Arunkumar, A. Sabnis, and L. Vachhani, “Robust steering control for autonomous homing and its application in visual homing under practical conditions”, J. Intell. Robot. Syst. 89(3‒4), 403‒419 (2018).
  • [12] C. Lee, S.E. Yu, and D.E. Kim, “Landmark-based homing navigation using omnidirectional depth information”, Sensors, 17(8), 1928 (2017).
  • [13] Q. Zhu, X. Ji, J. Wang, and C. Cai, “A machine learning-based mobile robot visual homing approach”, Bull. Pol. Ac.: Tech. (to be published).
  • [14] R. Möller, M. Krzykawski, and L. Gerstmayr, “Three 2D-warping schemes for visual robot navigation”, Auton. Robot. 29(3‒4), 253‒291 (2010).
  • [15] R. Möller, M. Horst, and D. Fleer, “Illumination tolerance for visual navigation with the holistic min-warping method”, Robotics, 3(1), 22‒67 (2014).
  • [16] D. Fleer and R. Möller, “Comparing holistic and feature-based visual methods for estimating the relative pose of mobile robots”, Robot. Auton. Syst. 89, 51‒74 (2017).
  • [17] Q. Zhu, C. Liu, and C. Cai, “A novel robot visual homing method based on SIFT features”, Sensors, 15(10), 26063‒26084 (2015).
  • [18] D. G. Lowe, “Distinctive image features from scale-invariant keypoints”, Int. J. Comput. Vis. 60(2), 91–110 (2004).
  • [19] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, “Speeded-up robust features (SURF)”, Comput. Vis. Image Underst. 110(3), 346‒359 (2008).
  • [20] A. Ramisa, A. Goldhoorn, D. Aldavert, R. Toledo, and R. Mantaras, “Combining invariant features and the ALV homing method for autonomous robot navigation based on Panoramas”, J. Intell. Robot. Syst. 64(3‒4), 625‒649 (2011).
  • [21] Q. Zhu, C. Liu, and C. Cai, “A robot navigation algorithm based on sparse landmarks”, 6th IEEE Conf. Intell. Human-Machine Syst. Cybern. (IHMSC), pp. 188‒193 (2014).
  • [22] Q. Zhu, X. Liu, and C. Cai, “Improved feature distribution for robot homing”, IFAC Proceedings Volumes, 47(3), 5721‒5725 (2014).
  • [23] Q. Zhu, X. Liu, and C. Cai, “Feature optimization for long-range visual homing in changing environments”, Sensors, 14(2), 3342‒3361 (2014).
  • [24] D. Churchill and A. Vardy, “An orientation invariant visual homing algorithm”, J. Intell. Robot. Syst. 71(1), 3–29 (2013).
  • [25] C. Lee and D.E. Kim, “Local homing navigation based on the moment model for landmark distribution and features”, Sensors, 17(11), 2658 (2017).
  • [26] S.E. Yu, C. Lee, and D.E. Kim, “Analyzing the effect of landmark vectors in homing navigation”, Adapt. Behav. 20(5), 337‒359 (2012).
  • [27] J. Yan, L. Kong, Z. Diao et al., “Panoramic stereo imaging system for efficient mosaicking: parallax analyses and system design”, Appl. Optics. 57(3), 396‒403 (2018).
  • [28] J. Luo and O. Gwun, “A Comparison of SIFT, PCA-SIFT and SURF”, Int. J. Image Proc. 3(4), 143‒152 (2013).
Notes
EN
This work is partially supported by the National Natural Science Foundation of China (61673129, 51674109).
PL
Record prepared under agreement 509/P-DUN/2018 from the funds of the Ministry of Science and Higher Education (MNiSW) allocated to science-dissemination activities (2019).
Document type
YADDA identifier
bwmeta1.element.baztech-fcdd990b-ce8b-4b03-905f-79584b759959