Article title

A machine learning-based mobile robot visual homing approach

Publication languages
EN
Abstract
Visual homing enables a mobile robot to return to a previously visited location using only panoramic vision sensors. This paper presents a SIFT-based visual homing approach that incorporates machine learning. The proposed approach reduces the impact of inaccurate landmarks on homing performance and generates a more precise home direction with a simple model. Its effectiveness is verified on both panoramic image databases and an actual mobile robot. Experimental results reveal that, compared with several traditional visual homing methods, the proposed approach exhibits better homing performance and adaptability in both static and dynamic environments.
Pages
621–634
Physical description
Bibliography: 34 items; figures, charts, tables.
Contributors
author
author
  • College of Automation, Harbin Engineering University, 150001, China
author
author
Bibliography
  • [1] W. Kowalczyk, M. Michałek, and K. Kozłowski, “Trajectory tracking control with obstacle avoidance capability for unicycle-like mobile robot”, Bull. Pol. Ac.: Tech. 60(3), 537‒546 (2012).
  • [2] M. Hoy, A.S. Matveev, and A.V. Savkin, “Algorithms for collision-free navigation of mobile robots in complex cluttered environments: a survey”, Robotica 33(3), 463‒497 (2015).
  • [3] U. Orozco-Rosas, O. Montiel, and R. Sepúlveda, “Pseudo-bacterial potential field based path planner for autonomous mobile robot navigation”, Int. J. Adv. Robot. Syst. 12(81), 1‒14 (2015).
  • [4] C. Lee and D. Kim, “Local homing navigation based on the moment model for landmark distribution and features”, Sensors, 17(11), 2658 (2017).
  • [5] N. Ohnishi and A. Imiya, “Appearance-based navigation and homing for autonomous mobile robot”, Image Vis. Comput. 31(6), 511–532 (2013).
  • [6] M. Gupta, G.K. Arunkumar, and L. Vachhani, “Bearing only visual homing: Observer based approach”, in 25th Mediterranean Conf. Control Autom. (MED), pp. 358‒363 (2017).
  • [7] A. Kim and R.M. Eustice, “Active visual SLAM for robotic area coverage: Theory and experiment”, Int. J. Robot. Res. 34(4‒5), 457‒475 (2015).
  • [8] C. Gamallo, M. Mucientes, and C.V. Regueiro, “Omnidirectional visual SLAM under severe occlusions”, Robot. Auton. Syst. 65(C), 76‒87 (2015).
  • [9] E. Garcia-Fidalgo and A. Ortiz, “Vision-based topological mapping and localization methods: A survey”, Robot. Auton. Syst. 64(C), 1‒20 (2015).
  • [10] B.A. Cartwright and T.S. Collett, “Landmark learning in bees”, J. Comp. Physiol. 151, 521‒543 (1983).
  • [11] A. Denuelle and M.V. Srinivasan, “Bio-inspired visual guidance: From insect homing to UAS navigation”, in IEEE Int. Conf. Robot. Biomim., pp. 326‒332 (2015).
  • [12] N. Paramesh and D.M. Lyons, “Homing with stereovision”, Robotica 34(12), 2741‒2758 (2016).
  • [13] M.O. Franz, B. Schölkopf, H.A. Mallot, and H.H. Bülthoff, “Where did I take that snapshot? Scene-based homing by image matching”, Biol. Cybern. 79(3), 191–202 (1998).
  • [14] D. Lambrinos, R. Möller, T. Labhart, R. Pfeifer, and R. Wehner, “A mobile robot employing insect strategies for navigation”, Robot. Auton. Syst. 30(1), 39–64 (2000).
  • [15] T. Murray and J. Zeil, “Quantifying navigational information: The catchment volumes of panoramic snapshots in outdoor scenes”, PloS one, 12(10), e0187226 (2017).
  • [16] D. Churchill and A. Vardy, “An orientation invariant visual homing algorithm”, J. Intell. Robot. Syst. 71(1), 3–29 (2013).
  • [17] R. Möller, M. Krzykawski, and L. Gerstmayr, “Three 2D-warping schemes for visual robot navigation”, Auton. Robot. 29(3‒4), 253‒291 (2010).
  • [18] Q. Zhu, C. Liu, and C. Cai, “A novel robot visual homing method based on SIFT features”, Sensors, 15(10), 26063‒26084 (2015).
  • [19] R. Möller, “A SIMD implementation of the MinWarping method for local visual homing”, Bielefeld University, Germany, 2016.
  • [20] R. Möller, M. Horst, and D. Fleer, “Illumination tolerance for visual navigation with the holistic min-warping method”, Robotics 3(1), 22‒67 (2014).
  • [21] D. Fleer and R. Möller, “Comparing holistic and feature-based visual methods for estimating the relative pose of mobile robots”, Robot. Auton. Syst. 89, 51‒74 (2017).
  • [22] R. Möller, “Column distance measures and their effect on illumination tolerance in MinWarping”, Bielefeld University, Germany, 2016.
  • [23] Q. Zhu, X. Liu, and C. Cai, “Feature optimization for long-range visual homing in changing environments”, Sensors, 14(2), 3342‒3361 (2014).
  • [24] Q. Zhu, X. Liu, and C. Cai, “Improved feature distribution for robot homing”, IFAC Proceedings Volumes, 47(3), 5721‒5725 (2014).
  • [25] Q. Zhu, C. Liu, and C. Cai, “A robot navigation algorithm based on sparse landmarks”, in 6th IEEE Conf. Intell. Human-Machine Syst. Cybern. (IHMSC), pp. 188‒193 (2014).
  • [26] S.E. Yu, C. Lee, and D.E. Kim, “Analyzing the effect of landmark vectors in homing navigation”, Adapt. Behav. 20(5), 337‒359 (2012).
  • [27] C. Lee, S.E. Yu, and D.E. Kim, “Landmark-based homing navigation using omnidirectional depth information”, Sensors, 17(8), 1928 (2017).
  • [28] M. Liu, C. Pradalier, F. Pomerleau, and R. Siegwart, “The role of homing in visual topological navigation”, in Proc. IEEE/RSJ Int. Conf. Intell. Robot. Syst. (IROS), pp. 567‒572 (2012).
  • [29] A. Sabnis, G.K. Arunkumar, V. Dwaracherla, and L. Vachhani, “Probabilistic approach for visual homing of a mobile robot in the presence of dynamic obstacles”, IEEE Trans. Ind. Electron. 63(9), 5523–5533 (2016).
  • [30] D.G. Lowe, “Distinctive image features from scale-invariant keypoints”, Int. J. Comput. Vis. 60(2), 91–110 (2004).
  • [31] J. Luo and O. Gwun, “A Comparison of SIFT, PCA-SIFT and SURF”, Int. J. Image Proc. 3(4), 143‒152 (2013).
  • [32] P. Zarychta, P. Badura, and E. Pietka, “Comparative analysis of selected classifiers in posterior cruciate ligaments computer aided diagnosis”, Bull. Pol. Ac.: Tech. 65(1), 63‒70 (2017).
  • [33] J. Nayak, B. Naik, and H.S. Behera, “A comprehensive survey on support vector machine in data mining tasks: Applications & challenges”, Int. J. Database Theory Appl. 8(1), 169‒186 (2015).
  • [34] Panoramic Image Databases. Available online: http://www.ti.uni-bielefeld.de/html/research/avardy/index.html (accessed on 15 April 2017).
Notes
Record created under agreement 509/P-DUN/2018 with funds of the Ministry of Science and Higher Education (MNiSW) allocated to science-dissemination activities (2019).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-0ce99509-ce6a-4af7-bb1b-40f69f8c3ce8