Article title

Homography augmented particle filter SLAM

Publication languages
EN
Abstracts
EN
The article presents a comprehensive study of a visual-inertial simultaneous localization and mapping (SLAM) algorithm designed for aerial vehicles. The goal of the research is to improve a particle filter SLAM system so that it navigates unknown environments more accurately and robustly. The authors introduce a modification that uses the decomposition of a homography matrix, computed from camera frame-to-frame relationships, to refine the particle filter's proposal distribution of the estimated robot state. In addition, they implement a mechanism that calculates a homography matrix from the robot's displacement and uses it to eliminate outliers in the frame-to-frame feature matching procedure. The algorithm is evaluated on simulated and real-world datasets, and the results show that the proposed improvements increase both accuracy and robustness. Specifically, the homography matrix decomposition makes the algorithm more efficient, allowing it to run with fewer particles without sacrificing accuracy, while the incorporation of robot displacement information improves the reliability of feature matching, leading to more consistent results. The article concludes with a discussion of the implemented and tested SLAM solution, highlighting its strengths and limitations. Overall, the proposed algorithm is a promising approach to accurate and robust autonomous navigation in unknown environments.
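The two mechanisms the abstract describes, refining the proposal distribution via homography decomposition and gating feature matches against a displacement-predicted homography, can be sketched in a few lines of Python with OpenCV. This is a minimal illustration inferred from the abstract alone, not the authors' implementation: the intrinsic matrix K, the plane distance d, and the pixel threshold are hypothetical placeholders.

```python
import numpy as np
import cv2

# Hypothetical pinhole intrinsics; the paper's actual calibration is not given here.
K = np.array([[458.0,   0.0, 320.0],
              [  0.0, 458.0, 240.0],
              [  0.0,   0.0,   1.0]])

def homography_motion_hypotheses(pts_prev, pts_curr, K):
    """Estimate the frame-to-frame homography from matched features (Nx2
    float pixel coordinates) and decompose it into candidate (R, t/d, n)
    motion hypotheses that could bias a particle filter's proposal."""
    H, inliers = cv2.findHomography(pts_prev, pts_curr, cv2.RANSAC, 3.0)
    # OpenCV returns up to four physically possible decompositions;
    # cheirality (points in front of the camera) prunes the wrong ones.
    _, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
    return inliers, list(zip(rotations, translations, normals))

def homography_from_displacement(R, t, n, d, K):
    """Homography induced by a known camera displacement (R, t) relative to
    a plane with unit normal n at distance d: H = K (R + t n^T / d) K^{-1}."""
    H = K @ (R + np.outer(t, n) / d) @ np.linalg.inv(K)
    return H / H[2, 2]

def gate_matches(pts_prev, pts_curr, H_pred, thresh_px=3.0):
    """Reject matches whose transfer error under the displacement-predicted
    homography exceeds thresh_px pixels (the outlier-elimination idea)."""
    hom = np.hstack([pts_prev, np.ones((len(pts_prev), 1))])
    proj = (H_pred @ hom.T).T
    proj = proj[:, :2] / proj[:, 2:3]
    return np.linalg.norm(proj - pts_curr, axis=1) < thresh_px
```

In a filter of this kind, the decomposed rotation and scaled translation would perturb each particle's predicted pose, while the gating step cleans the measurement set before the update; both steps are hedged reconstructions of the abstract's description, not the paper's exact procedure.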
Pages
423-439
Physical description
Bibliography: 35 items, figures, tables, charts, formulas
Authors
  • Military University of Technology, Faculty of Electronics, Gen. S. Kaliskiego 2, 00-908 Warsaw, Poland
  • Military University of Technology, Faculty of Electronics, Gen. S. Kaliskiego 2, 00-908 Warsaw, Poland
Bibliography
  • [1] Durrant-Whyte, H., & Bailey, T. A. (2006). Simultaneous localization and mapping: part I. IEEE Robotics & Automation Magazine, 13(2), 99-110. https://doi.org/10.1109/mra.2006.1638022
  • [2] Walter, M. J., Eustice, R. M., & Leonard, J. J. (2007). Exactly Sparse Extended Information Filters for Feature-based SLAM. The International Journal of Robotics Research, 26(4), 335-359. https://doi.org/10.1177/0278364906075026
  • [3] Davison, A. J., Reid, I., Molton, N., & Stasse, O. (2007). MonoSLAM: Real-Time Single Camera SLAM. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(6), 1052-1067. https://doi.org/10.1109/tpami.2007.1049
  • [4] Cheng, J., Kim, J., Jiang, Z., & Yang, X. (2014). Compressed Unscented Kalman filter-based SLAM. Robotics and Biomimetics. https://doi.org/10.1109/robio.2014.7090563
  • [5] Forster, C., Pizzoli, M., & Scaramuzza, D. (2014). SVO: Fast semi-direct monocular visual odometry. International Conference on Robotics and Automation. https://doi.org/10.1109/icra.2014.6906584
  • [6] Engel, J., Stückler, J., & Cremers, D. (2015). Large-scale direct SLAM with stereo cameras. Intelligent Robots and Systems. https://doi.org/10.1109/iros.2015.7353631
  • [7] Mur-Artal, R., & Tardós, J. D. (2017). ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras. IEEE Transactions on Robotics, 33(5), 1255-1262. https://doi.org/10.1109/tro.2017.2705103
  • [8] Grisetti, G., Kümmerle, R., Stachniss, C., & Burgard, W. (2010). A Tutorial on Graph-Based SLAM. IEEE Intelligent Transportation Systems Magazine, 2(4), 31-43. https://doi.org/10.1109/mits.2010.939925
  • [9] Strasdat, H., Montiel, J. M. M., & Davison, A. J. (2010). Real-time monocular SLAM: Why filter? International Conference on Robotics and Automation. https://doi.org/10.1109/robot.2010.5509636
  • [10] Montemerlo, M., Thrun, S., Koller, D., & Wegbreit, B. (2003). FastSLAM 2.0: an improved particle filtering algorithm for simultaneous localization and mapping that provably converges. International Joint Conference on Artificial Intelligence, 1151-1156. https://ijcai.org/Proceedings/03/Papers/165.pdf
  • [11] Williams, B. W., Klein, G., & Reid, I. (2007). Real-Time SLAM Relocalisation. International Conference on Computer Vision. https://doi.org/10.1109/iccv.2007.4409115
  • [12] Avots, D., Lim, E., Thibaux, R., & Thrun, S. (2002). A probabilistic technique for simultaneous localization and door state estimation with mobile robots in dynamic environments. Intelligent Robots and Systems. https://doi.org/10.1109/irds.2002.1041443
  • [13] Zafari, F., Gkelias, A., & Leung, K. K. (2019). A Survey of Indoor Localization Systems and Technologies. IEEE Communications Surveys and Tutorials, 21(3), 2568-2599. https://doi.org/10.1109/comst.2019.2911558
  • [14] Tateno, K., Tombari, F., Laina, I., & Navab, N. (2017). CNN-SLAM: Real-Time Dense Monocular SLAM with Learned Depth Prediction. ArXiv (Cornell University). https://doi.org/10.1109/cvpr.2017.695
  • [15] Mur-Artal, R., Montiel, J. M. M., & Tardós, J. D. (2015). ORB-SLAM: A Versatile and Accurate Monocular SLAM System. IEEE Transactions on Robotics, 31(5), 1147-1163. https://doi.org/10.1109/tro.2015.2463671
  • [16] Yuan, D., Qin, Y., Shen, X., & Wu, Z. (2021). A feedback weighted fusion algorithm with dynamic sensor bias correction for gyroscope array. Metrology and Measurement Systems, 28(1), 161-179. https://doi.org/10.24425/mms.2021.136000
  • [17] Alhassan, H. M., & Ghahremani, N. A. (2021). A new predictive filter for nonlinear alignment model of stationary MEMS inertial sensors. Metrology and Measurement Systems, 28(4), 673-691. https://doi.org/10.24425/mms.2021.137702
  • [18] Stawowy, M., Duer, S., Paś, J., & Wawrzyński, W. (2021). Determining Information Quality in ICT Systems. Energies, 14(17), 1-18. https://doi.org/10.3390/en14175549
  • [19] Von Stumberg, L., Usenko, V. C., & Cremers, D. (2018). Direct Sparse Visual-Inertial Odometry Using Dynamic Marginalization. ArXiv (Cornell University). https://doi.org/10.1109/icra.2018.8462905
  • [20] Bloesch, M., Omari, S., Hutter, M., & Siegwart, R. (2015). Robust visual inertial odometry using a direct EKF-based approach. 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). https://doi.org/10.1109/iros.2015.7353389
  • [21] Yin, H., Li, S., Tao, Y., Guo, J., & Huang, B. (2022). Dynam-SLAM: An Accurate, Robust Stereo Visual-Inertial SLAM Method in Dynamic Environments. IEEE Transactions on Robotics, 39(1), 289-308. https://doi.org/10.1109/tro.2022.3199087
  • [22] Kerl, C., Sturm, J., & Cremers, D. (2013). Dense visual SLAM for RGB-D cameras. Intelligent Robots and Systems. https://doi.org/10.1109/iros.2013.6696650
  • [23] Schops, T., Sattler, T., & Pollefeys, M. (2019). BAD SLAM: Bundle Adjusted Direct RGB-D SLAM. Computer Vision and Pattern Recognition. https://doi.org/10.1109/cvpr.2019.00022
  • [24] Słowak, P., & Kaniewski, P. (2021). Stratified Particle Filter Monocular SLAM. Remote Sensing, 13(16), 3233. https://doi.org/10.3390/rs13163233
  • [25] Cadena, C., Carlone, L., Carrillo, H., Latif, Y., Scaramuzza, D., Neira, J., Reid, I. D., & Leonard, J. J. (2016). Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age. IEEE Transactions on Robotics, 32(6), 1309-1332. https://doi.org/10.1109/tro.2016.2624754
  • [26] Barros, A. M., Michel, M., Moline, Y., Corre, G., & Carrel, F. (2022). A Comprehensive Survey of Visual SLAM Algorithms. Robotics, 11(1), 24. https://doi.org/10.3390/robotics11010024
  • [27] Murphy, K. (1999). Bayesian Map Learning in Dynamic Environments. Neural Information Processing Systems, 12, 1015-1021. https://papers.nips.cc/paper/1716-bayesian-map-learning-in-dynamic-environments.pdf
  • [28] Montiel, J. M. M., Civera, J., & Davison, A. J. (2006). Unified Inverse Depth Parametrization for Monocular SLAM. Robotics: Science and Systems. https://doi.org/10.15607/rss.2006.ii.011
  • [29] Bay, H., Tuytelaars, T., & Van Gool, L. (2006). SURF: Speeded Up Robust Features. Lecture Notes in Computer Science, 404-417. https://doi.org/10.1007/11744023_32
  • [30] Leutenegger, S., Chli, M., & Siegwart, R. (2011). BRISK: Binary Robust invariant scalable keypoints. International Conference on Computer Vision. https://doi.org/10.1109/iccv.2011.6126542
  • [31] Lowe, D. G. (2004). Distinctive Image Features from Scale-Invariant Keypoints. International Journal of Computer Vision, 60(2), 91-110. https://doi.org/10.1023/b:visi.0000029664.99615.94
  • [32] Rublee, E., Rabaud, V., Konolige, K., & Bradski, G. (2011). ORB: An efficient alternative to SIFT or SURF. International Conference on Computer Vision. https://doi.org/10.1109/iccv.2011.6126544
  • [33] Page, G. (2005). Multiple View Geometry in Computer Vision, by Richard Hartley and Andrew Zisserman, CUP, Cambridge. Robotica, 23(2), 271. https://doi.org/10.1017/s0263574705211621
  • [34] Malis, E., & Vargas Villanueva, M. (2007). Deeper understanding of the homography decomposition for vision-based control. INRIA. https://hal.inria.fr/inria-00174036/document
  • [35] Koenig, N., & Howard, A. (2004). Design and use paradigms for Gazebo, an open-source multi-robot simulator. 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 3, 2149-2154. https://doi.org/10.1109/IROS.2004.1389727
Notes
1. This work was supported by the Military University of Technology, Poland, under research project UGB 22-866.
2. Record developed with funds from the Ministry of Science and Higher Education (MNiSW), agreement No. SONP/SP/546092/2022, under the "Social Responsibility of Science" programme - module: Popularization of science and promotion of sport (2024).
YADDA identifier
bwmeta1.element.baztech-6f2d8040-4935-4acf-9490-f257746b0c46