Article title
Authors
Content
Full texts:
Identifiers
Title variants
Publication languages
Abstracts
Simultaneous localisation and mapping (SLAM) is a process by which robots build maps of their environment and simultaneously determine their location and orientation in the environment. In recent years, SLAM research has advanced quickly. Researchers are currently working on developing reliable and accurate visual SLAM algorithms that deal with dynamic environments. The steps involved in developing a SLAM system are described in this article. We explore the most recent methods used in SLAM systems, including probabilistic methods, visual methods, and deep learning (DL) methods. We also discuss the fundamental techniques utilised in SLAM fields.
Keywords
Journal
Year
Volume
Pages
451–473
Physical description
Bibliography: 196 items, figures, tables
Contributors
author
- Faculty of Technology, Department of Electronics, University of Batna 2 – Mostefa Ben Boulaïd, 53 Route de Constantine, Fésdis, Batna 05078, Algeria
author
- Faculty of Technology, Department of Electronics, University of Batna 2 – Mostefa Ben Boulaïd, 53 Route de Constantine, Fésdis, Batna 05078, Algeria
Bibliography
- 1. Moravec HP. Obstacle Avoidance and Navigation by a Seeing Robot Rover in the Real World. Pittsburgh, PA: Carnegie-Mellon University Robotics Institute. 1980.
- 2. Nistér D, Naroditsky O, Bergen J. Visual odometry. Proc 2004 IEEE Comput Soc Conf Comput Vis Pattern Recognit (CVPR 2004). Washington, DC, USA. 2004;1:I–I.
- 3. Longuet-Higgins H. A computer algorithm for reconstructing a scene from two projections. Nature. 1981;293:133–5.
- 4. Harris CG, Pike JM. 3D positional integration from image sequences. Image Vis Comput. 1988;6(2):87–90.
- 5. Twinanda AP, Shehata S, Mutter D, Marescaux J, de Mathelin M, Padoy N. EndoNet: a deep architecture for recognition tasks on laparoscopic videos. IEEE Trans Med Imaging. 2016;36(1):86–97.
- 6. Bodenstedt S, Ohnemus A, Katic D, Wekerle AL, Wagner M, Kenngott H, Müller-Stich B, Dillmann R, Speidel S. Real-time image-based instrument classification for laparoscopic surgery. 2018. Preprint arXiv:1808.00178.
- 7. Yang IC, Chen S. Precision cultivation system for greenhouse production. In: Intelligent Environmental Sensing. Springer: Berlin/Heidelberg, Germany. 2015;191–211.
- 8. Borges DL, Guedes ST, Nascimento AR, Melo-Pinto P. Detecting and grading severity of bacterial spot caused by Xanthomonas spp. in tomato (Solanum lycopersicon) fields using visible spectrum images. Comput Electron Agric. 2016;149–159.
- 9. Liu X, Zhao D, Jia W, Ji W, Ruan C, Sun Y. Cucumber fruits detection in greenhouses based on instance segmentation. IEEE Access. 2019;139635–139642.
- 10. Asdemir S, Urkmez A, Inal S. Determination of body measurements on the Holstein cows using digital image analysis and estimation of live weight with regression analysis. Comput Electron Agric. 2011;76:189–197.
- 11. Norton T, Chen C, Larsen MLV, Berckmans D. Precision livestock farming: Building 'digital representations' to bring the animals closer to the farmer. Animal. 2019;13:3009–3017.
- 12. Chou WC, Tsai WR, Chang HH, Lu SY, Lin KF, Lin P. Prioritization of pesticides in crops with a semi-quantitative risk ranking method for Taiwan postmarket monitoring program. J Food Drug Anal. 2019;27:347–354.
- 13. Kalman RE. A new approach to linear filtering and prediction problems. Trans ASME. J Basic Eng. 1960;82(1):35–45.
- 14. Julier SJ, Uhlmann JK. A counter example to the theory of simultaneous localization and map building. Proc 2001 ICRA IEEE Int Conf Robot Autom (Cat No01CH37164). Seoul, Korea (South). 2001;4:4238–4243. doi:10.1109/ROBOT.2001.933280
- 15. Gamini Dissanayake MWM, Newman P, Clark S, Durrant-Whyte HF, Csorba M. A solution to the simultaneous localization and map building (SLAM) problem. IEEE Trans Robot Autom. 2001;17(3):229–41.
- 16. Smith R, Self M, Cheeseman P. A stochastic map for uncertain spatial relationships. Mach Intell Pattern Recognit [Internet]. 1988;5:435–61. Available from: http://portal.acm.org/citation.cfm?id=57472
- 17. Moutarlier P, Chatila R. An Experimental System for Incremental Environment Modelling by an Autonomous Mobile Robot. LAAS-CNRS, 7 Avenue du Colonel Roche, 31077 Toulouse.
- 18. Jazwinski AH. Stochastic Processes and Filtering Theory. Academic Press. 1970;64.
- 19. Zhu J, Zheng N, Yuan Z, et al. A SLAM algorithm based on the central difference Kalman filter. IEEE Intell Veh Symp. Xi'an, China. 2009;123–8.
- 20. Jiang X, Li T, Yu Y. A novel SLAM algorithm with Adaptive Kalman filter. ICARM 2016 Int Conf Adv Robot Mechatronics. 2016; 107–11.
- 21. Tian Y, Suwoyo H, Wang W, Mbemba D, Li L. An AEKF-SLAM Algorithm with Recursive Noise Statistic Based on MLE and EM. J Intell Robot Syst. 2020;97:339–55.
- 22. Julier SJ, Uhlmann JK. New extension of the Kalman filter to nonlinear systems. Proc SPIE 3068, Signal Processing, Sensor Fusion, and Target Recognition VI. 1997.
- 23. Havangi R. Robust SLAM: SLAM based on H∞ square root unscented Kalman filter. Nonlinear Dyn. 2016;83(1):767–79.
- 24. Bahraini M, Bozorg M, Rad A. A new adaptive UKF algorithm to improve the accuracy of SLAM. Int J Robot. 2019;5(1):35–46.
- 25. Bahraini MS. On the Efficiency of SLAM Using Adaptive Unscented Kalman Filter. Iran J Sci Technol Trans Mech Eng [Internet]. 2020;44:727–35. Available from: https://doi.org/10.1007/s40997-019-00294-z
- 26. Tang M, Chen Z, Yin F. SLAM with Improved Schmidt Orthogonal Unscented Kalman Filter. Int J Control Autom Syst. 2022;20:1327–35.
- 27. Liu D, Duan J, Shi H. A Strong Tracking Square Root Central Difference FastSLAM for Unmanned Intelligent Vehicle With Adaptive Partial Systematic Resampling. IEEE Trans Intell Transp Syst. 2016;17(11):3110–20.
- 28. Maybeck PS. Stochastic Models, Estimation, and Control. Acad Press. 1979;1:282.
- 29. Garritsen T. Using the Extended Information Filter for Localization of Humanoid Robots on a Soccer Field. 2018;1–25.
- 30. Thrun S, Liu Y, Koller D, Ng AY, Ghahramani Z, Durrant-Whyte H. Simultaneous localization and mapping with sparse extended in-formation filters. Int J Rob Res. 2004;23(7–8):693–716.
- 31. Walter MR, Eustice RM, Leonard JJ. Exactly sparse extended information filters for feature-based SLAM. Int J Rob Res. 2007;26(4):335–59.
- 32. He B, Liu Y, Dong D, Shen Y, Yan T, Nian R. Simultaneous localization and mapping with iterative sparse extended information filter for autonomous vehicles. Sensors (Switzerland). 2015;15(2):19852–79.
- 33. Zhang H, Liu Y, Tan J, Xiong N. RGB-D SLAM Combining Visual Odometry and Extended Information Filter. Sensors [Internet]. 2015;15:18742–66. Available from: www.mdpi.com/journal/sensors
- 34. Ila V, Porta JM, Andrade-Cetto J. Information-based compact pose SLAM. IEEE Trans Robot. 2010;26(1):78–93.
- 35. Del Moral P. Nonlinear filtering: Interacting particle resolution. Comptes Rendus l’Académie des Sci - Ser I - Math. 1996;2(4):555–80.
- 36. Gordon NJ, Salmond DJ, Smith AFM. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proc F. 1993;140(2):107–13.
- 37. Liu JS, Chen R. Sequential Monte Carlo methods for dynamic systems. J Am Stat Assoc. 1998;93(443):1032–1044.
- 38. Skare Ø, Bølviken E, Holden L. Improved Sampling-Importance Resampling and Reduced Bias Importance Sampling. Scand J Stat. 2003;30(4):719–737.
- 39. Bruno MGS. Regularized Particle Filters. In: Sequential Monte Carlo Methods for Nonlinear Discrete-Time Filtering. Synth Lect Signal Process, Springer. 2013.
- 40. Blackwell D. Conditional Expectation and Unbiased Sequential Estimation. Ann Math Stat. 1947;18(1):105–10.
- 41. Doucet A, Murphy K. Rao-Blackwellised Particle Filtering for Dynamic Bayesian Networks. 1999.
- 42. Murphy K, Russell S. Rao-Blackwellised Particle Filtering for Dynamic Bayesian Networks. Springer New York. 2001;43(2):499–515.
- 43. Montemerlo M, Thrun S, Koller D, Wegbreit B. FastSLAM: A Factored Solution to the Simultaneous Localization and Mapping Problem. Eighteenth Natl Conf Artif Intell. Menlo Park. 2002;593–598.
- 44. Montemerlo M, Thrun S, Siciliano B. FastSLAM: A Scalable Method for the Simultaneous Localization and Mapping Problem in Robotics. Springer. 2007;27.
- 45. Montemerlo M, Thrun S, Koller D, Wegbreit B. FastSLAM 2.0: An Improved Particle Filtering Algorithm for Simultaneous Localization and Mapping that Provably Converges. IJCAI'03 Proc 18th Int Jt Conf Artif Intell. 2003;1151–6.
- 46. Kim C, Sakthivel R, Chung WK. Unscented FastSLAM: A Robust Algorithm for the Simultaneous Localization and Mapping Problem. 2008.
- 47. Eliazar A, Parr R. DP-SLAM: Fast, robust simultaneous localization and mapping without predetermined landmarks. IJCAI Int Jt Conf Artif Intell. 2003;1135–42.
- 48. Eliazar AI, Parr R. DP-SLAM 2.0. Proc IEEE Int Conf Robot Autom (ICRA). New Orleans, LA, USA. 2004.
- 49. Zikos N, Petridis V. 6-DoF Low Dimensionality SLAM (L-SLAM). J Intell Robot Syst. 2015;79:55–72.
- 50. Nie F, Zhang W, Yao Z, Shi Y, Li F, Huang Q. LCPF: A Particle Filter Lidar SLAM System with Loop Detection and Correction. IEEE Access. 2020;8:20401–12.
- 51. Hua J, Cheng M. Improved UFastSLAM algorithm based on particle filter. IEEE 9th Jt Int Inf Technol Artif Intell Conf. 2020;1050–5.
- 52. Lin M, Yang C, Li D. An Improved Transformed Unscented FastSLAM with Genetic Resampling. IEEE Trans Ind Electron. 2019;66(5):3583–94.
- 53. Tang M, Chen Z, Yin F. An Improved Adaptive Unscented FastSLAM with Genetic Resampling. Int J Control Autom Syst. 2021;19(4):1677–90.
- 54. Lu F, Milios E. Globally Consistent Range Scan Alignment for Environment Mapping. Auton Robots. 1997;4(4):333–49.
- 55. Thrun S, Montemerlo M. The GraphSLAM Algorithm with Applications to Large-Scale Mapping of Urban Structures. Int J Rob Res. 2006;25(5–6):403–29.
- 56. Grisetti G, Stachniss C, Grzonka S, Burgard W. A tree parameterization for efficiently computing maximum likelihood maps using gradient descent. Robot Sci Syst. 2008;3:65–72.
- 57. Frese U. Treemap: An O(log n) algorithm for indoor simultaneous localization and mapping. Auton Robots. 2006;103–22.
- 58. Grisetti G, Kümmerle R, Stachniss C, Frese U, Hertzberg C. Hierarchical optimization on manifolds for online 2D and 3D mapping. Proc IEEE Int Conf Robot Autom. 2010;273–8.
- 59. Kaess M, Johannsson H, Roberts R, Ila V, Leonard JJ, Dellaert F. iSAM2: Incremental smoothing and mapping using the Bayes tree. Int J Rob Res. 2012;31(2):216–35.
- 60. Kümmerle R, Grisetti G, Strasdat H, Konolige K, Burgard W. g2o: A General Framework for Graph Optimization. IEEE Int Conf Robot Autom. Shanghai, China. 2011;3607–13.
- 61. Dellaert F. Factor Graphs and GTSAM: A Hands-on Introduction. Tech Rep (Georgia Tech, Atlanta, 2012);1–27. Available from: http://tinyurl.com/gtsam.
- 62. Agarwal P, Tipaldi GD, Spinello L, Stachniss C, Burgard W. Robust map optimization using dynamic covariance scaling. Proc - IEEE Int Conf Robot Autom. 2013.
- 63. Strasdat H, Davison AJ, Montiel JMM, Konolige K. Double window optimisation for constant time visual SLAM. Int Conf Comput Vis. 2011.
- 64. Ruhnke M, Kümmerle R, Grisetti G, Burgard W. Highly accurate 3D surface models by sparse surface adjustment. IEEE Int Conf Robot Autom. 2012. doi:10.1109/ICRA.2012.6225077
- 65. Stachniss C, Leonard JJ, Thrun S. Simultaneous Localization and Mapping. In: Springer Handbook of Robotics, Part E, Ch. 46. 2016;1153–75.
- 66. Zhao L, Huang S, Dissanayake G. Linear SLAM: Linearising the SLAM problems using submap joining. Automatica. 2018;1–22.
- 67. Holder M, Hellwig S, Winner H. Real-time pose graph SLAM based on radar. IEEE Intell Veh Symp. 2019.
- 68. Youyang F, Qing W, Gaochao Y. Incremental 3-D pose graph optimization for SLAM algorithm without marginalization. Int J Adv Robot Syst. 2020;1–14.
- 69. Fan T, Wang H, Rubenstein M, Murphey T. CPL-SLAM: Efficient and certifiably correct planar graph-based SLAM using the complex number representation. IEEE Trans Robot. 2020;36(6):1719–37.
- 70. Sun Z, Wu B, Xu CZ, Sarma SE, Yang J, Kong H. Frontier Detection and Reachability Analysis for Efficient 2D Graph-SLAM Based Active Exploration. IEEE/RSJ Int Conf Intell Robot Syst. 2020;2051–8.
- 71. Pierzchała M, Giguère P, Astrup R. Mapping forests using an unmanned ground vehicle with 3D LiDAR and graph-SLAM. Comput Electron Agric. 2018;145:217–25.
- 72. Press WH, Teukolsky SA, Vetterling WT, Flannery BP. Levenberg-Marquardt Method. In: Numerical Recipes in C: The Art of Scientific Computing. 1992;542–54.
- 73. Shum HY, Ke Q, Zhang Z. Efficient Bundle Adjustment with Virtual Key Frames: A Hierarchical Approach to Multi-frame Structure from Motion. IEEE Comput Soc Conf Comput Vis Pattern Recognit. 1999.
- 74. Hartley R, Zisserman A. Multiple View Geometry in Computer Vision. Cambridge Univ Press. 2000;18.
- 75. Melbouci K, Collette SN, Gay-Bellile V, Ait-Aider O, Carrier M, Dhome M. Bundle adjustment revisited for SLAM with RGBD sensors. Proc 14th IAPR Int Conf Mach Vis Appl (MVA). 2015;166–9.
- 76. Frost D, Prisacariu V, Murray D. Recovering Stable Scale in Monocular SLAM Using Object-Supplemented Bundle Adjustment. IEEE Trans Robot. 2018;34(3):1–11.
- 77. Schops T, Sattler T, Pollefeys M. BAD SLAM: Bundle Adjusted Direct RGB-D SLAM. IEEE/CVF Conf Comput Vis Pattern Recognit. 2019;134–44.
- 78. Zhao Y, Smith JS, Vela PA. Good Graph to Optimize: Cost-Effective, Budget-Aware Bundle Adjustment in Visual SLAM. Comput Vis Pattern Recognit [Internet]. 2020;1–20. Available from: http://arxiv.org/abs/2008.10123
- 79. Wang K, Ma S, Ren F, Lu J. SBAS: Salient Bundle Adjustment for Visual SLAM. arXiv:2012.11863 [cs.RO]. 2020;1–11.
- 80. Campos C, Elvira R, Rodriguez JJG, Montiel JMM, Tardos JD. ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial and Multimap SLAM. IEEE Trans Robot. 2021;37(6):1874–90.
- 81. Gonzalez M, Marchand E, Kacete A, Royan J. S3LAM: Structured Scene SLAM. Robotics [Internet]. 2022. Available from: http://arxiv.org/abs/2109.07339
- 82. Tanaka T, Sasagawa Y, Okatani T. Learning to Bundle-adjust: A Graph Network Approach to Faster Optimization of Bundle Adjustment for Vehicular SLAM. Proc IEEE Int Conf Comput Vis. 2021;6230–9.
- 83. Rosten E, Drummond T. Machine Learning for High-Speed Corner Detection. In: Leonardis A, Bischof H, Pinz A (eds) Computer Vision – ECCV 2006. Lect Notes Comput Sci, Springer, Berlin, Heidelberg. 2006;3951:430–43.
- 84. Bay H, Ess A, Tuytelaars T, Van Gool L. Speeded-Up Robust Features (SURF). Comput Vis Image Underst. 2008;110(3):346–59.
- 85. Calonder M, Lepetit V, Strecha C, Fua P. BRIEF: Binary robust independent elementary features. ECCV 2010 Lect Notes Comput Sci Springer. Berlin. Heidelberg. 2010;6314:778–92.
- 86. Rublee E, Rabaud V, Konolige K, Bradski G. ORB: an efficient alternative to SIFT or SURF. Int Conf Comput Vis. Barcelona, Spain. 2011;2564–71.
- 87. Harris C, Stephens M. A Combined Corner and Edge Detector. Proc 4th Alvey Vis Conf. 1988;147–151.
- 88. Civera J, Lee SH. RGB-D Odometry and SLAM. In: Rosin P, Lai YK, Shao L, Liu Y (eds) RGB-D Image Analysis and Processing. Adv Comput Vis Pattern Recognit, Springer, Cham. 2019;117–144.
- 89. Davison AJ, Reid ID, Molton ND, Stasse O. MonoSLAM: Real-Time Single Camera SLAM. IEEE Trans Pattern Anal Mach Intell. 2007;29(6):1052–67.
- 90. Davison AJ. Real-time simultaneous localisation and mapping with a single camera. Proc Ninth IEEE Int Conf Comput Vision. Nice. Fr. 2003;2:1403–10.
- 91. Klein G, Murray D. Parallel tracking and mapping for small AR workspaces. 2007 6th IEEE ACM Int Symp Mix Augment Reality. ISMAR. 2007;225–34.
- 92. Klein G, Murray D. Parallel tracking and mapping on a camera phone. 8th IEEE Int Symp Mix Augment Reality (ISMAR). Orlando, FL, USA. 2009;83–6.
- 93. Endres F, Hess J, Engelhard N, Sturm J, Cremers D, Burgard W. An evaluation of the RGB-D SLAM system. IEEE Int Conf Robot Autom. Saint Paul, MN, USA. 2012;1691–6.
- 94. Mur-Artal R, Montiel JMM, Tardos JD. ORB-SLAM: A Versatile and Accurate Monocular SLAM System. IEEE Trans Robot. 2015;31(5):1147–63.
- 95. Gálvez-López D, Tardós JD. Bags of Binary Words for Fast Place Recognition in Image Sequences. IEEE Trans Robot. 2012;28(5):1188–97.
- 96. Strasdat H, Davison AJ, Montiel JMM. Scale Drift-Aware Large Scale Monocular SLAM. Robot Sci Syst. 2010.
- 97. Mei C, Sibley G, Newman P. Closing loops without places. IEEE/RSJ 2010 Int Conf Intell Robot Syst IROS 2010 - Conf Proc. 2010;3738–44.
- 98. Mur-Artal R, Tardós JD. ORB-SLAM: Tracking and Mapping Recognizable Features. Workshop Multi VIew Geom Robot (MVIGRO) - RSS 2014 [Internet]. 2014. Available from: http://vindelman.technion.ac.il/events/mvigro/MurArtal14rss_ws.pdf
- 99. Mur-Artal R, Tardos JD. ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras. IEEE Trans Robot. 2017;33(5):1255–62.
- 100. Sumikura S, Shibuya M, Sakurada K. OpenVSLAM: A versatile visual SLAM framework. Proc 27th ACM Int Conf Multimedia (MM '19). 2019;2292–5.
- 101. Muñoz-Salinas R, Medina-Carnicer R. UcoSLAM: Simultaneous localization and mapping by fusion of keypoints and squared planar markers. Comput Vis Pattern Recognit. 2019.
- 102. Sun Q, Yuan J, Zhang X, Duan F. Plane-Edge-SLAM: Seamless Fusion of Planes and Edges for SLAM in Indoor Environments. IEEE Trans Autom Sci Eng. 2021;18(4):2061–75.
- 103. Newcombe RA, Lovegrove SJ, Davison AJ. DTAM: Dense Tracking and Mapping in Real-Time. Int Conf Comput Vis. Barcelona, Spain. 2011;2320–7.
- 104. Engel J, Sturm J, Cremers D. Semi-Dense Visual Odometry for a Monocular Camera. IEEE Int Conf Comput Vis. Sydney, NSW, Australia. 2013;1449–56.
- 105. Engel J, Schöps T, Cremers D. LSD-SLAM: Large-Scale Direct Monocular SLAM. Eur Conf Comput Vis (ECCV). 2014;834–49.
- 106. Engel J, Stückler J, Cremers D. Large-scale direct SLAM with Stereo Cameras. IEEE/RSJ Int Conf Intell Robot Syst (IROS). Hamburg, Germany. 2015;1935–42.
- 107. Caruso D, Engel J, Cremers D. Large-scale direct SLAM for omnidirectional cameras. IEEE/RSJ Int Conf Intell Robot Syst (IROS). Hamburg, Germany. 2015;141–8.
- 108. Forster C, Pizzoli M, Scaramuzza D. SVO: Fast Semi-Direct Monocular Visual Odometry. IEEE Int Conf Robot Autom (ICRA). Hong Kong, China. 2014;15–22.
- 109. Engel J, Koltun V, Cremers D. Direct Sparse Odometry. IEEE Trans Pattern Anal Mach Intell. 2018;40(3):611–25.
- 110. Gao X, Wang R, Demmel N, Cremers D. LDSO: Direct Sparse Odometry with Loop Closure. IEEE Int Conf Intell Robot Syst Spain. 2018;2198–204.
- 111. Sheng C, Pan S, Gao W, Tan Y, Zhao T. Dynamic-DSO: Direct sparse odometry using objects semantic information for dynamic environments. Appl Sci. 2020;10(4):1–20.
- 112. Newcombe RA, Izadi S, Hilliges O, Molyneaux D, Kim D, Davison AJ, et al. KinectFusion: Real-time dense surface mapping and tracking. 10th IEEE Int Symp Mix Augment Reality (ISMAR). Basel, Switzerland. 2011;127–36.
- 113. Concha A, Civera J. RGBDTAM: A cost-effective and accurate RGB-D tracking and mapping system. IEEE/RSJ Int Conf Intell Robot Syst (IROS). Vancouver, Canada. 2017;6756–63.
- 114. Fontán A, Civera J, Triebel R. Information-Driven Direct RGB-D Odometry. IEEE/CVF Conf Comput Vis Pattern Recognit (CVPR). Seattle, WA, USA. 2020;4928–36.
- 115. Ma L, Kerl C, Stückler J, Cremers D. CPA-SLAM: Consistent Plane-Model Alignment for Direct RGB-D SLAM. IEEE Int Conf Robot Autom (ICRA). Stock Sweden [Internet]. 2016;1:1285–91. Available from: https://pdfs.semanticscholar.org/d41a/4ab403d6c7611047f83f575cf4c16bfd5282.pdf
- 116. Dai A, Nießner M, Zollhöfer M, Izadi S, Theobalt C. BundleFusion: Real-time Globally Consistent 3D Reconstruction using On-the-fly Surface Re-integration. ACM Trans Graph. 2017;36(3).
- 117. Hsiao M, Westman E, Zhang G, Kaess M. Keyframe-based dense planar SLAM. IEEE Int Conf Robot Autom (ICRA). Singapore. 2017;5110–7.
- 118. Dong X, Cheng L, Peng H, Li T. FSD-SLAM: a fast semi-direct SLAM algorithm. Complex Intell Syst [Internet]. 2022;8:1823–34. Available from: https://doi.org/10.1007/s40747-021-00323-y
- 119. Bloesch M, Omari S, Hutter M, Siegwart R. Robust visual inertial odometry using a direct EKF-based approach. IEEE/RSJ Int Conf Intell Robot Syst (IROS). Hamburg Ger. 2015;298–304.
- 120. Sun K, Mohta K, Pfrommer B, Watterson M, Liu S, Mulgaonkar Y, et al. Robust Stereo Visual Inertial Odometry for Fast Autonomous Flight. IEEE Robot Autom Lett. 2018;3(2):965–72.
- 121. Mourikis AI, Roumeliotis SI. A multi-state constraint Kalman filter for vision-aided inertial navigation. Proc 2007 IEEE Int Conf Robot Autom Rome. Italy. 2007;3565–72.
- 122. Leutenegger S, Lynen S, Bosse M, Siegwart R, Furgale P. Keyframe-Based Visual-Inertial Odometry Using Nonlinear Optimization. Int J Rob Res. 2014;34(3):1–26.
- 123. Schneider T, Dymczyk M, Fehr M, Egger K, Lynen S, Gilitschenski I et al. Maplab: An Open Framework for Research in Visual-Inertial Mapping and Localization. IEEE Robot Autom Lett. 2018;3(3):1418–25.
- 124. Liu H, Chen M, Zhang G, Bao H, Bao Y. ICE-BA: Incremental, Consistent and Efficient Bundle Adjustment for Visual-Inertial SLAM. IEEE/CVF Conf Comput Vis Pattern Recognition. Salt Lake City. UT USA. 2018;1974–82.
- 125. Forster C, Carlone L, Dellaert F, Scaramuzza D. On-Manifold Preintegration for Real-Time Visual-Inertial Odometry. IEEE Trans Robot. 2017;33(1):1–20.
- 126. Von Stumberg L, Usenko V, Cremers D. Direct Sparse Visual-Inertial Odometry Using Dynamic Marginalization. IEEE Int Conf Robot Autom (ICRA). Brisbane QLD. Aust. 2018;2510–7.
- 127. Qin T, Li P, Shen S. VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator. IEEE Trans Robot. 2018;34(4):1004–20.
- 128. Yang Z, Shen S. Monocular visual-inertial state estimation with online initialization and camera-IMU extrinsic calibration. IEEE Trans Autom Sci Eng. 2017;14(1):39–51.
- 129. Mur-Artal R, Tardos JD. Visual-Inertial Monocular SLAM with Map Reuse. IEEE Robot Autom Lett. 2017;2(2):796–803.
- 130. He Y, Zhao J, Guo Y, He W, Yuan K. PL-VIO: Tightly-coupled monocular visual–inertial odometry using point and line features. Sensors (Switzerland). 2018;18(4):1–25.
- 131. Zheng F, Tsai G, Zhang Z, Liu S, Chu CC, Hu H. Trifo-VIO: Robust and Efficient Stereo Visual Inertial Odometry Using Points and Lines. IEEE/RSJ Int Conf Intell Robot Syst (IROS). Madrid Spain. 2018;3686–93.
- 132. Li X, Li Y, Ornek EP, Lin J, Tombari F. Co-Planar Parametrization for Stereo-SLAM and Visual-Inertial Odometry. IEEE Robot Autom Lett. 2020;5(4):6972–9.
- 133. Rosinol A, Sattler T, Pollefeys M, Carlone L. Incremental visual-inertial 3d mesh generation with structural regularities. Int Conf Robot Autom (ICRA). Montr QC. Canada. 2019;8220–6.
- 134. Seiskari O, Rantalankila P, Kannala J, Ylilammi J, Rahtu E, Solin A. HybVIO: Pushing the Limits of Real-time Visual-inertial Odome-try. IEEE/CVF Winter Conf Appl Comput Vis (WACV). Waikoloa HI, USA. 2022;287-296.
- 135. Kaushik V, Jindgar K, Lall B. ADAADepth: Adapting data augmentation and attention for self-supervised monocular depth estimation. IEEE Robot Autom Lett. 2021;6(4):7791–8.
- 136. Tateno K, Tombari F, Laina I, Navab N. CNN-SLAM: Real-time dense monocular SLAM with learned depth prediction. Comput Vis Pattern Recognit. 2017;6243–52.
- 137. Bloesch M, Czarnowski J, Clark R, Leutenegger S, Davison AJ. CodeSLAM: Learning a Compact, Optimisable Representation for Dense Visual SLAM. 2018;2560–8. Available from: http://openaccess.thecvf.com/content_cvpr_2018/papers/Bloesch_CodeSLAM_--_Learning_CVPR_2018_paper.pdf
- 138. Mohanty V, Agrawal S, Datta S, Ghosh A, Sharma VD, Chakravarty D. DeepVO: A Deep Learning approach for Monocular Visual Odometry. 2016. Available from: http://arxiv.org/abs/1611.06069
- 139. Li R, Wang S, Long Z, Gu D. UnDeepVO: Monocular Visual Odometry Through Unsupervised Deep Learning. IEEE Int Conf Robot Autom (ICRA). Brisbane QLD. Aust. 2018;7286–91.
- 140. Yang N, Von Stumberg L, Wang R, Cremers D. D3VO: Deep Depth, Deep Pose and Deep Uncertainty for Monocular Visual Odometry. Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit. 2020;1278–89.
- 141. Yin Z, Shi J. GeoNet: Unsupervised Learning of Dense Depth, Optical Flow and Camera Pose. IEEE/CVF Conf Comput Vis Pattern Recognit (CVPR). 2018;1983–92. Available from: http://arxiv.org/abs/1803.02276v2
- 142. Zhao C, Sun L, Purkait P, Duckett T, Stolkin R. Learning Monocular Visual Odometry with Dense 3D Mapping from Dense 3D Flow. IEEE/RSJ Int Conf Intell Robot Syst (IROS). Madrid, Spain. 2018;6864–71.
- 143. Zhou T, Brown M, Snavely N, Lowe DG. Unsupervised Learning of Depth and Ego-Motion from Video. IEEE Conf Comput Vis Pattern Recognit (CVPR). Honolulu, HI, USA. 2017;6612–9. Available from: https://github.com/tinghuiz/SfMLearner
- 144. Zagoruyko S, Komodakis N. Learning to Compare Image Patches via Convolutional Neural Networks. IEEE Conf Comput Vis Pattern Recognit (CVPR). Boston, MA, USA. 2015;4353–61.
- 145. G VKB, Carneiro G, Reid I. Learning Local Image Descriptors with Deep Siamese and Triplet Convolutional Networks by Minimizing Global Loss Functions. IEEE Conf Comput Vis Pattern Recognit (CVPR). Las Vegas, NV, USA [Internet]. 2016;5385–94. Available from: http://openaccess.thecvf.com/content_cvpr_2016/supplemental/G_Learning_Local_Image_2016_CVPR_supplemental.pdf
- 146. Mayer N, Ilg E, Hausser P, Fischer P. A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation. IEEE Conf Comput Vis Pattern Recognit (CVPR). Las Vegas NV. USA. 2016;4040–8.
- 147. Tankovich V, Häne C, Zhang Y, Kowdle A, Fanello S, Bouaziz S. HitNet: Hierarchical Iterative Tile Refinement Network for Real-time Stereo Matching. IEEE/CVF Conf Comput Vis Pattern Recognit (CVPR). Nashville TN. USA. 2021;14357–67.
- 148. Huang PH, Matzen K, Kopf J, Ahuja N, Huang JB. DeepMVS: Learning Multi-view Stereopsis. IEEE/CVF Conf Comput Vis Pattern Recognit. Salt Lake City, UT, USA. 2018;2821–30.
- 149. Song X, Zhao X, Hu H, Fang L. EdgeStereo: A Context Integrated Residual Pyramid Network for Stereo Matching. Comput Vis – ACCV 2018. Lect Notes Comput Sci. 2018. arXiv:1803.05196.
- 150. Shao C, Zhang C, Fang Z, Yang G. A Deep Learning-Based Semantic Filter for RANSAC-Based Fundamental Matrix Calculation and the ORB-SLAM System. IEEE Access. 2020;8:3212–23.
- 151. Zhang W, Liu G, Tian G. A Coarse to Fine Indoor Visual Localization Method Using Environmental Semantic Information. IEEE Access. 2019;7:21963–70.
- 152. Lin YF, Yang LJ, Yu CY, Peng CC, Huang DC. Object recognition and classification of 2D-SLAM using machine learning and deep learning techniques. Int Symp Comput Consum Control (IS3C). Taichung City. Taiwan. 2020;473–6.
- 153. Wang S, Clark R, Wen H, Trigoni N. End-to-End, Sequence-to-Sequence Probabilistic Visual Odometry through Deep Neural Networks. Int J Robot Res [Internet]. 2018;37:513–42. Available from: https://doi.org/10.1177/0278364917734298
- 154. Li J, Li Z, Feng Y, Liu Y, Shi G. Development of a Human-Robot Hybrid Intelligent System Based on Brain Teleoperation and Deep Learning SLAM. IEEE Trans Autom Sci Eng. 2019;16(4):1664–74.
- 155. Lan E. A Novel Deep Learning Architecture By Integrating Visual Simultaneous Localization And Mapping (VSLAM) Into CNN For Real-Time Surgical Video Analysis. 19th Int Symp Biomed Imaging (ISBI). Kolkata, India. 2022;1–5.
- 156. Hu S, Li D, Tang G, Xu X. A 3D semantic visual SLAM in dynamic scenes. 6th IEEE Int Conf Adv Robot Mechatronics (ICARM). Chongqing. China. 2021;522–8.
- 157. Almalioglu Y, Saputra MRU, De Gusmao PPB, Markham A, Trigoni N. GANVO: Unsupervised deep monocular visual odometry and depth estimation with generative adversarial networks. Proc IEEE Int Conf Robot Autom (ICRA). Montreal, QC, Canada. 2019;5474–80.
- 158. Ban X, Wang H, Chen T, Wang Y, Xiao Y. Monocular Visual Odometry based on depth and optical flow Using deep learning. IEEE Trans Instrum Meas. 2021;70:1–19.
- 159. Liang HJ, Sanket NJ, Fermuller C, Aloimonos Y. SalientDSO: Bringing Attention to Direct Sparse Odometry. IEEE Trans Autom Sci Eng. 2019;16(4):1619–26.
- 160. Tang J, Ericson L, Folkesson J, Jensfelt P. GCNv2: Efficient Correspondence Prediction for Real-Time SLAM. IEEE Robot Autom Lett. 2019;4(4):3505–10.
- 161. DeTone D, Malisiewicz T, Rabinovich A. SuperPoint: Self-supervised interest point detection and description. IEEE/CVF Conf Comput Vis Pattern Recognit Work (CVPRW). Salt Lake City, UT, USA. 2018;337–49.
- 162. Yi KM, Trulls E, Lepetit V, Fua P. LIFT: Learned Invariant Feature Transform. Eur Conf Comput Vis (ECCV), Springer. 2016.
- 163. Ganti P, Waslander S. Network uncertainty informed semantic feature selection for visual SLAM. 16th Conf Comput Robot Vis (CRV) Kingston QC. Canada. 2019;121–8.
- 164. Gu X, Wang Y, Ma T. DBLD-SLAM: A Deep-Learning Visual SLAM System Based on Deep Binary Local Descriptor. Int Conf Control Autom Inf Sci (ICCAIS). Xi’an China. 2021;325–30.
- 165. Krishnan KS, Sahin F. ORBDeepOdometry - A feature-based deep learning approach to monocular visual odometry. 14th Annu Conf Syst Syst Eng (SoSE). Anchorage AK. USA. 2019;296–301.
- 166. Huang Z, Wang X, Huang L, Huang C, Wei Y, Liu W. CCNet: Criss-cross attention for semantic segmentation. IEEE Trans Pattern Anal Mach Intell. 2019;603–12.
- 167. Qin Z, Wang J, Lu Y. MonoGRNet: A General Framework for Monocular 3D Object Detection. IEEE Trans Pattern Anal Mach Intell. 2021;44(9):5170–84.
- 168. Clark R, Wang S, Wen H, Markham A, Trigoni N. VINet: Visual-Inertial Odometry as a Sequence-to-Sequence Learning Problem. Proc Thirty-First AAAI Conf Artif Intell. 2017;31(1):3995–4001.
- 169. Costante G, Mancini M, Valigi P, Ciarfuglia TA. Exploring Representation Learning With CNNs for Frame-to-Frame Ego-Motion Estimation. IEEE Robot Autom Lett. 2016;1:18–25.
- 170. Gu X, Wang Y, Ma T. DBLD-SLAM: A Deep-Learning Visual SLAM System Based on Deep Binary Local Descriptor. 2021;325–30.
- 171. Vijayanarasimhan S, Ricco S, Schmid C. SfM-Net: Learning of Structure and Motion from Video. 2017. arXiv preprint arXiv:1704.07804.
- 172. Konda K, Memisevic R. Learning visual odometry with a convolutional network. Proc of the 10th Int Conf Comput Vis Theory Appl. 2015;1:486–90.
- 173. Wang S, Clark R, Wen H, Trigoni N. DeepVO: Towards end-to-end visual odometry with deep Recurrent Convolutional Neural Networks. IEEE Int Conf Robot Autom (ICRA). Singapore. 2017;2043–50.
- 174. Clark R, Wang S, Markham A, Trigoni N, Wen H. VidLoc: A Deep Spatio-Temporal Model for 6-DoF Video-Clip Relocalization. IEEE Conf Comput Vis Pattern Recognit (CVPR). Honolulu, HI, USA. 2017;2652–60.
- 175. Mahattansin N, Sukvichai K, Bunnun P, Isshiki T. Improving Relocalization in Visual SLAM by using Object Detection. 9th Int Conf Electr Eng Comput Telecommun Inf Technol (ECTI-CON). Prachuap Khiri Khan, Thailand. 2022;1–4.
- 176. Li R, Liu Q, Gui J, Gu D, Hu H. Indoor Relocalization in Challenging Environments With Dual-Stream Convolutional Neural Networks. IEEE Trans Autom Sci Eng. 2018;15(2):651–62.
- 177. Dong S, Fan Q, Wang H, Shi J, Yi L, Funkhouser T, et al. Robust Neural Routing Through Space Partitions for Camera Relocaliza-tion in Dynamic Indoor Environments. IEEE/CVF Conf Comput Vis Pattern Recognit (CVPR), Nashville, TN, USA. 2021;8540–50.
- 178. Nakashima R, Seki A. SIR-Net: Scene-Independent End-to-End Trainable Visual Relocalizer. Int Conf 3D Vis (3DV). Quebec City, QC, Canada. 2019;472–81.
- 179. Zhou L. Visual Relocalization using Long-Short Term Memory Fully Convolutional Network. IEEE Int Symp Mix Augment Real Adjun (ISMAR-Adjunct), Munich, Ger. 2018;258–63.
- 180. Duong ND, Kacete A, Soladie C, Richard PY, Royan J. xyzNet: towards machine learning camera relocalization by using a scene coordinate prediction network. IEEE Int Symp Mix Augment Real Adjun (ISMAR-Adjunct). 2018;2–7.
- 181. Wu X, Tian X, Zhou J, Xu P, Chen J. Loop Closure Detection for Visual SLAM Based on SuperPoint Network. 2019 Chinese Autom Congr (CAC). Hangzhou. China. 2019;3789–93.
- 182. Merrill N, Huang G. Lightweight Unsupervised Deep Loop Closure. Robot Sci Syst (RSS). 2018;1–10.
- 183. Xia Y, Li J, Qi L, Fan H. Loop Closure Detection for Visual SLAM Using PCANet Features. Int Jt Conf Neural Networks (IJCNN). Vancouver, BC, Canada. 2016;2274–81.
- 184. Dai K, Cheng L, Yang R, Yan G. Loop Closure Detection Using KPCA and CNN for Visual SLAM. 40th Chinese Control Conf (CCC). Shanghai. China. 2021;8088–93.
- 185. Xiong F, Ding Y, Yu M, Zhao W, et al. A Lightweight Sequence-Based Unsupervised Loop Closure Detection. Int Jt Conf Neural Networks (IJCNN). Shenzhen, China. 2021;1–8.
- 186. Huang L, Zhu M, Zhang M. Visual Loop Closure Detection Based on Lightweight Convolutional Neural Network and Product Quantization. IEEE 12th Int Conf Softw Eng Serv Sci (ICSESS). Beijing, China. 2021;122–6.
- 187. Zhu M, Huang L. Fast and Robust Visual Loop Closure Detection with Convolutional Neural Network. IEEE 3rd Int Conf Front Technol Inf Comput (ICFTIC). Greenville, SC, USA. 2021;3681–91.
- 188. Ma J, Wang S, Zhang K, He Z, Huang J, Mei X. Fast and Robust Loop-Closure Detection via Convolutional Auto-Encoder and Motion Consensus. IEEE Trans Ind Informatics. 2022;18(6):3681–91.
- 189. Cai S, Zhou D, Guo R, Zhou H, Peng K. Implementation of Hybrid Deep Learning Architecture on Loop-Closure Detection. 2018; 521–6.
- 190. Liu Y, Xiang R, Zhang Q, Ren Z, Cheng J. Loop Closure Detection based on Improved Hybrid Deep Learning Architecture. IEEE Int Conf Ubiquitous Comput Commun Data Sci Comput Intell Smart Comput Netw Serv (SmartCNS). Shenyang. China. 2019;312–7.
- 191. Shi X, Li L. Loop Closure Detection for Visual SLAM Systems Based on Convolutional Neural Network. IEEE 24th Int Conf Comput Sci Eng (CSE). Shenyang, China. 2021;123–9.
- 192. Zhou Y, Wang Y, Poiesi F, Qin Q, Wan Y. Loop Closure Detection Using Local 3D Deep Descriptors. IEEE Robot Autom Lett. 2022;7(3):6335–42.
- 193. Osman H, Darwish N, Bayoumi A. LoopNet: Where to Focus? Detecting Loop Closures in Dynamic Scenes. IEEE Robot Autom Lett. 2022;7(2):2031–8.
- 194. Bhutta MUM, Sun Y, Lau D, Liu M. Why-So-Deep: Towards Boosting Previously Trained Models for Visual Place Recognition. IEEE Robot Autom Lett. 2022;7(2):1824–31.
- 195. Gauglitz S, Sweeney C, Ventura J, Turk M, Höllerer T. Live Tracking and Mapping from Both General and Rotation-Only Camera Motion. IEEE Int Symp Mix Augment Real (ISMAR). Atlanta, GA, USA. 2012;13–22.
- 196. Herrera CD, Kim K, Kannala J, Pulli K, Heikkilä J. DT-SLAM: Deferred triangulation for robust SLAM. 2nd Int Conf 3D Vision. Tokyo, Japan. 2014;609–16.
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-cb949861-6240-4140-9242-1b45672d771d