Article title

Visual simultaneous localisation and map-building supported by structured landmarks

Content
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
Visual simultaneous localisation and map-building systems that take advantage of landmarks other than point-wise environment features are rarely reported. This paper describes a method of using an operational map of the robot's surroundings complemented with visible structured passive landmarks. These landmarks are used to improve the self-localisation accuracy of the robot camera and to reduce the size of the Kalman-filter state vector with respect to a vector containing point-wise environment features only. Structured landmarks reduce the drift of the camera pose estimate and improve the reliability of the map built on-line. Results of simulation experiments demonstrating the advantages of this approach are presented.
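The key quantitative claim above is that encoding a structured landmark as a single entity, rather than as several independent point features, shrinks the Kalman-filter state vector and hence the filter covariance. The toy Python sketch below illustrates that counting argument only; the chosen dimensions, the four-corners-per-landmark grouping, and the state_size helper are illustrative assumptions, not the parameterisation used in the paper.

```python
# Minimal, hypothetical sketch (not the paper's actual formulation) of why
# structured landmarks shrink an EKF-SLAM state vector: a landmark with known
# internal geometry (e.g. a rectangular marker of known size) can be stored as
# a single 6-DoF pose instead of several independent 3-D point features.
import numpy as np

CAMERA_DOF = 6         # camera pose: position (3) + orientation (3)
POINT_DOF = 3          # one point-wise feature: (x, y, z)
STRUCT_DOF = 6         # one structured landmark: 6-DoF pose, geometry known a priori
POINTS_PER_STRUCT = 4  # e.g. the four corners the landmark would otherwise contribute

def state_size(n_points: int, n_struct: int) -> int:
    """Length of the EKF state vector for a given map composition."""
    return CAMERA_DOF + POINT_DOF * n_points + STRUCT_DOF * n_struct

# Map with 40 point features only:
n_only_points = state_size(n_points=40, n_struct=0)

# Same scene, with 5 groups of 4 corner points replaced by 5 structured landmarks:
n_with_struct = state_size(n_points=40 - 5 * POINTS_PER_STRUCT, n_struct=5)

print(f"point-wise map:  state length = {n_only_points}")  # 6 + 120 = 126
print(f"structured map:  state length = {n_with_struct}")  # 6 + 60 + 30 = 96

# The EKF covariance is quadratic in the state length, so the update cost drops
# roughly by (96/126)**2 in this toy example.
cov_only = np.zeros((n_only_points, n_only_points))
cov_struct = np.zeros((n_with_struct, n_with_struct))
print(f"covariance entries: {cov_only.size} vs {cov_struct.size}")
```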
Year
Pages
281-293
Physical description
Bibliography: 19 items, figures, charts
Authors
author
  • Institute of Control and Information Engineering, Poznań University of Technology, Pl. Marii Skłodowskiej-Curie 5, 60-965 Poznań, Poland
author
  • Institute of Control and Information Engineering, Poznań University of Technology, Pl. Marii Skłodowskiej-Curie 5, 60-965 Poznań, Poland
Document type
YADDA identifier
bwmeta1.element.baztech-article-BPZ1-0057-0021