Article title

Efficient generation of 3D surfel maps using RGB-D sensors

Languages of publication
EN
Abstract
EN
The article focuses on the problem of building dense 3D occupancy maps using commercial RGB-D sensors and the SLAM approach. In particular, it addresses the problem of 3D map representations, which must be able both to store millions of points and to offer efficient update mechanisms. The proposed solution consists of two key elements, visual odometry and surfel-based mapping, and introduces substantial improvements: the surfel map is stored in octree form, and a frustum-culling-based method accelerates the map update step. The performed experiments verify the usefulness and efficiency of the developed system.
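The frustum-culling idea mentioned in the abstract can be illustrated with a short sketch: extract the six frustum planes from a view-projection matrix (following Gribb and Hartmann [10]) and reject any octree node whose axis-aligned bounds lie fully outside the frustum, so its surfels are skipped during the map update. The function names and the box-corner test below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def extract_frustum_planes(vp):
    """Six frustum planes (a, b, c, d) with ax + by + cz + d >= 0 inside,
    extracted from a 4x4 view-projection matrix (column-vector,
    OpenGL-style clip space), per Gribb and Hartmann [10]."""
    rows = [
        vp[3] + vp[0],  # left
        vp[3] - vp[0],  # right
        vp[3] + vp[1],  # bottom
        vp[3] - vp[1],  # top
        vp[3] + vp[2],  # near
        vp[3] - vp[2],  # far
    ]
    # normalize so plane-point products are signed distances
    return [p / np.linalg.norm(p[:3]) for p in rows]

def node_outside(planes, lo, hi):
    """True if the axis-aligned bounds [lo, hi] of an octree node
    (and hence every surfel stored in it) lie fully outside the
    frustum; such nodes can be skipped during the map update."""
    for p in planes:
        # corner of the box farthest along the plane normal
        corner = np.where(p[:3] >= 0.0, hi, lo)
        if p[:3] @ corner + p[3] < 0.0:
            return True  # the whole box is behind this plane
    return False
```

The test is conservative: a node rejected by any single plane is discarded without touching its surfels, while a node that passes may still be only partially visible, which is acceptable for culling.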
Pages
99–122
Physical description
Bibliography: 57 items; figures, tables, charts.
Authors
author
  • Industrial Research Institute for Automation and Measurements, Al. Jerozolimskie 202, 02-486 Warsaw, Poland
author
  • Institute of Control and Computation Engineering, Warsaw University of Technology, ul. Nowowiejska 15/19, 00-665 Warsaw, Poland
  • Institute of Control and Computation Engineering, Warsaw University of Technology, ul. Nowowiejska 15/19, 00-665 Warsaw, Poland
author
  • Institute of Control and Computation Engineering, Warsaw University of Technology, ul. Nowowiejska 15/19, 00-665 Warsaw, Poland
Bibliography
  • [1] Alahi, A., Ortiz, R. and Vandergheynst, P. (2012). FREAK: Fast retina keypoint, 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, pp. 510–517.
  • [2] Calonder, M., Lepetit, V., Strecha, C. and Fua, P. (2010). BRIEF: Binary robust independent elementary features, Computer Vision (ECCV 2010), Heraklion, Greece, pp. 778–792.
  • [3] Censi, A. (2008). An ICP variant using a point-to-line metric, IEEE International Conference on Robotics and Automation, ICRA 2008, Pasadena, CA, USA, pp. 19–25.
  • [4] Clark, J.H. (1976). Hierarchical geometric models for visible surface algorithms, Communications of the ACM 19(10): 547–554.
  • [5] Dryanovski, I., Valenti, R. and Xiao, J. (2013). Fast visual odometry and mapping from RGB-D data, 2013 IEEE International Conference on Robotics and Automation (ICRA), Karlsruhe, Germany, pp. 2305–2310.
  • [6] Endres, F., Hess, J., Engelhard, N., Sturm, J., Cremers, D. and Burgard, W. (2012). An evaluation of the RGB-D SLAM system, 2012 IEEE International Conference on Robotics and Automation (ICRA), St Paul, MN, USA, pp. 1691–1696.
  • [7] Figat, J., Kornuta, T. and Kasprzak, W. (2014). Performance evaluation of binary descriptors of local features, in L.J. Chmielewski et al. (Eds.), Proceedings of the International Conference on Computer Vision and Graphics, Lecture Notes in Computer Science, Vol. 8671, Springer, Berlin/Heidelberg, pp. 187–194.
  • [8] Fischler, M.A. and Bolles, R.C. (1981). Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography, Communications of the ACM 24(6): 381–395.
  • [9] Frome, A., Huber, D., Kolluri, R., Bülow, T. and Malik, J. (2004). Recognizing objects in range data using regional point descriptors, Computer Vision (ECCV 2004), Prague, Czech Republic, pp. 224–237.
  • [10] Gribb, G. and Hartmann, K. (2001). Fast extraction of viewing frustum planes from the world-view-projection matrix, http://gamedevs.org.
  • [11] Handa, A., Whelan, T., McDonald, J. and Davison, A.J. (2014). A benchmark for RGB-D visual odometry, 3D reconstruction and SLAM, 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, pp. 1524–1531.
  • [12] Henry, P., Krainin, M., Herbst, E., Ren, X. and Fox, D. (2012). RGB-D mapping: Using kinect-style depth cameras for dense 3D modeling of indoor environments, International Journal of Robotic Research 31(5): 647–663.
  • [13] Henry, P., Krainin, M., Herbst, E., Ren, X. and Fox, D. (2014). RGBD mapping: Using depth cameras for dense 3D modeling of indoor environments, in O. Khatib et al. (Eds.), Experimental Robotics, Springer, Berlin/Heidelberg, pp. 477–491.
  • [14] Holzer, S., Rusu, R., Dixon, M., Gedikli, S. and Navab, N. (2012). Adaptive neighborhood selection for real-time surface normal estimation from organized point cloud data using integral images, Intelligent Robots and Systems (IROS), Vilamoura-Algarve, Portugal, pp. 2684–2689.
  • [15] Hornung, A., Wurm, K.M., Bennewitz, M., Stachniss, C. and Burgard, W. (2013). OctoMap: An efficient probabilistic 3D mapping framework based on octrees, Autonomous Robots 34(3): 189–206.
  • [16] Izadi, S., Kim, D., Hilliges, O., Molyneaux, D., Newcombe, R., Kohli, P., Shotton, J., Hodges, S., Freeman, D., Davison, A. and Fitzgibbon, A. (2011). KinectFusion: Real-time 3D reconstruction and interaction using a moving depth camera, Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, UIST’11, Santa Barbara, CA, USA, pp. 559–568.
  • [17] Kasprzak, W. (2010). Integration of different computational models in a computer vision framework, 2010 International Conference on Computer Information Systems and Industrial Management Applications (CISIM), Cracow, Poland, pp. 13–18.
  • [18] Kasprzak, W., Pietruch, R., Bojar, K., Wilkowski, A. and Kornuta, T. (2015). Integrating data- and model-driven analysis of RGB-D images, in D. Filev et al. (Eds.), Proceedings of the 7th IEEE International Conference Intelligent Systems IS’2014, Advances in Intelligent Systems and Computing, Vol. 323, Springer, Berlin/Heidelberg, pp. 605–616.
  • [19] Kawewong, A., Tongprasit, N. and Hasegawa, O. (2013). A speeded-up online incremental vision-based loop-closure detection for long-term SLAM, Advanced Robotics 27(17): 1325–1336.
  • [20] Konolige, K. (2010). Sparse bundle adjustment, Proceedings of the British Machine Vision Conference, Aberystwyth, UK, pp. 102.1–102.11, DOI: 10.5244/C.24.102.
  • [21] Konolige, K., Agrawal, M. and Sola, J. (2011). Large-scale visual odometry for rough terrain, in M. Kaneko and Y. Nakamura (Eds.), Robotics Research, Springer, Berlin/Heidelberg, pp. 201–212.
  • [22] Koshy, G. (2014). Calculating OpenGL perspective matrix from OpenCV intrinsic matrix, http://kgeorge.github.io.
  • [23] Krainin, M., Henry, P., Ren, X. and Fox, D. (2010). Manipulator and object tracking for in hand 3D object modeling, Technical Report UW-CSE-10-09-01, University of Washington, Seattle, WA.
  • [24] Kummerle, R., Grisetti, G., Strasdat, H., Konolige, K. and Burgard, W. (2011). g2o: A general framework for graph optimization, 2011 IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, pp. 3607–3613.
  • [25] Leutenegger, S., Chli, M. and Siegwart, R. (2011). BRISK: Binary robust invariant scalable keypoints, 2011 IEEE International Conference on Computer Vision (ICCV), Barcelona, Spain, pp. 2548–2555.
  • [26] Lowe, D.G. (1999). Object recognition from local scale-invariant features, Proceedings of the International Conference on Computer Vision, ICCV’99, Kerkyra, Greece, Vol. 2, pp. 1150–1157.
  • [27] Luck, J., Little, C. and Hoff, W. (2000). Registration of range data using a hybrid simulated annealing and iterative closest point algorithm, ICRA’00: IEEE International Conference on Robotics and Automation, San Francisco, CA, USA, Vol. 4, pp. 3739–3744.
  • [28] Łępicka, M., Kornuta, T. and Stefańczyk, M. (2016). Utilization of colour in ICP-based point cloud registration, in R. Burduk et al. (Eds.), Proceedings of the 9th International Conference on Computer Recognition Systems (CORES 2015), Advances in Intelligent Systems and Computing, Vol. 403, Springer, Berlin/Heidelberg, pp. 821–830.
  • [29] Mair, E., Hager, G.D., Burschka, D., Suppa, M. and Hirzinger, G. (2010). Adaptive and generic corner detection based on the accelerated segment test, Computer Vision (ECCV 2010), Hersonissos, Greece, pp. 183–196.
  • [30] Marder-Eppstein, E., Berger, E., Foote, T., Gerkey, B. and Konolige, K. (2010). The office marathon: Robust navigation in an indoor office environment, 2010 IEEE International Conference on Robotics and Automation (ICRA), Anchorage, AK, USA, pp. 300–307.
  • [31] Martínez Mozos, O., Triebel, R., Jensfelt, P., Rottmann, A. and Burgard, W. (2007). Supervised semantic labeling of places using information extracted from sensor data, Robotics and Autonomous Systems 55(5): 391–402.
  • [32] May, S., Dröschel, D., Fuchs, S., Holz, D. and Nuchter, A. (2009). Robust 3D-mapping with time-of-flight cameras, IEEE/RSJ International Conference on Intelligent Robots and Systems, 2009, St. Louis, MO, USA, pp. 1673–1678.
  • [33] Men, H., Gebre, B. and Pochiraju, K. (2011). Color point cloud registration with 4D ICP algorithm, 2011 IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, pp. 1511–1516.
  • [34] Miksik, O. and Mikolajczyk, K. (2012). Evaluation of local detectors and descriptors for fast feature matching, 21st International Conference on Pattern Recognition (ICPR), Tsukuba, Japan, pp. 2681–2684.
  • [35] Muja, M. and Lowe, D.G. (2009). Fast approximate nearest neighbors with automatic algorithm configuration, International Conference on Computer Vision Theory and Application VISSAPP’09, Lisbon, Portugal, pp. 331–340.
  • [36] Muja, M. and Lowe, D.G. (2012). Fast matching of binary features, 9th Conference on Computer and Robot Vision (CRV), Toronto, Canada, pp. 404–410.
  • [37] Nistér, D., Naroditsky, O. and Bergen, J. (2004). Visual odometry, Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2004, Washington, DC, USA, Vol. 1, pp. I–652.
  • [38] Nowicki, M. and Skrzypczyński, P. (2014). Performance comparison of point feature detectors and descriptors for visual navigation on android platform, 2014 International Wireless Communications and Mobile Computing Conference (IWCMC), Nicosia, Cyprus, pp. 116–121.
  • [39] Pfister, H., Zwicker, M., van Baar, J. and Gross, M. (2000). Surfels: Surface elements as rendering primitives, Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH’00, New Orleans, LA, USA, pp. 335–342.
  • [40] Pomerleau, F., Colas, F., Siegwart, R. and Magnenat, S. (2013). Comparing ICP variants on real-world data sets, Autonomous Robots 34(3): 133–148.
  • [41] Quigley, M., Gerkey, B., Conley, K., Faust, J., Foote, T., Leibs, J., Berger, E., Wheeler, R. and Ng, A. (2009). ROS: An open-source robot operating system, Proceedings of the Open-Source Software Workshop at the International Conference on Robotics and Automation (ICRA), Kobe, Japan.
  • [42] Rosten, E. and Drummond, T. (2006). Machine learning for high-speed corner detection, in A. Leonardis et al. (Eds.), Computer Vision—ECCV 2006, Lecture Notes in Computer Science, Vol. 3951, Springer, Berlin/Heidelberg, pp. 430–443.
  • [43] Rusu, R.B., Bradski, G., Thibaux, R. and Hsu, J. (2010). Fast 3D recognition and pose using the viewpoint feature histogram, 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Taipei, Taiwan, pp. 2155–2162.
  • [44] Rusu, R.B. and Cousins, S. (2011). 3D is here: Point Cloud Library (PCL), International Conference on Robotics and Automation, Shanghai, China, pp. 1–4.
  • [45] Shi, J. and Tomasi, C. (1994). Good features to track, 1994 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR’94, Seattle, WA, USA, pp. 593–600.
  • [46] Sipiran, I. and Bustos, B. (2011). Harris 3D: A robust extension of the Harris operator for interest point detection on 3D meshes, The Visual Computer 27(11): 963–976.
  • [47] Skrzypczyński, P. (2009). Simultaneous localization and mapping: A feature-based probabilistic approach, International Journal of Applied Mathematics and Computer Science 19(4): 575–588, DOI: 10.2478/v10006-009-0045-z.
  • [48] Ahn, S.H. (2013). OpenGL projection matrix, http://www.songho.ca/opengl/gl_projectionmatrix.html.
  • [49] Steder, B., Rusu, R.B., Konolige, K. and Burgard, W. (2011). Point feature extraction on 3D range scans taking into account object boundaries, 2011 IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, pp. 2601–2608.
  • [50] Thrun, S. and Leonard, J.J. (2008). Simultaneous localization and mapping, in B. Siciliano and O. Khatib (Eds.), Handbook of Robotics, Springer, Berlin/Heidelberg, pp. 871–890.
  • [51] Tombari, F., Salti, S. and Di Stefano, L. (2010). Unique signatures of histograms for local surface description, in K. Daniilidis et al. (Eds.), Computer Vision—ECCV 2010, Lecture Notes in Computer Science, Vol. 6314, Springer-Verlag, Berlin/Heidelberg, pp. 356–369.
  • [52] Tombari, F., Salti, S. and Di Stefano, L. (2011). A combined texture-shape descriptor for enhanced 3D feature matching, 18th IEEE International Conference on Image Processing (ICIP), Brussels, Belgium, pp. 809–812.
  • [53] Triebel, R., Pfaff, P. and Burgard, W. (2006). Multi-level surface maps for outdoor terrain mapping and loop closing, 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, pp. 2276–2282.
  • [54] Weise, T., Wismer, T., Leibe, B. and Van Gool, L. (2009). In-hand scanning with online loop closure, IEEE 12th International Conference on Computer Vision Workshops (ICCV Workshops), Kyoto, Japan, pp. 1630–1637.
  • [55] Whelan, T., Johannsson, H., Kaess, M., Leonard, J. and McDonald, J. (2013). Robust real-time visual odometry for dense RGB-D mapping, 2013 IEEE International Conference on Robotics and Automation (ICRA), Karlsruhe, Germany, pp. 5724–5731.
  • [56] Wurm, K.M., Hornung, A., Bennewitz, M., Stachniss, C. and Burgard, W. (2010). OctoMap: A probabilistic, flexible, and compact 3D map representation for robotic systems, ICRA 2010 Workshop, Taipei, Taiwan.
  • [57] Zhang, Z. (1994). Iterative point matching for registration of free-form curves and surfaces, International Journal of Computer Vision 13(2): 119–152.
Notes
Prepared with funds of the Ministry of Science and Higher Education (MNiSW) under agreement 812/P-DUN/2016 for science-dissemination activities.
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-aa65b20d-7f04-4da5-8232-35f4cb7b5271