Article title

On-line range images registration with GPGPU

Identifiers
Title variants
Publication languages
EN
Abstracts
EN
This paper concerns the implementation of algorithms for two important aspects of modern 3D data processing: data registration and segmentation. The solution proposed for the first problem is based on 3D space decomposition, while the second relies on image processing and local neighbourhood search. Data processing is implemented using NVIDIA compute unified device architecture (NVIDIA CUDA) parallel computation. The result of the segmentation is a coloured map in which different colours correspond to different objects, such as walls, floor and stairs. The research is related to the problem of collecting 3D data with an RGB-D camera mounted on a rotating head, for use in mobile robot applications. The data registration algorithm is designed for on-line processing. The iterative closest point (ICP) approach is chosen as the registration method. Computations are based on a parallel fast nearest neighbour search. This procedure decomposes 3D space into cubic buckets, so the matching time is deterministic. The first segmentation technique uses accelerometers integrated with the RGB-D sensor to compensate for rotation, and an image processing method to define prerequisites of the known categories. The second technique uses the adapted nearest neighbour search procedure to obtain a normal vector for each range point.
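The bucket-based nearest neighbour search described in the abstract can be illustrated with a minimal CPU sketch (the function names, cube size, and the fixed 27-cell search window are assumptions for illustration; the paper's version performs these lookups in parallel with NVIDIA CUDA):

```python
import numpy as np

def build_buckets(points, cube_size):
    """Decompose 3D space into cubic buckets: map each point to an
    integer (i, j, k) cell index and group point indices per cell."""
    cells = np.floor(points / cube_size).astype(int)
    buckets = {}
    for idx, cell in enumerate(map(tuple, cells)):
        buckets.setdefault(cell, []).append(idx)
    return buckets

def nearest_neighbour(query, points, buckets, cube_size):
    """Scan the query's bucket and its 26 face/edge/corner neighbours.
    Every lookup touches a fixed number of cells, so the time per
    matched point is deterministic (unlike a k-d tree descent)."""
    ci, cj, ck = np.floor(query / cube_size).astype(int)
    best_idx, best_d2 = -1, np.inf
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            for dk in (-1, 0, 1):
                for idx in buckets.get((ci + di, cj + dj, ck + dk), ()):
                    d2 = np.sum((points[idx] - query) ** 2)
                    if d2 < best_d2:
                        best_idx, best_d2 = idx, d2
    return best_idx, np.sqrt(best_d2)
```

In an ICP loop such lookups would run once per point of the new range image to find correspondences in the reference cloud; the deterministic per-point cost is what makes the on-line performance target of the paper feasible.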
Keywords
Year
Pages
52–62
Physical description
Bibliography: 62 items; figures, illustrations, charts
Authors
  • Industrial Research Institute for Automation and Measurements, 202 Al. Jerozolimskie Str., 02–486 Warsaw, Poland
author
  • Faculty of Electronics and Information Technology, Warsaw University of Technology, 15/19 Nowowiejska Str., 00–665 Warsaw, Poland
Bibliografia
  • 1. P. Henry, M. Krainin, E. Herbst, X. Ren, and D. Fox, „RGB-D mapping: Using depth cameras for dense 3D modeling of indoor environments”, in: International Symposium on Experimental Robotics, 2010. DOI http://www.seattle.intel-research.net/RGBD/RGBD-RSS2010/papers/henry-RGBD10-RGBD-mapping.pdf.
  • 2. A. Nüchter, O. Wulf, K. Lingemann, J. Hertzberg, B. Wagner, and H. Surmann, „3D mapping with semantic knowledge”, Proc. Robocup Int. Symp., pp. 335-346, Osaka, 2005.
  • 3. A. Nüchter and J. Hertzberg, „Towards semantic maps for mobile robots”, Robot. Auton. Syst. 56, 915-926 (2008).
  • 4. O. Grau, „A scene analysis system for the generation of 3-D models”, Proc. IEEE Int. Conf. Recent Advances in 3-D Digital Imaging and Modeling, Washington, p. 221 (1997).
  • 5. A. Nüchter, H. Surmann, K. Lingemann, J. Hertzberg, „Semantic scene analysis of scanned 3d indoor environments”, in:Proc. 8th Int. Fall Workshop on Vision, Modeling and Visualization, 2003. DOI http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.11.2503.
  • 6. H. Cantzler, R. B. Fisher, and M. Devy, „Quality enhancement of reconstructed 3D models using coplanarity and constraints”, Proc. Symposium Pattern Recogn., Springer-Verlag, London, pp. 34-41, 2002.
  • 7. M. A. Fischler and R. Bolles, „Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography”, Proc. Image Understanding Workshop, edited by L. S. Baumann, Science Applications, McLean, pp. 71–88, 1980.
  • 8. M. Eich, M. Dabrowska and F. Kirchner, „Semantic labeling: Classification of 3d entities based on spatial feature descriptors”, in: IEEE Int. Conf. on Robotics and Automation, Anchorage, 2010. DOI http://www.best-of-robotics.org/pages/BRICS-events/icra2010/SemanticLabeling2010Eich.pdf.
  • 9. N. Vaskevicius, A. Birk, K. Pathak, and J. Poppinga, „Fast detection of polygons in 3D point clouds from noise-prone range sensors”, IEEE Int. Workshop Safety, Security and Rescue Robotics, pp. 1–6, Rome, 2007.
  • 10. H. Andreasson, R. Triebel, and W. Burgard, „Improving plane extraction from 3D data by fusing laser data and vision”, Int. Symp. on Experimental Robotics, pp. 2656–2661, Edmonton, 2005.
  • 11. J. Oberlander, K. Uhl, J. M. Zollner, and R. Dillmann, „A region-based slam algorithm capturing metric, topological, and semantic properties”, IEEE Int. Conf. Robotics and Automation, pp. 1886–189, Pasadena, 2008.
  • 12. S. Se, D. Lowe, and J. Little, „Mobile robot localization and mapping with uncertainty using scale-invariant visual landmarks”, Int. J. Robot. Res. 21, 735–758 (2002).
  • 13. A. Davison, Y. G. Cid, N. Kita, „Real-time 3D SLAM with wide-angle vision”, in: Proc. IFAC Symposium on Intelligent Autonomous Vehicles, Lisbon, (2004). DOI http://www.robots.ox.ac.uk:5000/ActiveVision/Publications/davison_etal_iav2004/davison_etal_iav2004.pdf.
  • 14. R. O. Castle, G. Klein, and D. W. Murray, „Combining monoSLAM with object recognition for scene augmentation using a wearable camera”. Image Vision Comput. 28, 1548–1556, (2010).
  • 15. B. Williams and I. Reid, „On combining visual slam and visual odometry”, in: Proc. International Conference on Robotics and Automation, (2010). http://www.robots.ox.ac.uk:5000/ActiveVision/Publications/williams_reid_icra2010/williams_reid_icra2010.pdf.
  • 16. H. Andreasson, „Local visual feature based localisation and mapping by mobile robots”, PhD. thesis, Orebro University, School of Science and Technology, 2008.
  • 17. L. Pedraza, G. Dissanayake, J. V. Miro, D. Rodriguez-Losada and F. Matia, „Bs-slam: Shaping the world”, in: Proc. of Robotics: Science and Systems, Atlanta, GA, USA, (2007). DOI http://www.roboticsproceedings.org/rss03/p22.pdf.
  • 18. S. Thrun, W. Burgard, and D. Fox, „A real-time algorithm for mobile robot mapping with applications to multi-robot and 3d mapping”, Proc. Int. Conf. on Robotics and Automation, pp. 321–328, San Francisco, 2000.
  • 19. M. Magnusson, H. Andreasson, A. Nüchter, and A. J. Lilienthal, „Automatic appearance-based loop detection from 3D laser data using the normal distributions transform”, J. Field Robot. 26, 892–914 (2009).
  • 20. H. Andreasson and A. J. Lilienthal, „Vision aided 3D laser based registration”, Proc. European Conf. on Mobile Robots, pp. 192–197, Freiburg, 2007.
  • 21. P. J. Besl and H. D. McKay, „A method for registration of 3-D shapes”, IEEE T. Pattern Anal. Machine Intell., Special Issue on Interpretation of 3-D Scenes II, 14, 239–256 (1992).
  • 22. D. Hähnel and W. Burgard, „Probabilistic matching for 3D scan registration”, In: Proc. of the VDI - Conf. Robotik, 2002. DOI http://www.informatik.uni-freiburg.de/~burgard/postscripts/haehnel-robotik2002.pdf.
  • 23. R. B. Rusu, Z. C. Marton, N. Blodow, M. Dolha, and M. Beetz, „Towards 3D point cloud based object maps for household environments”, Robot. Auton. Syst. 56, 927–941, (2008).
  • 24. M. Magnusson, T. Duckett, and A. J. Lilienthal, „3D scan registration for autonomous mining vehicles”, J. Field Robot. 24, 803–827 (2007).
  • 25. A. Nüchter, H. Surmann, and J. Hertzberg, „Automatic model refinement for 3D reconstruction with mobile robots”, in 4th Int. Conf. on 3-D Digital Imaging and Modeling, p. 394, Banff, Canada, 2003.
  • 26. P. Pfaff, R. Triebel, and W. Burgard, „An efficient extension of elevation maps for outdoor terrain mapping”, Proc. Int. Conf. on Field and Service Robotics, pp. 165–176, Port Douglas, Australia, 2005.
  • 27. M. Montemerlo and S. Thrun, „A multi-resolution pyramid for outdoor robot terrain perception”, Proc. of the 19th National Conference on Artificial Intelligence, pp. 464–469, San Jose, 2004.
  • 28. T. Stoyanov and A. J. Lilienthal, „Maximum likelihood point cloud acquisition from a rotating laser scanner on a moving platform”, in: Proc. IEEE Int. Conf. on Advanced Robotics, pp. 1–6, 2009.
  • 29. P. Kohlhepp, P. Pozzo, M. Walther, and R. Dillmann, „Sequential 3D-slam for mobile action planning”, Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Sendai, Japan, pp. 722–729, 2004.
  • 30. M. Walther, P. Steinhaus, and R. Dillmann, „A foveal 3D laser scanner integrating texture into range data”, IEEE Ind. Applic. Soc. pp. 748–755, Valencia, 2006.
  • 31. D. Huber and M. Hebert, „Fully automatic registration of multiple 3D data sets”, Image Vision Comput. 21, 637–650, (2003).
  • 32. A. W. Fitzgibbon, „Robust registration of 2D and 3D point sets”, British Machine Vision Conference, pp. 411–420, Manchester, 2001.
  • 33. M. Magnusson and T. Duckett, „A comparison of 3D registration algorithms for autonomous underground mining vehicles”, Proc. European Conf. on Mobile Robots, pp. 86–91, Ancona, 2005.
  • 34. S.-Y. Park, S.-I. Choi, J. Kim, and J. Chae, „Real-time 3d registration using GPU”, Machine Vision and Applications 1–14 (2010).
  • 35. S.-Y. Park and M. Subbarao, „An accurate and fast point-to-plane registration technique”, Pattern Recogn. Lett. 24, 2967–2976 (2003).
  • 36. A. Nüchter, K. Lingemann, and J. Hertzberg, „Cached k-d tree search for ICP algorithms”, 3rd Int. Conf. on 3D Digital Imaging and Modeling, IEEE Computer Society, Washington, pp. 419–426, 2007.
  • 37. S. Rusinkiewicz and M. Levoy, „Efficient variants of the ICP algorithm”, in: Third International Conference on 3D Digital Imaging and Modeling (3DIM), 2001, DOI http://www.cs.princeton.edu/~smr/papers/fasticp/fasticp_paper.pdf.
  • 38. D. Qiu, S. May, and A. Nüchter, „Gpu-accelerated nearest neighbour search for 3D registration”, Proc. of the 7th Int. Conf. on Computer Vision System, Springer-Verlag, Berlin, Heidelberg, pp. 194–203, 2009.
  • 39. S. Arya and D. M. Mount, „Algorithms for fast vector quantization”, Proc. of Data Compression Conference, edited by J. A. Storer and M. Cohn IEEE Press, pp. 381–390, New York, 1993.
  • 40. T. J. Purcell, C. Donner, M. Cammarano, H. W. Jensen, and P. Hanrahan, „Photon mapping on programmable graphics hardware”, Proc. of the ACM SIGGRAPH/EUROGRAPHICS Conference on Graphics Hardware, Eurographics Association, pp. 41–50, Aire-la-Ville, Switzerland, 2003.
  • 41. V. Garcia, E. Debreuv, and M. Barlaud, „Fast k nearest neighbour search using gpu”, IEEE Proc. Cvpr. Workshops, pp. 1–6, 2008.
  • 42. T. Foley and J. Sugerman, „Kd-tree acceleration structures for a gpu raytracer”, Proc. of the ACM SIGGRAPH/EUROGRAPHICS Conf. on Graphics Hardware, ACM, pp. 15–22, New York, 2005.
  • 43. H. W. Jensen, Realistic Image Synthesis Using Photon Mapping, edited by A. K. Peters, Ltd., Natick, USA, 2001.
  • 44. R. B. Rusu, N. Blodow, and M. Beetz, „Fast point feature histograms (FPFH) for 3D registration”, Proc. IEEE Int. Conf. on Robotics and Automation, IEEE Press, pp. 1848–1853, Piscataway, USA, 2009.
  • 45. R. B. Rusu, Z. C. Marton, N. Blodow, and M. Beetz, „Learning informative point classes for the acquisition of object model maps”, Proc. of the 10th Int. Conf. on Control, Automation, Robotics and Vision, pp. 17–20, Hanoi, 2008.
  • 46. R. B. Rusu, N. Blodow, Z. C. Marton, and M. Beetz, „Aligning point cloud views using persistent feature histograms”, Proc. of the 21st IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 22–26, Nice, 2008.
  • 47. R. B. Rusu, Z. C. Marton, N. Blodow, and M. Beetz, „Persistent Point Feature Histograms for 3D Point Clouds”, in: Proc. of the 10th Int. Conf. on Intelligent Autonomous Systems, Baden-Baden, Germany, 2008, DOI http://ias.cs.tum.edu/_media/spezial/bib/rusu08ias.pdf.
  • 48. H. Pottmann, S. Leopoldseder, and M. Hofer, „Registration without icp”, Computer Vision and Image Understanding 95, 54–71, 2002.
  • 49. NVIDIA CUDA C Programming Guide 3.2, http://www.nvidia.com/cuda (2010).
  • 50. CUDA C Best Practices Guide 3.2, http://www.nvidia.com/cuda (2010).
  • 51. http://nicolas.burrus.name/index.php/Research/KinectCalibration (2011).
  • 52. http://www.nvidia.com/cuda (2010).
  • 53. N. Satish, M. Harris and M. Garland, „Designing efficient sorting algorithms for manycore gpus”, NVIDIA Technical Report NVR-2008-001, NVIDIA Corporation (2008).
  • 54. N. Satish, M. Harris, and M. Garland, „Designing efficient sorting algorithms for manycore gpus”, Proc. of the IEEE Int. Symp. on Parallel and Distributed Processing, IEEE Computer Society, pp. 1–10, Washington, 2009.
  • 55. M. Harris, S. Sengupta, and J. D. Owens, „GPU Gems 3”, Parallel Prefix Sum (Scan) with CUDA, edited by Addison-Wesley, 39, pp. 851–876, New York, 2007.
  • 56. D. P. Berrar, W. Dubitzky, and M. Granzow, „A Practical Approach to Microarray Data Analysis, Singular Value Decomposition And Principal Component Analysis”, 1st Edition, Springer Publishing Company, Inc., 5, pp. 1–18, New York, 2009.
  • 57. http://opencv.willowgarage.com/wiki/.
  • 58. S.-W. Lee and Y. J. Kim, „Direct extraction of topographic features for gray scale character recognition”, IEEE T. Pattern Anal. Mach. Intell. 17, 724–729 (1995).
  • 59. L. Wang and T. Pavlidis, „Direct gray-scale extraction of features for character recognition”, IEEE T. Pattern Anal. Mach. Intell. 15, 1053–1067 (1993).
  • 60. A. Nüchter, K. Lingemann, J. Hertzberg, and H. Surmann, „6D SLAM - 3D mapping outdoor environments” J. Field Robot., Special Issue on Quantitative Performance Evaluation of Robotic and Intelligent Systems, edited by Wiley & Son, 24, pp. 699–722, Chichester, 2007.
  • 61. J. Elseberg, D. Borrmann, and A. Nüchter, „Efficient processing of large 3d point clouds”, Proc. of the 23rd Int. Symp. on Information, Communication and Automation Technologies, IEEE Xplore, ISBN 978-1-4577-0746-9, Sarajevo, 2011.
  • 62. A. Nüchter, S. Feyzabadi, D. Qiu, and S. May, „SLAM à la carte - GPGPU for globally consistent scan matching”, in: Proc. of the 4th European Conf. on Mobile Robots (ECMR'11), Örebro, Sweden, 2011, DOI http://aass.oru.se/Agora/ECMR2011/proceedings/papers/ECMR2011_0002.pdf.
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-article-BWAD-0033-0004