Article title

A set of depth sensor processing ROS tools for wheeled mobile robot navigation

Content
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
The paper presents a set of software tools dedicated to supporting mobile robot navigation. The tools process images from a depth sensor. They are implemented in the ROS framework and are compatible with the standard ROS navigation packages. The software is released under an open-source licence. The first tool converts a 3D depth image to a 2D scan in polar coordinates. It projects obstacles onto the scan plane, removes the ground plane from the image, and compensates for the sensor tilt angle. The node is faster than the standard ROS node and offers additional functions that broaden its range of applications. The second tool detects negative obstacles, i.e. obstacles located below the ground plane level. The third tool estimates the height and orientation of the sensor by applying the RANSAC algorithm to the depth image. The paper also presents results of using the tools on mobile platforms equipped with Microsoft Kinect sensors. The platforms are part of the ReMeDi project, within which the software was developed.
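To make the described pipeline concrete, the sketch below shows in plain NumPy the chain the abstract outlines: back-projecting a depth image to 3D points, fitting the ground plane with RANSAC [14] to recover the sensor height and tilt, and projecting the remaining obstacle points into a 2D polar scan. This is a minimal illustration under stated assumptions, not the released depth_nav_tools code; the function names, camera intrinsics (fx, fy, cx, cy) and thresholds (tol, min_h, max_h, fov) are placeholders for the example.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    # Back-project a depth image (metres) to 3-D points in the optical
    # frame: x right, y down, z forward.
    v, u = np.indices(depth.shape)
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack((x, y, z), axis=-1).reshape(-1, 3)
    return pts[np.isfinite(pts).all(axis=1) & (pts[:, 2] > 0.0)]

def ransac_ground_plane(pts, iters=200, tol=0.03, seed=0):
    # Plain RANSAC plane fit [14]: find a unit normal n and offset d with
    # n . p + d ~ 0 for ground points. With the normal oriented towards
    # the camera, d is the sensor height and n encodes its tilt.
    rng = np.random.default_rng(seed)
    best_n, best_d, best_count = None, None, 0
    for _ in range(iters):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        n = n / norm
        d = -np.dot(n, p0)
        count = int((np.abs(pts @ n + d) < tol).sum())
        if count > best_count:
            best_n, best_d, best_count = n, d, count
    if best_d is not None and best_d < 0.0:
        best_n, best_d = -best_n, -best_d  # make the camera side positive
    return best_n, best_d

def points_to_scan(pts, n, d, beams=360, fov=np.deg2rad(58.0),
                   min_h=0.05, max_h=1.5):
    # h is the signed height of each point above the fitted ground plane;
    # points with h < -min_h would be "negative obstacles" (the second tool).
    h = pts @ n + d
    obst = pts[(h > min_h) & (h < max_h)]
    bearing = np.arctan2(obst[:, 0], obst[:, 2])
    dist = np.hypot(obst[:, 0], obst[:, 2])
    scan = np.full(beams, np.inf)
    idx = ((bearing + fov / 2.0) / fov * beams).astype(int)
    ok = (idx >= 0) & (idx < beams)
    np.minimum.at(scan, idx[ok], dist[ok])  # keep the nearest return per beam
    return scan
```

In a ROS node the resulting scan array would presumably be published as a sensor_msgs/LaserScan; the actual tools additionally compensate the sensor tilt angle, which this sketch only captures implicitly through the fitted plane.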
Keywords
Authors
author
  • Department of Cybernetics and Robotics, Faculty of Electronics, Wrocław University of Technology, ul. Wybrzeże Wyspiańskiego 27, 50-370 Wrocław, Poland
author
  • Department of Cybernetics and Robotics, Faculty of Electronics, Wrocław University of Technology, ul. Wybrzeże Wyspiańskiego 27, 50-370 Wrocław, Poland
Bibliography
  • [1] “ReMeDi EU Project”. http://www.remedi-project.eu.
  • [2] Y. Bi, J. Li, H. Qin, M. Lan, M. Shan, F. Lin, B. M. Chen, “An MAV localization and mapping system based on dual Realsense cameras”. In: P. Z. Peng, D. F. Lin, eds., International Micro Air Vehicle Competition and Conference 2016, Beijing, P.R. of China, 2016, 50–55.
  • [3] K. Arent, J. Jakubiak, M. Drwięga, M. Cholewiński, G. Stollnberger, M. Giuliani, M. Tscheligi, D. Szczęśniak-Stańczyk, M. Janowski, W. Brzozowski, A. Wysokiński, “Control of mobile robot for remote medical examination: Design concepts and users’ feedback from experimental studies”. In: 2016 9th International Conference on Human System Interactions (HSI), 2016, 76–82. DOI: 10.1109/HSI.2016.7529612.
  • [4] K. Berger, “A State of the Art Report on Multiple RGB-D Sensor Research and on Publicly Available RGB-D Datasets”, 27–44. Springer International Publishing, Cham, 2014.
  • [5] J. Biswas, M. Veloso, “Depth camera based indoor mobile robot localization and navigation”. In: Proceedings of IEEE International Conference on Robotics and Automation, 2012, 1697–1702.
  • [6] K. Bohlmann, A. Beck-Greinwald, S. Buck, H. Marks, A. Zell, “Autonomous person following with 3D LIDAR in outdoor environment”, Journal of Automation, Mobile Robotics and Intelligent Systems, vol. 7, no. 2, 2013, 24–29.
  • [7] D. Borrmann, J. Elseberg, K. Lingemann, A. Nüchter, “The 3D Hough transform for plane detection in point clouds: A review and a new accumulator design”, 3D Res., 2011, 32:1–32:13. DOI: 10.1007/3DRes.02(2011)3.
  • [8] P. Bovbel, T. Foote. “ROS package pointcloud_to_laserscan”. http://wiki.ros.org/pointcloud_to_laserscan.
  • [9] S. Choi, T. Kim, W. Yu, “Performance Evaluation of RANSAC Family”. In: Proceedings of the British Machine Vision Conference, 2009, 81.1–81.12. DOI: 10.5244/C.23.81.
  • [10] M. Drwięga. “Navigation tools”. http://wiki.ros.org/depth_nav_tools.
  • [11] F. Endres, J. Hess, J. Sturm, D. Cremers, W. Burgard, “3-D mapping with an RGB-D camera”, IEEE Transactions on Robotics, vol. 30, no. 1, 2014, 177–187. DOI: 10.1109/TRO.2013.2279412.
  • [12] F. Endres, J. Hess, N. Engelhard, J. Sturm, D. Cremers, W. Burgard, “An evaluation of the RGB-D SLAM system”. In: Robotics and Automation (ICRA), 2012 IEEE International Conference on, 2012, 1691–1696.
  • [13] P. Fankhauser, M. Bloesch, D. Rodriguez, R. Kaestner, M. Hutter, R. Siegwart, “Kinect v2 for mobile robot navigation: Evaluation and modeling”. In: 2015 International Conference on Advanced Robotics (ICAR), 2015, 388–394. DOI: 10.1109/ICAR.2015.7251485.
  • [14] M. Fischler, R. Bolles, “Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography”, Commun. ACM, vol. 24, no. 6, 1981, 381–395. DOI: 10.1145/358669.358692.
  • [15] J. Han, L. Shao, D. Xu, J. Shotton, “Enhanced computer vision with Microsoft Kinect sensor: A review”, IEEE Transactions on Cybernetics, vol. 43, no. 5, 2013, 1318–1334. DOI: 10.1109/TCYB.2013.2265378.
  • [16] J. Jakubiak, M. Drwięga, A. Kurnicki, “Development of a mobile platform for a remote medical teleoperation robot”. In: Proc 21st Int Conf Methods and Models in Automation and Robotics (MMAR), 2016, 1137–1142. DOI: 10.1109/MMAR.2016.7575298.
  • [17] J. Jakubiak, M. Drwięga, B. Stańczyk, “Control and perception system for ReMeDi robot mobile platform”. In: Proc 20th Int Conf Methods and Models in Automation and Robotics (MMAR), 2015.
  • [18] A. Kadambi, A. Bhandari, R. Raskar, “3D Depth Cameras in Vision: Benefits and Limitations of the Hardware”. In: Computer Vision and Machine Learning with RGB-D Sensors, Springer, 2014.
  • [19] K. Kamarudin, S. Mamduh, A. Shakaff, S. Saad, A. Zakaria, A. Abdullah, L. Kamarudin, “Method to convert Kinect’s 3D depth data to a 2D map for indoor SLAM”. In: IEEE 9th Int Coll Signal Processing and its Applications (CSPA), 2013, 247–251.
  • [20] K. Kamarudin, S. Mamduh, A. Shakaff, A. Zakaria, “Performance analysis of the Microsoft Kinect sensor for 2D simultaneous localization and mapping (SLAM) techniques”, Sensors, vol. 14, no. 12, 2014, 23365–23387. DOI: 10.3390/s141223365.
  • [21] D. Kırcalı, F. B. Tek, “Ground plane detection using an RGB-D sensor”. In: T. Czachórski, E. Gelenbe, R. Lent, eds., Information Sciences and Systems 2014, Cham, 2014. DOI: 10.1007/978-3-319-09465-6_8.
  • [22] E. Lachat, H. Macher, M.-A. Mittet, T. Landes, P. Grussenmeyer, “First Experiences with Kinect v2 Sensor for Close Range 3D Modelling”, ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2015, 93–100. DOI: 10.5194/isprsarchives-XL-5-W4-93-2015.
  • [23] P. Łabęcki, D. Belter, “System calibration method for a walking robot”, Journal of Automation, Mobile Robotics and Intelligent Systems, vol. 7, no. 2, 2013, 39–45.
  • [24] K. Okada, S. Kagami, M. Inaba, H. Inoue, “Plane segment finder: algorithm, implementation and applications”. In: Proc. IEEE Int. Conf. Robotics and Automation, vol. 2, 2001, 2120–2125. DOI: 10.1109/ROBOT.2001.932920.
  • [25] A. Oliver, S. Kang, B. Wünsche, B. MacDonald, “Using the Kinect as a navigation sensor for mobile robotics”. In: Proceedings of the 27th Conference on Image and Vision Computing New Zealand, 2012, 509–514. DOI: 10.1145/2425836.2425932.
  • [26] S. Oßwald, J. S. Gutmann, A. Hornung, M. Bennewitz, “From 3D point clouds to climbing stairs: A comparison of plane segmentation approaches for humanoids”. In: 11th IEEE-RAS Int Conf Humanoid Robots, 2011, 93–98. DOI: 10.1109/Humanoids.2011.6100836.
  • [27] PCL. “Point Cloud Library”. http://pointclouds.org.
  • [28] A. Peer, M. Buss, B. Stańczyk, D. Szczęśniak-Stańczyk, W. Brzozowski, A. Wysokiński, M. Tscheligi, C. A. Avizzano, E. Ruffaldi, L. van Gool, A. Fossati, K. Arent, J. Jakubiak, M. Janiak, “Towards a remote medical diagnostician for medical examination”. In: NextMed MMVR21, 2014.
  • [29] N. Rafibakhsh, J. Gong, M. K. Siddiqui, C. Gordon, H. F. Lee, “Analysis of Xbox Kinect sensor data for use on construction sites: Depth accuracy and sensor interference assessment”. In: Construction Research Congress, 2012. DOI: 10.1061/9780784412329.086.
  • [30] C. Rockey. “ROS package depthimage_to_laserscan”. http://wiki.ros.org/depthimage_to_laserscan.
  • [31] ROS. “Robot Operating System”. http://www.ros.org.
  • [32] T. Wiedemeyer. “IAI Kinect2”. https://github.com/code-iai/iai_kinect2, 2014 – 2015.
  • [33] M. Y. Yang, W. Förstner, “Plane detection in point cloud data”. Technical report, Department of Photogrammetry, University of Bonn, 2010.
  • [34] Z. Zhang, “Microsoft Kinect Sensor and Its Effect”, IEEE MultiMedia, vol. 19, no. 2, 2012, 4–10.
  • [35] S. Zug, F. Penzlin, A. Dietrich, T. T. Nguyen, S. Albert, “Are laser scanners replaceable by Kinect sensors in robotic applications?”. In: Robotic and Sensors Environments (ROSE), 2012 IEEE International Symposium on, 2012, 144–149. DOI: 10.1109/ROSE.2012.6402619.
Notes
Prepared with funds of the Ministry of Science and Higher Education (MNiSW) under agreement 812/P-DUN/2016 for activities disseminating science (2017 tasks).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-4928a395-986b-4504-82b9-0af4837d456f