Article title

Annotation-free Generation of Training Data Using Mixed Domains for Segmentation of 3D LiDAR Point Clouds

Identifiers
Title variants
Publication languages
EN
Abstracts
EN
Semantic segmentation is important for robots navigating with 3D LiDARs, but generating training datasets requires tedious manual effort. In this paper, we introduce a set of strategies for efficiently generating large datasets by combining real and synthetic data samples. More specifically, the method populates recorded empty scenes with synthetically generated, navigation-relevant obstacles, thus combining two domains: real life and synthetic. Our approach requires no manual annotation, no detailed knowledge of the actual data feature distribution, and no real-life data of the objects of interest. We validate the proposed method in an underground parking scenario and compare it with available open-source datasets. The experiments show superiority over off-the-shelf datasets with similar data characteristics, but also highlight the difficulty of reaching the level of manually annotated datasets. We also show that combining generated and annotated data noticeably improves performance, especially for cases with rare occurrences of objects of interest. Our solution is suitable for direct application in robotic systems.
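To make the mixed-domain idea concrete, below is a minimal sketch, assuming a numpy-based pipeline, of how a real, obstacle-free LiDAR scan might be merged with a synthetically generated object so that per-point labels come for free. The function name, label IDs, and placement step are illustrative assumptions rather than the authors' implementation; a realistic pipeline would also model sensor occlusion and LiDAR noise (cf. [10]).

```python
# Minimal sketch of mixed-domain sample generation (illustrative, not the
# paper's implementation): paste a synthetic object point cloud into a real
# "empty" scan and label only the synthetic points.
import numpy as np

BACKGROUND, OBSTACLE = 0, 1  # hypothetical semantic label IDs


def mix_domains(empty_scan: np.ndarray,
                synthetic_object: np.ndarray,
                placement: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Combine a real (N, 3) scan with a synthetic (M, 3) object cloud.

    `placement` is an (x, y, z) offset that moves the object onto free
    ground in the recorded scene. Per-point labels require no annotation
    because we know which points are synthetic.
    """
    placed = synthetic_object + placement      # position the object in the scene
    points = np.vstack([empty_scan, placed])   # merged training scan
    labels = np.concatenate([
        np.full(len(empty_scan), BACKGROUND),  # real points: background
        np.full(len(placed), OBSTACLE),        # synthetic points: obstacle
    ])
    return points, labels


# Usage with random stand-in data:
scan = np.random.rand(1000, 3) * 20.0          # stand-in for a recorded empty scene
obj = np.random.rand(200, 3)                   # stand-in for a synthetic obstacle
pts, lbl = mix_domains(scan, obj, np.array([5.0, 2.0, 0.0]))
```

Because every inserted point is known to be synthetic, the annotation step disappears entirely, which is the key property the abstract describes.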
Year
Pages
347-371
Physical description
Bibliography: 28 items, figures, tables.
Authors
author
  • Warsaw University of Technology, Faculty of Electronics and Information Technology, Warsaw, Poland
  • United Robots Sp. z o.o., Warsaw, Poland
  • United Robots Sp. z o.o., Warsaw, Poland
Bibliography
  • [1] Alonso I., Riazuelo L., Montesano L., and Murillo A. C. Domain adaptation in lidar semantic segmentation by aligning class distributions, 2021.
  • [2] Arief H. A., Arief M., Zhang G., Liu Z., Bhat M., Indahl U. G., Tveite H., and Zhao D. Sane: smart annotation and evaluation tools for point cloud data. IEEE Access, 8: 131848-131858, 2020.
  • [3] Badrloo S., Varshosaz M., Pirasteh S., and Li J. Image-based obstacle detection methods for the safe navigation of unmanned vehicles: A review. Remote Sensing, 14(15): 3824, 2022.
  • [4] Behley J., Garbade M., Milioto A., Quenzel J., Behnke S., Stachniss C., and Gall J. Semantickitti: A dataset for semantic scene understanding of lidar sequences. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9297-9307, 2019.
  • [5] Chang A. X., Funkhouser T., Guibas L., Hanrahan P., Huang Q., Li Z., Savarese S., Savva M., Song S., Su H., et al. Shapenet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012, 2015.
  • [6] Choi J., Song Y., and Kwak N. Part-aware data augmentation for 3d object detection in point cloud. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 3391-3397. IEEE, 2021.
  • [7] Cop K., Sułek B., and Trzciński T. Towards efficient generation of data using mixed domains for segmentation of 3d lidar point clouds. In Mańdziuk J., Żychowski A., and Małkiński M., editors, Progress in Polish Artificial Intelligence Research 5: Proceedings of the 5th Polish Conference on Artificial Intelligence (PP-RAI'2024), pages 280-286, Warsaw, Poland, 2024.
  • [8] CVAT. 3d object annotation. https://docs.cvat.ai/docs/manual/basics/3d-object-annotation/. Accessed: 2024-11-28.
  • [9] Dosovitskiy A., Ros G., Codevilla F., Lopez A., and Koltun V. Carla: An open urban driving simulator. In Conference on Robot Learning, pages 1-16. PMLR, 2017.
  • [10] Espadinha J., Lebedev I., Lukic L., and Bernardino A. Lidar data noise models and methodology for sim-to-real domain generalization and adaptation in autonomous driving perception. In 2021 IEEE Intelligent Vehicles Symposium (IV), pages 797-803. IEEE, 2021.
  • [11] Fang J., Zhou D., Yan F., Zhao T., Zhang F., Ma Y., Wang L., and Yang R. Augmented lidar simulator for autonomous driving. IEEE Robotics and Automation Letters, 5(2): 1931-1938, 2020.
  • [12] Gao B., Pan Y., Li C., Geng S., and Zhao H. Are we hungry for 3d lidar data for semantic segmentation? A survey of datasets and methods. IEEE Transactions on Intelligent Transportation Systems, 23(7): 6063-6081, 2021.
  • [13] Hesai. Pandar64. https://www.hesaitech.com/product/pandar64/. Accessed: 2024-11-28.
  • [14] Hurl B., Czarnecki K., and Waslander S. Precise synthetic image and lidar (presil) dataset for autonomous vehicle perception. In 2019 IEEE Intelligent Vehicles Symposium (IV), pages 2522-2529. IEEE, 2019.
  • [15] Ibrahim M., Akhtar N., Wise M., and Mian A. Annotation tool and urban dataset for 3d point cloud semantic segmentation. IEEE Access, 9: 35984-35996, 2021.
  • [16] Koenig N. and Howard A. Design and use paradigms for Gazebo, an open-source multi-robot simulator. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2004), pages 2149-2154. IEEE, 2004.
  • [17] Kunchev V., Jain L., Ivancevic V., and Finn A. Path planning and obstacle avoidance for autonomous mobile robots: A review. In Knowledge-Based Intelligent Information and Engineering Systems: 10th International Conference, KES 2006, Bournemouth, UK, October 9-11, 2006. Proceedings, Part II 10, pages 537-544. Springer, 2006.
  • [18] Lai X., Chen Y., Lu F., Liu J., and Jia J. Spherical transformer for lidar-based 3d recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17545-17555, 2023.
  • [19] Manivasagam S., Wang S., Wong K., Zeng W., Sazanovich M., Tan S., Yang B., Ma W.-C., and Urtasun R. Lidarsim: Realistic lidar simulation by leveraging the real world. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 11164-11173, 2020.
  • [20] Meng Q., Wang W., Zhou T., Shen J., Jia Y., and Van Gool L. Towards a weakly supervised framework for 3d point cloud object detection and annotation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(8): 4454-4468, 2021.
  • [21] Segments.ai. Data labelling - 3d point cloud. https://segments.ai/data-labeling/3d-point-cloud/. Accessed: 2024-11-28.
  • [22] UnitedRobots. Ur cleaner. https://unitedrobots.co/ur-cleaner/. Accessed: 2024-11-28.
  • [23] Velodyne. Hdl-64e. https://www.mapix.com/wp-content/uploads/2018/07/63-9194_Rev-J_HDL-64E_S3_Spec-Sheet-Web.pdf. Accessed: 2024-11-28.
  • [24] Velodyne. Vlp-16. https://www.amtechs.co.jp/product/VLP-16-Puck.pdf.
  • [25] Wang C., Ma C., Zhu M., and Yang X. Pointaugmenting: Cross-modal augmentation for 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11794-11803, 2021.
  • [26] Xiao A., Huang J., Guan D., Zhan F., and Lu S. Transfer learning from synthetic to real lidar point cloud for semantic segmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 2795-2803, 2022.
  • [27] Xiao P., Shao Z., Hao S., Zhang Z., Chai X., Jiao J., Li Z., Wu J., Sun K., Jiang K., et al. Pandaset: Advanced sensor suite dataset for autonomous driving. In 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), pages 3095-3101. IEEE, 2021.
  • [28] Zhang J., Zhao X., Chen Z., and Lu Z. A review of deep learning-based semantic segmentation for point cloud. IEEE Access, 7: 179118-179133, 2019.
Document type
YADDA identifier
bwmeta1.element.baztech-cd494725-bdac-4853-ad84-f966658a66d6