Article title
Authors
Identifiers
Title variants
Publication languages
Abstract
Semantic segmentation of 3D point clouds is an open research problem and remains crucial for autonomous driving, robot navigation, human-computer interaction, 3D reconstruction and many other applications. The large scale of the data and the lack of regular data organization make it a very complex task. Research in this field focuses on point cloud representation (e.g., 2D images, 3D voxel grids, graphs) and on segmentation techniques. This paper presents state-of-the-art approaches to these tasks.
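Of the representations the abstract names, the voxel grid is the simplest to illustrate: points are bucketed into fixed-size cells before a network or segmentation algorithm processes them. Below is a minimal NumPy sketch of this quantization step; the function name `voxelize` and the toy point set are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def voxelize(points, voxel_size):
    """Quantize 3D points into voxel indices and return the occupied cells."""
    # Shift the cloud so its minimum corner sits at the origin,
    # then bucket each coordinate by the voxel edge length.
    origin = points.min(axis=0)
    indices = np.floor((points - origin) / voxel_size).astype(np.int64)
    # Keep one entry per non-empty cell (np.unique sorts rows lexicographically).
    return np.unique(indices, axis=0)

# Toy cloud: four points, the first two fall into the same 1.0-unit voxel.
pts = np.array([[0.1, 0.2, 0.3],
                [0.4, 0.1, 0.9],   # same cell as the point above
                [1.5, 0.2, 0.3],
                [0.1, 2.2, 0.3]])
print(voxelize(pts, 1.0))  # three occupied voxel cells
```

This regular grid trades memory for structure: most cells of a sparse outdoor scan stay empty, which is why several of the surveyed methods combine voxels with sparse or graph-based processing.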
Journal
Year
Volume
Pages
99--105
Physical description
Bibliography: 15 items
Contributors
author
- Lodz University of Technology, Institute of Information Technology, Wólczańska 215, 90-924 Lodz, Poland
author
- Lodz University of Technology, Institute of Information Technology, Wólczańska 215, 90-924 Lodz, Poland
Bibliography
- [1] Hackel, T., Savinov, N., Ladicky, L., Wegner, J. D., Schindler, K., and Pollefeys, M., SEMANTIC3D.NET: A new large-scale point cloud classification benchmark, In: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. IV-1-W1, 2017, pp. 91-98.
- [2] Armeni, I., Sener, O., Zamir, A. R., Jiang, H., Brilakis, I., Fischer, M., and Savarese, S., 3D Semantic Parsing of Large-Scale Indoor Spaces, In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
- [3] Boulch, A., Saux, B. L., and Audebert, N., Unstructured Point Cloud Semantic Labeling Using Deep Segmentation Networks, In: Eurographics Workshop on 3D Object Retrieval, edited by I. Pratikakis, F. Dupont, and M. Ovsjanikov, The Eurographics Association, 2017.
- [4] Wang, Y., Shi, T., Yun, P., Tai, L., and Liu, M., PointSeg: Real-Time Semantic Segmentation Based on 3D LiDAR Point Cloud, CoRR, Vol. abs/1807.06288, 2018.
- [5] Tchapmi, L. P., Choy, C. B., Armeni, I., Gwak, J., and Savarese, S., SEGCloud: Semantic Segmentation of 3D Point Clouds, CoRR, Vol. abs/1710.07563, 2017.
- [6] Xu, Y., Hoegner, L., Tuttas, S., and Stilla, U., Voxel- and Graph-Based Point Cloud Segmentation of 3D Scenes Using Perceptual Grouping Laws, In: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. IV-1/W1, 2017, pp. 43-50.
- [7] Landrieu, L. and Simonovsky, M., Large-scale Point Cloud Semantic Segmentation with Superpoint Graphs, CoRR, Vol. abs/1711.09869, 2017.
- [8] Qi, C. R., Su, H., Mo, K., and Guibas, L. J., PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation, CoRR, Vol. abs/1612.00593, 2016.
- [9] Su, H., Jampani, V., Sun, D., Maji, S., Kalogerakis, E., Yang, M.-H., and Kautz, J., SPLATNet: Sparse Lattice Networks for Point Cloud Processing, In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 2530-2539.
- [10] Qi, C. R., Yi, L., Su, H., and Guibas, L. J., PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space, CoRR, Vol. abs/1706.02413, 2017.
- [11] Jiang, M., Wu, Y., and Lu, C., PointSIFT: A SIFT-like Network Module for 3D Point Cloud Semantic Segmentation, CoRR, Vol. abs/1807.00652, 2018.
- [12] Ye, X., Li, J., Huang, H., Du, L., and Zhang, X., 3D Recurrent Neural Networks with Context Fusion for Point Cloud Semantic Segmentation, In: The European Conference on Computer Vision (ECCV), September 2018.
- [13] Iandola, F. N., Moskewicz, M. W., Ashraf, K., Han, S., Dally, W. J., and Keutzer, K., SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size, CoRR, Vol. abs/1602.07360, 2016.
- [14] Zhang, C., Luo, W., and Urtasun, R., Efficient Convolutions for Real-Time Semantic Segmentation of 3D Point Clouds, In: 2018 International Conference on 3D Vision (3DV), Verona, Italy, September 5-8, 2018, pp. 399-408.
- [15] Lowe, D. G., Distinctive Image Features from Scale-Invariant Keypoints, Int. J. Comput. Vision, Vol. 60, No. 2, Nov. 2004, pp. 91-110.
Notes
Record compiled under agreement 509/P-DUN/2018 from Ministry of Science and Higher Education (MNiSW) funds allocated to activities promoting science (2019).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-0692201e-886c-4f8b-9d97-216c5621404c