Article title

A novel hybrid deep learning approach for 3D object detection and tracking in autonomous driving

Content
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
Object detection and tracking based on the fusion of LiDAR and RGB camera data is a challenging task in autonomous-vehicle environments. Existing works have introduced several object detection and tracking frameworks based on Artificial Intelligence (AI) algorithms; however, they suffer from high false-positive rates and long computation times, which limit their performance in autonomous driving. To resolve these issues, this work proposes Hybrid Deep Learning based Multi-Object Detection and Tracking (HDL-MODT) using sensor-fusion methods. The proposed approach fuses solid-state LiDAR, pseudo-LiDAR, and RGB camera data to improve detection and tracking quality. First, multi-stage preprocessing is performed, in which noise is removed using an Adaptive Fuzzy Filter (A-Fuzzy). The preprocessed fused image is then passed to instance segmentation to reduce classification and tracking complexity; for this, the proposed work adopts Lightweight Generative Adversarial Networks (LGAN). The segmented image is then used for object detection and tracking with HDL. To reduce complexity, VGG-16 is used for feature extraction, and the resulting feature vectors are fed to YOLOv4 for object detection. Finally, the detected objects are tracked with an Improved Unscented Kalman Filter (IUKF), and vehicles are mapped using time-based mapping that considers their RFID, velocity, location, dimensions, and unique ID. The proposed work is simulated in MATLAB R2020a, and a comparison on several performance metrics shows that it outperforms existing works.
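The abstract does not specify the improvements in the authors' IUKF, but the baseline it builds on — an Unscented Kalman Filter tracking a detected object's position and velocity — is standard. The sketch below shows a plain UKF predict/update step for a constant-velocity 2D target with position-only measurements; all names, matrices, and noise parameters here are illustrative choices, not the paper's.

```python
import numpy as np

def sigma_points(x, P, lam):
    """Generate the 2n+1 sigma points for mean x and covariance P."""
    n = len(x)
    L = np.linalg.cholesky((n + lam) * P)  # columns are the scaled offsets
    pts = [x]
    for i in range(n):
        pts.append(x + L[:, i])
        pts.append(x - L[:, i])
    return np.array(pts)

def ukf_step(x, P, z, F, H, Q, R, alpha=1.0, beta=2.0, kappa=0.0):
    """One predict/update cycle of a standard Unscented Kalman Filter."""
    n = len(x)
    lam = alpha**2 * (n + kappa) - n
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))  # mean weights
    wm[0] = lam / (n + lam)
    wc = wm.copy()                                   # covariance weights
    wc[0] += 1.0 - alpha**2 + beta

    # Predict: propagate sigma points through the motion model F.
    chi = sigma_points(x, P, lam) @ F.T
    x_pred = wm @ chi
    d = chi - x_pred
    P_pred = d.T @ (wc[:, None] * d) + Q

    # Update: map predicted sigma points into measurement space via H.
    chi = sigma_points(x_pred, P_pred, lam)
    Z = chi @ H.T
    z_pred = wm @ Z
    dz = Z - z_pred
    dx = chi - x_pred
    S = dz.T @ (wc[:, None] * dz) + R   # innovation covariance
    C = dx.T @ (wc[:, None] * dz)       # state/measurement cross-covariance
    K = C @ np.linalg.inv(S)            # Kalman gain
    return x_pred + K @ (z - z_pred), P_pred - K @ S @ K.T
```

With a linear constant-velocity model (state `[px, py, vx, vy]`, measurement `[px, py]`) the UKF reduces to the ordinary Kalman filter; its value appears once the motion or measurement models become nonlinear, which is where an "improved" variant such as the paper's IUKF would differ.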
Publisher
Journal
Year
Volume
Pages
435–467
Physical description
Bibliography: 32 items, figures, tables, charts
Authors
author
  • Nehru Memorial College (Affiliated to Bharathidasan University), Department of Computer Science, Tiruchirapalli 621007, India
author
  • Nehru Memorial College (Affiliated to Bharathidasan University), Department of Computer Science, Tiruchirapalli 621007, India
Bibliography
  • [1] Bai J., Li S., Huang L., Chen H.: Robust detection and tracking method for moving object based on radar and camera data fusion, IEEE Sensors Journal, vol. 21(9), pp. 10761–10774, 2021. doi: 10.1109/jsen.2021.3049449.
  • [2] Bashar M., Islam S., Hussain K.K., Hasan M.B., Ashikur Rahman A.B.M., Kabir M.H.: Multiple object tracking in recent times: A literature review, arXiv preprint arXiv:220904796, 2022. doi: 10.48550/arXiv.2209.04796.
  • [3] Bescos B., Campos C., Tardós J.D., Neira J.: DynaSLAM II: Tightly-coupled multi-object tracking and SLAM, IEEE Robotics and Automation Letters, vol. 6(3), pp. 5191–5198, 2021. doi: 10.1109/lra.2021.3068640.
  • [4] Chiu H.K., Li J., Ambruş R., Bohg J.: Probabilistic 3D multi-modal, multi-object tracking for autonomous driving. In: 2021 IEEE International Conference on Robotics and Automation (ICRA), pp. 14227–14233, IEEE, 2021. doi: 10.1109/icra48506.2021.9561754.
  • [5] Choi H., Jeong J., Choi J.Y.: Rotation-Aware 3D Vehicle Detection from Point Cloud, IEEE Access, vol. 9, pp. 99276–99286, 2021. doi: 10.1109/access.2021.3095525.
  • [6] Fan Y.C., Yelamandala C.M., Chen T.W., Huang C.J.: Real-Time Object Detection for LiDAR Based on LS-R-YOLOv4 Neural Network, Journal of Sensors, vol. 2021, pp. 1–11, 2021. doi: 10.1155/2021/5576262.
  • [7] Farag W.: Kalman-filter-based sensor fusion applied to road-objects detection and tracking for autonomous vehicles, Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering, vol. 235(7), pp. 1125–1138, 2021. doi: 10.1177/0959651820975523.
  • [8] Huang C., He T., Ren H., Wang W., Lin B., Cai D.: OBMO: One bounding box multiple objects for monocular 3D object detection, IEEE Transactions on Image Processing, vol. 32, pp. 6570–6581, 2023. doi: 10.1109/tip.2023.3333225.
  • [9] Jiang P., Ergu D., Liu F., Cai Y., Ma B.: A Review of Yolo algorithm developments, Procedia Computer Science, vol. 199, pp. 1066–1073, 2022. doi: 10.1016/j.procs.2022.01.135.
  • [10] Kim A., Ošep A., Leal-Taixé L.: EagerMOT: 3D Multi-Object Tracking via Sensor Fusion, CoRR, vol. abs/2104.14682, 2021. doi: 10.1109/icra48506.2021.9562072.
  • [11] KITTI DataSet, https://universe.roboflow.com/sebastian-krauss/kitti-9amcz/DATASET/2.
  • [12] Koh J., Kim J., Yoo J.H., Kim Y., Kum D., Choi J.W.: Joint 3D object detection and tracking using spatio-temporal representation of camera image and LiDAR point clouds. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 36, pp. 1210–1218, 2022. doi: 10.1609/aaai.v36i1.20007.
  • [13] Lee E., Nam M., Lee H.: Tab2vox: CNN-based multivariate multilevel demand forecasting framework by tabular-to-voxel image conversion, Sustainability, vol. 14(18), 11745, 2022. doi: 10.3390/su141811745.
  • [14] Liu Z., Cai Y., Wang H., Chen L., Gao H., Jia Y., Li Y.: Robust target recognition and tracking of self-driving cars with radar and camera information fusion under severe weather conditions, IEEE Transactions on Intelligent Transportation Systems, vol. 23(7), pp. 6640–6653, 2021. doi: 10.1109/tits.2021.3059674.
  • [15] Luo C., Yang X., Yuille A.: Exploring simple 3D multi-object tracking for autonomous driving. In: 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 10488–10497, 2021. doi: 10.1109/iccv48922.2021.01032.
  • [16] Nabati R., Harris L., Qi H.: CFTrack: Center-based radar and camera fusion for 3D multi-object tracking. In: 2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops), pp. 243–248, IEEE, 2021. doi: 10.1109/ivworkshops54471.2021.9669223.
  • [17] Pal S.K., Pramanik A., Maiti J., Mitra P.: Deep learning in multi-object detection and tracking: state of the art, Applied Intelligence, vol. 51, pp. 6400–6429, 2021. doi: 10.1007/s10489-021-02293-7.
  • [18] Pang Z., Li Z., Wang N.: SimpleTrack: Understanding and rethinking 3D multi-object tracking. In: L. Karlinsky, T. Michaeli, K. Nishino (eds.), Computer Vision – ECCV 2022 Workshops. Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part I, pp. 680–696, Springer, 2022. doi: 10.1007/978-3-031-25056-9_43.
  • [19] Park D., Ambruş R., Guizilini V., Li J., Gaidon A.: Is pseudo-lidar needed for monocular 3D object detection? In: 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 3142–3152, 2021. doi: 10.1109/iccv48922.2021.00313.
  • [20] Premachandra C., Ueda S., Suzuki Y.: Detection and tracking of moving objects at road intersections using a 360-degree camera for driver assistance and automated driving, IEEE Access, vol. 8, pp. 135652–135660, 2020. doi: 10.1109/access.2020.3011430.
  • [21] Qian R., Lai X., Li X.: 3D object detection for autonomous driving: A survey, Pattern Recognition, vol. 130, 108796, 2022. doi: 10.1016/j.patcog.2022.108796.
  • [22] Shreyas E., Sheth M.H., Mohana: 3D object detection and tracking methods using deep learning for computer vision applications. In: 2021 International Conference on Recent Trends on Electronics, Information, Communication & Technology (RTEICT), pp. 735–738, IEEE, 2021. doi: 10.1109/rteict52294.2021.9573964.
  • [23] Simonelli A., Bulò S.R., Porzi L., Kontschieder P., Ricci E.: Are we Missing Confidence in Pseudo-LiDAR Methods for Monocular 3D Object Detection? In: 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 3205–3213, 2021. doi: 10.1109/iccv48922.2021.00321.
  • [24] Wang B., Zhu M., Lu Y., Wang J., Gao W., Wei H.: Real-time 3D object detection from point cloud through foreground segmentation, IEEE Access, vol. 9, pp. 84886–84898, 2021. doi: 10.1109/access.2021.3087179.
  • [25] Wang K., Liu M.: YOLOv3-MT: A YOLOv3 using multi-target tracking for vehicle visual detection, Applied Intelligence, vol. 52(2), pp. 2070–2091, 2022. doi: 10.1007/s10489-021-02491-3.
  • [26] Wang S., Sun Y., Liu C., Liu M.: PointTrackNet: An End-to-End Network for 3-D Object Detection and Tracking From Point Clouds, IEEE Robotics and Automation Letters, vol. 5(2), pp. 3206–3212, 2020. doi: 10.1109/lra.2020.2974392.
  • [27] Wang Y., Guizilini V.C., Zhang T., Wang Y., Zhao H., Solomon J.: DETR3D: 3D Object Detection from Multi-view Images via 3D-to-2D Queries. In: A. Faust, D. Hsu, G. Neumann (eds.), Conference on Robot Learning, 8–11 November 2021, London, UK, Proceedings of Machine Learning Research, vol. 164, pp. 180–191, PMLR, 2022. https://proceedings.mlr.press/v164/wang22b.html.
  • [28] Wang Y., Wang C., Long P., Gu Y., Li W.: Recent advances in 3D object detection based on RGB-D: A survey, Displays, vol. 70, 102077, 2021. doi: 10.1016/j.displa.2021.102077.
  • [29] Wang Y., Yang B., Hu R., Liang M., Urtasun R.: PLUMENet: Efficient 3D object detection from stereo images. In: 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3383–3390, IEEE, 2021. doi: 10.1109/iros51168.2021.9635875.
  • [30] Wen L.H., Jo K.H.: Fast and accurate 3D object detection for lidar-camera-based autonomous vehicles using one shared voxel-based backbone, IEEE Access, vol. 9, pp. 22080–22089, 2021. doi: 10.1109/access.2021.3055491.
  • [31] Xie X., Cheng G., Wang J., Yao X., Han J.: Oriented R-CNN for object detection. In: 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 3520–3529, 2021. doi: 10.1109/iccv48922.2021.00350.
  • [32] Zhao X., Sun P., Xu Z., Min H., Yu H.: Fusion of 3D LIDAR and camera data for object detection in autonomous vehicle applications, IEEE Sensors Journal, vol. 20(9), pp. 4901–4913, 2020. doi: 10.1109/jsen.2020.2966034.
Notes
Record created with funds from MNiSW, agreement no. SONP/SP/546092/2022 under the "Społeczna odpowiedzialność nauki" (Social Responsibility of Science) programme – module: Popularisation of science and promotion of sport (2024).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-7fd5d5e2-7842-4085-89ce-2ce1c8d003a5