Article title

Estimating the distance to an object from grayscale stereo images using deep learning

Authors
Content
Identifiers
Title variants
Languages of publication
EN
Abstracts
EN
This article presents an innovative approach to estimating the distance between an autonomous vehicle and an object in front of it. Such information can be used, for example, to support the control of an autonomous vehicle. The primary source of information in this research is monochrome stereo images acquired in the canonical stereo configuration. A purpose-built convolutional neural network (CNN) model was used for the estimation, and a proprietary dataset was developed for the experiments. The analysis is based on the phenomenon of disparity in stereo images. As a result of the research, a correctly trained CNN model was obtained in six variants, achieving high accuracy of distance estimation. This publication describes an original hybrid combination of digital image analysis, stereo vision, and deep learning for engineering applications.
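The geometric relation underlying disparity-based distance estimation in the canonical configuration is Z = f * B / d. The short sketch below only illustrates this classical relation for context; it is not the article's CNN pipeline, and the focal length and baseline values are assumed for the example.

import numpy as np

# Illustrative sketch (not the article's CNN): in a canonical stereo rig,
# depth is inversely proportional to disparity, Z = f * B / d, where f is
# the focal length in pixels, B the baseline between the two cameras, and
# d the horizontal disparity in pixels. The constants below are assumptions.

FOCAL_LENGTH_PX = 700.0   # assumed focal length [px]
BASELINE_M = 0.12         # assumed camera baseline [m]

def depth_from_disparity(disparity_px):
    """Convert a disparity map in pixels to metric depth for a canonical rig."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(disparity_px, np.inf)   # zero disparity -> point at infinity
    valid = disparity_px > 0
    depth[valid] = FOCAL_LENGTH_PX * BASELINE_M / disparity_px[valid]
    return depth

# Example: an object shifted by 35 px between the left and right image
print(depth_from_disparity([35.0]))   # ~2.4 m with these assumed parameters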
Year
Pages
60-72
Physical description
Bibliography: 27 items, figures, tables.
Creators
  • Department of Computer Science, Czestochowa University of Technology, Czestochowa, Poland
Bibliography
  • [1] Boulares, M., & Barnawi, A. (2021). A novel UAV path planning algorithm to search for floating objects on the ocean surface based on object’s trajectory prediction by regression. Robotics and Autonomous Systems, 135, 103673.
  • [2] Yang, F., Qiao, Y., Wei, W., Wang, X., Wan, D., Damaševičius, R., & Woźniak, M. (2020). Ddtree: A hybrid deep learning model for real-time waterway depth prediction and smart navigation. Applied Sciences, 10(8), 2770.
  • [3] Wang, P., Gao, S., Li, L., Sun, B., & Cheng, S. (2019). Obstacle avoidance path planning design for autonomous driving vehicles based on an improved artificial potential field algorithm. Energies, 12(12), 2342.
  • [4] Yu, H., & Kong, L. (2018). Autonomous Mobile Robot Based on Differential Global Positioning System. In Proceedings of the 2018 IEEE International Conference on Mechatronics and Automation (ICMA). Changchun, China, 5-8 August 2018; 392-396.
  • [5] Kumar, D., Malhotra, R., & Sharma, S.R. (2020). Design and construction of a smart wheelchair. Procedia Computer Science, 172, 302-307.
  • [6] Gao, H., Cheng, B., Wang, J., Li, L., Zhao, J., & Li, D. (2018). Object classification using CNN-based fusion of vision and LIDAR in autonomous vehicle environment. IEEE Transactions on Industrial Informatics, 14(9), 4224-4231.
  • [7] Lee, D., & Cha, D. (2020). Path optimization of a single surveillance drone based on reinforcement learning. International Journal of Mechanical Engineering and Robotics Research, 9, 12.
  • [8] Jones, E., Sofonia, J., Canales, C., Hrabar, S., & Kendoul, F. (2020). Applications for the hovermap autonomous drone system in underground mining operations. Journal of the Southern African Institute of Mining and Metallurgy, 120, 49-56.
  • [9] Oh, D., & Han, J. (2020). Fisheye-based smart control system for autonomous UAV operation. Sensors, 20(24), 7321.
  • [10] Teixeira, M.A.S., Neves-Jr, F., Koubâa, A., De Arruda, L.V.R., & De Oliveira, A.S. (2020). A quadral-fuzzy control approach to flight formation by a fleet of unmanned aerial vehicles. IEEE Access, 8, 64366-64381.
  • [11] Ort, T., Gilitschenski, I., & Rus, D. (2020). Autonomous navigation in inclement weather based on a localizing ground penetrating radar. IEEE Robotics and Automation Letters, 5(2), 3267-3274.
  • [12] Tang, L., Shi, Y., He, Q., Sadek, A., & Qiao, C. (2020). Performance test of autonomous vehicle lidar sensors under different weather conditions. Transportation Research Record, 2674(1), 319-329.
  • [13] Zhao, X., Sun, P., Xu, Z., Min, H., & Yu, H. (2020). Fusion of 3D LIDAR and camera data for object detection in autonomous vehicle applications. IEEE Sensors Journal, 20(9), 4901-4913.
  • [14] Varun, G.M., Sunil, J., Saira, J., Paramjit, S., Mohammad, R.K., & Fadi, A.-T. (2022). An IoT-enabled intelligent automobile system for smart cities. Internet of Things, 18, 100213.
  • [15] Chen, Y., Wu, Y., & Xing, H. (2017). A complete solution for AGV SLAM integrated with navigation in modern warehouse environment. In Proceedings of the Chinese Automation Congress (CAC). Jinan, China, 20-22 October 2017; IEEE: Hoboken, NJ, USA, 6418-6423.
  • [16] Han, D., Nie, H., Chen, J., & Chen, M. (2018). Dynamic obstacle avoidance for manipulators using distance calculation and discrete detection. Robotics and Computer-Integrated Manufacturing, 49, 98-104.
  • [17] Diwan, H. (2019). Development of an obstacle detection and navigation system for autonomous powered wheelchairs. University of Ontario Institute of Technology (Canada).
  • [18] Domínguez-Morales, M.J., Jiménez-Fernández, A., Jiménez-Moreno, G., Conde, C., Cabello, E., & Linares-Barranco, A. (2019). Bio-inspired stereo vision calibration for dynamic vision sensors. IEEE Access, 7, 138415-138425.
  • [19] Wang, F., Lü, E., Wang, Y., Qiu, G., & Lu, H. (2020). Efficient stereo visual simultaneous localization and mapping for an autonomous unmanned forklift in an unstructured warehouse. Applied Sciences, 10(2), 698.
  • [20] Li, P., Chen, X., & Shen, S. (2019). Stereo R-CNN based 3D object detection for autonomous driving. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7644-7652.
  • [21] Rzeszotarski, D., & Wiecek, B. (2008). Calibration for 3D reconstruction of thermal images. In Proceedings of 9th International Conference on Quantitative InfraRed Thermography (QIRT). Krakow, Poland, 2-5 July 2008; 563-566.
  • [22] Eickenberg, M., Gramfort, A., Varoquaux, G., & Thirion, B. (2017). Seeing it all: Convolutional network layers map the function of the human visual system. NeuroImage, 152, 184-194.
  • [23] Wenzel, M., Milletari, F., Krüger, J., Lange, C., Schenk, M., Apostolova, I., Klutmann, S., Ehrenburg, M., & Buchert, R. (2019). Automatic classification of dopamine transporter SPECT: deep convolutional neural networks can be trained to be robust with respect to variable image characteristics. European Journal of Nuclear Medicine and Molecular Imaging, 46(13), 2800-2811.
  • [24] Kamsing, P., Torteeka, P., Boonpook, W., & Cao, C. (2020). Deep neural learning adaptive sequential Monte Carlo for automatic image and speech recognition. Applied Computational Intelligence and Soft Computing, 8866259.
  • [25] Woźniak, M., Siłka, J., & Wieczorek, M. (2021). Deep neural network correlation learning mechanism for CT brain tumor detection. Neural Computing and Applications, 1-16.
  • [26] Kulawik, J., & Kubanek, M. (2021). Detection of false synchronization of stereo image transmission using a convolutional neural network. Symmetry, 13(1), 78.
  • [27] The MathWorks, Inc. (2021). MATLAB Documentation; The MathWorks, Inc.: Natick, MA, USA.
Notes
Record created with funds from the Ministry of Education and Science (MEiN), agreement no. SONP/SP/546092/2022, under the "Społeczna odpowiedzialność nauki" (Social Responsibility of Science) programme, module: popularisation of science and promotion of sport (2022-2023).
Document type
YADDA identifier
bwmeta1.element.baztech-a454039a-5192-48bb-9404-ce41617e6e18