Article title
Content
Full texts:
Identifiers
Title variants
Publication languages
Abstracts
This article discusses the use of a deep learning neural network (DLNN) as a tool to improve maritime safety by classifying the potential threat to shipping posed by unexploded ordnance (UXO) objects. Unexploded ordnance poses a serious threat to maritime users, which is why navies and non-governmental organisations (NGOs) around the world use dedicated advanced technologies to counter it. The measures taken by navies include mine countermeasure vessels (MCMVs) and mine-hunting technology, which relies on sonar imagery to detect and classify dangerous objects. The modern mine-hunting process is generally divided into three stages: detection and classification, identification, and neutralisation/disposal. The detection and classification stage is usually carried out using sonar mounted on the hull of a ship or on an underwater vehicle. There is now a strong trend towards more advanced technologies, such as synthetic aperture sonar (SAS), for high-resolution data collection. Once the sonar data has been collected, military personnel examine the images of the seabed to detect targets and classify them as mine-like objects (MILCO) or non-mine-like objects (NON-MILCO). Computer-aided detection (CAD), computer-aided classification (CAC) and automatic target recognition (ATR) algorithms have been introduced to reduce the burden on the technical operator and reduce post-mission analysis time. This article describes a target classification solution using a DLNN-based approach that can significantly reduce the time required for post-mission data analysis during underwater reconnaissance operations.
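The abstract describes the classification stage only at a high level; the references it draws on (e.g. the MathWorks pretrained-network documentation and the pretrained-CNN comparison papers) point to transfer learning on pretrained convolutional networks. The snippet below is a minimal PyTorch sketch of that general approach, not the authors' implementation: the dataset directory layout, backbone choice (ResNet-18), and all hyperparameters are assumptions for illustration.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Hypothetical sketch: fine-tune a pretrained CNN to label sonar image
# chips as MILCO (mine-like) or NON-MILCO. The folder layout
# "sonar_chips/train/{MILCO,NON-MILCO}" is an assumption.
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # sonar chips are single-channel
    transforms.Resize((224, 224)),                # input size expected by ResNet
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),  # ImageNet statistics
])
train_set = datasets.ImageFolder("sonar_chips/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                   # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)     # new MILCO / NON-MILCO head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

model.train()
for epoch in range(5):                            # epoch count is an assumption
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Freezing the backbone and retraining only the final layer is what makes this practical on the small labelled sonar datasets typical of post-mission analysis; the cited works evaluate a range of such pretrained backbones (AlexNet, SqueezeNet, Inception, DenseNet, MobileNetV2, ResNet, Xception, ShuffleNet, NASNet, VGG).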
Journal
Year
Volume
Pages
77–84
Physical description
Bibliography: 30 items, figures, tables.
Authors
author
- Polish Naval Academy, Faculty of Navigation and Naval Weapons, Poland
author
- Air Force Institute of Technology, Aircraft Composite Structures Division, Poland
author
- Polish Naval Academy, Faculty of Mechanical and Electrical Engineering, Poland
author
- Polish Naval Academy, Faculty of Mechanical and Electrical Engineering, Poland
Bibliography
- 1. D. Ciresan, U. Meier, J. Masci, and J. Schmidhuber, "Multi-column deep neural network for traffic sign classification", Neural Networks, vol. 32, pp. 333–338, 2012, doi:10.1016/j.neunet.2012.02.023.
- 2. Y. Zhao, M. Qi, X. Li, Y. Meng, Y. Yu, and Y. Dong, "P-LPN: Towards real-time pedestrian location perception in complex driving scenes", IEEE Access, vol. 8, pp. 54730–54740, 2020, doi:10.1109/ACCESS.2020.2981821.
- 3. E. Byvatov, U. Fechner, J. Sadowski, and G. Schneider, "Comparison of support vector machine and artificial neural network systems for drug/nondrug classification", J. Chem. Inf. Comput. Sci., vol. 43, no. 6, pp. 1882–1889, 2003, doi:10.1021/ci0341161.
- 4. S. Lu, Z. Lu, and Y. Zhang, "Pathological brain detection based on AlexNet and transfer learning", J. Comput. Sci., vol. 30, pp. 41–47, 2019, doi:10.1016/j.jocs.2018.11.008.
- 5. O. Midtgaard, R. E. Hansen, P. E. Hagen, and N. Storkersen, "Imaging sensors for autonomous underwater vehicles in military operations", in Proc. SET-169 Military Sensors Symposium, Friedrichshafen, Germany, May 2011.
- 6. HELCOM CHEMU, "Report to the 16th Meeting of the Helsinki Commission, 8–11 March 1994, from the Ad Hoc Working Group on Dumped Chemical Munition", Danish Environ. Protec. Agency, 1994.
- 7. J. Fabisiak and A. Olejnik, "Amunicja chemiczna zatopiona w Morzu Bałtyckim – poszukiwania i ocena ryzyka – projekt badawczy CHEMSEA (Chemical munitions dumped in the Baltic Sea – search and risk assessment – the CHEMSEA research project)", Pol. Hyperb. Res., pp. 25–52, 2012.
- 8. "Sea mines Ukraine waters Russia war Black Sea," The Guardian, 2022. [Online]. Available: www.theguardian.com/world/2022/jul/11/sea-mines-ukraine-waters-russia-war-black-sea. [Accessed: June 21, 2023].
- 9. "Pretrained Convolutional Neural Networks," MathWorks, 2023. [Online]. Available: https://uk.mathworks.com/help/deeplearning/ug/pretrained-convolutional-neural-networks.html. [Accessed: June 21, 2023].
- 10. M. Chodnicki, P. Krogulec, M. Żokowski, and N. Sigiel, "Procedures concerning preparations of autonomous underwater systems to operation focused on detection, classification and identification of mine like objects and ammunition", J. KONBiN, vol. 48, no. 1, pp. 149–168, 2018, doi:10.2478/jok-2018-0051.
- 11. Dowództwo Marynarki Wojennej (Naval Command), "Album Min Morskich" ("Sea Mines Album"). Gdynia, Poland: Mar. Woj., Sep. 1947.
- 12. “Image Colorization Using Generative Adversarial Networks,” Pinterest, 2023. [Online]. Available: https://www.pinterest.co.uk/pin/145944844154595254/. [Accessed: Sep. 14, 2023].
- 13. “SNMCMG1 Photos,” Facebook, 2023. [Online]. Available: https://www.facebook.com/snmcmg1/photos/a.464547430274739/2304079142988216/. [Accessed: Oct. 9, 2023].
- 14. "Pretrained Convolutional Neural Networks," MathWorks, 2023. [Online]. Available: https://www.mathworks.com/help/deeplearning/ug/pretrained-convolutional-neural-networks.html. [Accessed: Sep. 14, 2023].
- 15. P. Szymak, P. Piskur, and K. Naus, "The effectiveness of using a pretrained deep learning neural networks for object classification in underwater video", Remote Sens., vol. 12, no. 18, p. 3020, 2020, doi:10.3390/rs12183020.
- 16. "NATO forces clear mines from the Baltic in Open Spirit operation," NATO, 2021. [Online]. Available: https://mc.nato.int/media-centre/news/2021/nato-forces-clear-mines-from-the-baltic-in-open-spirit-operation. [Accessed: Sep. 14, 2023].
- 17. F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer, "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size", arXiv preprint arXiv:1602.07360, 2016.
- 18. Z. Cui, C. Tang, Z. Cao, and N. Liu, "D-ATR for SAR images based on deep neural networks", Remote Sens., vol. 11, no. 8, p. 906, 2019, doi:10.3390/rs11080906.
- 19. C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, "Rethinking the inception architecture for computer vision", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2818–2826, doi:10.1109/CVPR.2016.308.
- 20. G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, "Densely connected convolutional networks", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 4700–4708.
- 21. M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, "MobileNetV2: Inverted residuals and linear bottlenecks", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 4510–4520, doi:10.1109/CVPR.2018.00474.
- 22. K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778, doi:10.1109/CVPR.2016.90.
- 23. F. Chollet, "Xception: Deep learning with depthwise separable convolutions", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1251–1258, doi:10.1109/CVPR.2017.195.
- 24. K. Nazeri, E. Ng, and M. Ebrahimi, "Image colorization using generative adversarial networks", in Articulated Motion and Deformable Objects: 10th International Conference, AMDO 2018, Palma de Mallorca, Spain, July 12–13, 2018, Proceedings 10, Springer, 2018, pp. 85–94, doi:10.1007/978-3-319-94544-6_9.
- 25. X. Zhang, X. Zhou, M. Lin, and J. Sun, "ShuffleNet: An extremely efficient convolutional neural network for mobile devices", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 6848–6856, doi:10.1109/CVPR.2018.00716.
- 26. B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le, "Learning transferable architectures for scalable image recognition", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 8697–8710, doi:10.1109/CVPR.2018.00907.
- 27. O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein et al., "ImageNet large scale visual recognition challenge", Int. J. Comput. Vis., vol. 115, pp. 211–252, 2015, doi:10.1007/s11263-015-0816-y.
- 28. K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition", arXiv preprint arXiv:1409.1556, 2014.
- 29. W. Wu, L. Guo, H. Gao, Z. You, Y. Liu, and Z. Chen, "YOLO-SLAM: A semantic SLAM system towards dynamic environment with geometric constraint", Neural Comput. Appl., pp. 1–16, 2022, doi:10.1007/s00521-021-06764-3.
- 30. U. Atila, M. Ucar, K. Akyol, and E. Ucar, "Plant leaf disease classification using EfficientNet deep learning model", Ecol. Inform., vol. 61, p. 101182, 2021, doi:10.1016/j.ecoinf.2020.101182.
Notes
Record developed with funds from the Ministry of Science and Higher Education (MNiSW), agreement no. POPUL/SP/0154/2024/02, under the programme "Social Responsibility of Science II" – module: Popularisation of science and promotion of sport (2025).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-68b93911-8e72-4751-bcbf-70b57dc72688