Article title

Automated motion heatmap generation for Bridge Navigation Watch Monitoring System

Content
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
Most ship collisions and grounding accidents are due to errors made by watchkeeping personnel (WP) on the bridge. To avert such accidents, the International Maritime Organization (IMO) adopted a resolution on the Bridge Navigation Watch Alarm System (BNWAS), which detects operator disability. The system defined in the resolution is, however, very basic and vulnerable to abuse, so a more advanced system for monitoring the behaviour of WP is needed to mitigate watchkeeping errors. In this research, a Bridge Navigation Watch Monitoring System (BNWMS) is suggested to achieve this task, and an architecture is proposed for training a BNWMS model. The literature reveals that vision-based sensors can produce the relevant input data required for model training. 2D body poses belonging to the same person are estimated from multiple camera views using a deep learning-based pose estimation algorithm. The estimated 2D poses are then projected into 3D space, with a maximum error of 8 mm, using multiple-view computer vision techniques. Finally, the obtained 3D poses are plotted on a bird’s-eye view bridge plan to calculate a heatmap of body motions that captures temporal as well as spatial information. The results show that motion heatmaps convey significant information about the behaviour of WP within a defined time interval. This automated motion heatmap generation is a novel approach that provides input data for the suggested BNWMS.
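The pipeline described in the abstract (multi-view 2D pose estimation, projection into 3D, and accumulation on a bird’s-eye bridge plan) can be illustrated with a minimal sketch. The code below is not the authors’ implementation: the linear (DLT) triangulation, the placeholder projection matrices, the 0.25 m grid resolution and the function names are illustrative assumptions; in practice the 2D keypoints would come from a pose estimator such as OpenPose and the projection matrices from camera calibration.

```python
# Minimal sketch (assumed, not the authors' code): triangulate one 2D keypoint
# seen by several calibrated cameras into 3D, then accumulate the position
# into a bird's-eye-view heatmap of the bridge plan.
import numpy as np

def triangulate_point(proj_mats, points_2d):
    """Linear (DLT) triangulation: proj_mats is a list of 3x4 projection
    matrices, points_2d the matching (u, v) image coordinates."""
    A = []
    for P, (u, v) in zip(proj_mats, points_2d):
        A.append(u * P[2] - P[0])   # u * (p3 . X) - (p1 . X) = 0
        A.append(v * P[2] - P[1])   # v * (p3 . X) - (p2 . X) = 0
    _, _, vt = np.linalg.svd(np.asarray(A))
    X = vt[-1]
    return X[:3] / X[3]             # homogeneous -> Euclidean 3D point

def update_heatmap(heatmap, xyz, plan_origin, cell_size):
    """Add one observed body position to the 2D occupancy grid (bridge plan)."""
    col = int((xyz[0] - plan_origin[0]) / cell_size)
    row = int((xyz[1] - plan_origin[1]) / cell_size)
    if 0 <= row < heatmap.shape[0] and 0 <= col < heatmap.shape[1]:
        heatmap[row, col] += 1.0    # one time step spent in this cell
    return heatmap

# Assumed example: a 12 m x 8 m bridge discretised into 0.25 m cells.
heatmap = np.zeros((int(8 / 0.25), int(12 / 0.25)))
# Placeholder calibration and keypoints; real values would come from camera
# calibration and from a 2D pose estimator run on each camera view.
proj_mats = [np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])]),
             np.hstack([np.eye(3), np.array([[-1.0], [0.0], [5.0]])])]
keypoints_2d = [(0.10, 0.20), (-0.10, 0.20)]  # the same joint in two views
xyz = triangulate_point(proj_mats, keypoints_2d)
heatmap = update_heatmap(heatmap, xyz, plan_origin=(0.0, 0.0), cell_size=0.25)
```

Repeating this per frame and per body joint over a watch interval, and normalising the grid, would yield the kind of spatio-temporal motion heatmap the abstract describes.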
Year
Volume
Pages
63–75
Physical description
Bibliography: 36 items, figures, tables.
Authors
  • Istanbul Technical University, Tuzla, 34940 Istanbul, Turkey
author
  • Istanbul Technical University, Tuzla, 34940 Istanbul, Turkey
author
  • Gebze Technical University, Gebze, 41400 Kocaeli, Turkey
Bibliography
  • 1. W. Qiao, Y. Liu, X. Ma, and Y. Liu, “A methodology to evaluate human factors contributed to maritime accident by mapping fuzzy FT into ANN based on HFACS,” Ocean Eng., vol. 197, p. 106892, 2020.
  • 2. S. Fan, J. Zhang, E. Blanco-Davis, Z. Yang, and X. Yan, “Maritime accident prevention strategy formulation from a human factor perspective using Bayesian Networks and TOPSIS,” Ocean Eng., vol. 210, p. 107544, 2020.
  • 3. K. Kulkarni, F. Goerlandt, J. Li, O. V. Banda, and P. Kujala, “Preventing shipping accidents: Past, present, and future of waterway risk management with Baltic Sea focus,” Saf. Sci., vol. 129, p. 104798, 2020.
  • 4. V. Laine, F. Goerlandt, O. V. Banda, M. Baldauf, Y. Koldenhof, and J. Rytkönen, “A risk management framework for maritime Pollution Preparedness and Response: Concepts, processes and tools,” Mar. Pollut. Bull., vol. 171, p. 112724, 2021, doi: 10.1016/j.marpolbul.2021.112724.
  • 5. AGCS, “Safety and Shipping Review 2021,” Allianz Global Corporate and Speciality, 2021. https://www.agcs.allianz.com/content/dam/onemarketing/agcs/agcs/reports/AGCSSafety-Shipping-Review-2021.pdf (accessed Sep. 09, 2021).
  • 6. Y. Zhang, X. Sun, J. Chen, and C. Cheng, “Spatial patterns and characteristics of global maritime accidents,” Reliab. Eng. Syst. Saf., vol. 206, p. 107310, 2021.
  • 7. K. Liu, Q. Yu, Z. Yuan, Z. Yang, and Y. Shu, “A systematic analysis for maritime accidents causation in Chinese coastal waters using machine learning approaches,” Ocean Coast. Manag., vol. 213, p. 105859, 2021.
  • 8. A. Graziano, A. P. Teixeira, and C. G. Soares, “Classification of human errors in grounding and collision accidents using the TRACEr taxonomy,” Saf. Sci., vol. 86, pp. 245–257, 2016.
  • 9. IMO, STCW including 2010 Manila Amendments (ID938E), 2017th ed. London: International Maritime Organization, 2017.
  • 10. M. Bull, Bridge Watchkeeping: A Practical Guide – 3rd Edition. The Nautical Institute, 2021.
  • 11. M. Kaptan, Ö. Uğurlu, and J. Wang, “The effect of nonconformities encountered in the use of technology on the occurrence of collision, contact and grounding accidents,” Reliab. Eng. Syst. Saf., vol. 215, p. 107886, 2021.
  • 12. IMO, Resolution MSC.128(75): Performance Standards for a Bridge Navigational Watch Alarm System (BNWAS). London: International Maritime Organization, 2002.
  • 13. H. F. Nweke, Y. W. Teh, M. A. Al-Garadi, and U. R. Alo, “Deep learning algorithms for human activity recognition using mobile and wearable sensor networks: State of the art and research challenges,” Expert Syst. Appl., 2018.
  • 14. L. Onofri, P. Soda, M. Pechenizkiy, and G. Iannello, “A survey on using domain and contextual knowledge for human activity recognition in video streams,” Expert Syst. Appl., vol. 63, pp. 97–111, 2016.
  • 15. S. Bhattacharya and N. D. Lane, “From smart to deep: Robust activity recognition on smartwatches using deep learning,” in Pervasive Computing and Communication Workshops (PerCom Workshops), 2016 IEEE International Conference on, 2016, pp. 1–6.
  • 16. Y. Jia, X. Song, J. Zhou, L. Liu, L. Nie, and D. S. Rosenblum, “Fusing Social Networks with Deep Learning for Volunteerism Tendency Prediction,” in AAAI, 2016, pp. 165–171.
  • 17. A. Jalal, Y.-H. Kim, Y.-J. Kim, S. Kamal, and D. Kim, “Robust human activity recognition from depth video using spatiotemporal multi-fused features,” Pattern Recognit., vol. 61, pp. 295–308, 2017.
  • 18. Y. Fan, J. C. K. Lam, and V. O. K. Li, “Video-based Emotion Recognition Using Deeply-Supervised Neural Networks,” in Proceedings of the 2018 on International Conference on Multimodal Interaction, 2018, pp. 584–588.
  • 19. N. Neverova, C. Wolf, G. W. Taylor, and F. Nebout, “Multiscale deep learning for gesture detection and localization,” in Workshop at the European conference on computer vision, 2014, pp. 474–490.
  • 20. W. Zhang, Y. L. Murphey, T. Wang, and Q. Xu, “Driver yawning detection based on deep convolutional neural learning and robust nose tracking,” in Neural Networks (IJCNN), 2015 International Joint Conference on, 2015, pp. 1–8.
  • 21. Y.-J. Han, W. Kim, and J.-S. Park, “Efficient Eye-Blinking Detection on Smartphones: A Hybrid Approach Based on Deep Learning,” Mob. Inf. Syst., vol. 2018, 2018.
  • 22. Z. Cao, G. Hidalgo, T. Simon, S.-E. Wei, and Y. Sheikh, “OpenPose: realtime multi-person 2D pose estimation using Part Affinity Fields,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 43, no. 1, pp. 172–186, 2019.
  • 23. D. Wu, N. Sharma, and M. Blumenstein, “Recent advances in video-based human action recognition using deep learning: a review,” in Neural Networks (IJCNN), 2017 International Joint Conference on, 2017, pp. 2865–2872.
  • 24. R. Tsai, “A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses,” IEEE J. Robot. Autom., vol. 3, no. 4, pp. 323–344, 1987.
  • 25. P. F. Sturm and S. J. Maybank, “On plane-based camera calibration: A general algorithm, singularities, applications,” in Computer Vision and Pattern Recognition, 1999. IEEE Computer Society Conference on, 1999, vol. 1, pp. 432–437.
  • 26. R. Hartley and A. Zisserman, Multiple view geometry in computer vision. Cambridge university press, 2003.
  • 27. J. Heikkila and O. Silven, “A four-step camera calibration procedure with implicit image correction,” in Computer Vision and Pattern Recognition, 1997. Proceedings., 1997 IEEE Computer Society Conference on, 1997, pp. 1106–1112.
  • 28. J. Heikkila, “Geometric camera calibration using circular control points,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 10, pp. 1066–1077, 2000.
  • 29. H.-S. Fang, S. Xie, Y.-W. Tai, and C. Lu, “Rmpe: Regional multi-person pose estimation,” in Proceedings of the IEEE international conference on computer vision, 2017, pp. 2334–2343.
  • 30. K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask r-cnn,” in Proceedings of the IEEE international conference on computer vision, 2017, pp. 2961–2969.
  • 31. J. Wang et al., “Deep high-resolution representation learning for visual recognition,” IEEE Trans. Pattern Anal. Mach. Intell., 2020.
  • 32. L. Pishchulin et al., “Deepcut: Joint subset partition and labeling for multi person pose estimation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 4929–4937.
  • 33. H. Hirschmuller and S. Gehrig, “Stereo matching in the presence of sub-pixel calibration errors,” in 2009 IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 437–444.
  • 34. T. Yang, Q. Zhao, X. Wang, and Q. Zhou, “Sub-Pixel Chessboard Corner Localization for Camera Calibration and Pose Estimation,” Applied Sciences, vol. 8, no. 11, 2018, doi: 10.3390/app8112118.
  • 35. M. Andriluka, L. Pishchulin, P. Gehler, and B. Schiele, “2d human pose estimation: New benchmark and state of the art analysis,” in Proceedings of the IEEE Conference on computer Vision and Pattern Recognition, 2014, pp. 3686–3693.
  • 36. M. Kocabas, S. Karagoz, and E. Akbas, “Multiposenet: Fast multi-person pose estimation using pose residual network,” in Proceedings of the European conference on computer vision (ECCV), 2018, pp. 417–433.
Notes
Record developed with funds from the Polish Ministry of Education and Science (MEiN), agreement no. SONP/SP/546092/2022, under the programme “Społeczna odpowiedzialność nauki” (Social Responsibility of Science), module: Popularisation of Science and Promotion of Sport (2022-2023).
Document type
YADDA identifier
bwmeta1.element.baztech-27aa720a-b9a4-4e43-8cf0-8c520bf0c44f