Article title

A deep learning method for hard-hat-wearing detection based on head center localization

Publication languages
EN
Abstracts
EN
In recent years, much attention has been paid to deep learning methods in the context of vision-based construction site safety systems. However, there is still more to be done to establish the relationship between supervised construction workers and their essential personal protective equipment, such as hard hats. This article proposes a deep learning method that combines object detection, head center localization, and simple rule-based reasoning. In tests, this solution surpassed previous methods based on the relative bounding-box positions of different instances and on the direct detection of hard-hat wearers and non-wearers, achieving an MS COCO-style overall AP of 67.5% compared with 66.4% and 66.3% for the approaches mentioned above, and a class-specific AP for hard-hat non-wearers of 64.1% compared with 63.0% and 60.3%. The results show that coupling deep learning methods with a humanly interpretable rule-based algorithm is better suited to detecting hard-hat non-wearers.
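The decision stage can be made concrete with a short sketch. The following is a minimal illustration, not the authors' implementation: it assumes hypothetical detector outputs (one localized head center per worker plus hard-hat bounding boxes) and one plausible form of the simple rule-based reasoning, namely that a worker counts as a hard-hat wearer when their head center falls inside at least one detected hard-hat box. All names (Box, classify_hard_hat_wearing) are illustrative assumptions.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Box:
    """Axis-aligned bounding box in pixel coordinates (hypothetical detector output)."""
    x1: float
    y1: float
    x2: float
    y2: float

    def contains(self, point: Tuple[float, float]) -> bool:
        x, y = point
        return self.x1 <= x <= self.x2 and self.y1 <= y <= self.y2


def classify_hard_hat_wearing(
    head_centers: List[Tuple[float, float]],
    hardhat_boxes: List[Box],
) -> List[bool]:
    """For each localized head center, return True (wearer) if it lies inside
    at least one detected hard-hat box, else False (non-wearer).

    Assumed rule only; the paper's exact reasoning step may differ.
    """
    return [
        any(box.contains(center) for box in hardhat_boxes)
        for center in head_centers
    ]


# Toy example with made-up detections: one head under a hard hat, one not.
if __name__ == "__main__":
    hats = [Box(100, 40, 160, 90)]
    heads = [(130.0, 70.0), (320.0, 75.0)]
    print(classify_hard_hat_wearing(heads, hats))  # [True, False]

A containment rule of this kind is trivially auditable by a human inspector, which is the interpretability advantage the abstract points to over directly detecting wearer and non-wearer classes.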
Year
Pages
art. no. e147340
Physical description
Bibliography: 57 items, figures, tables
Authors
  • Institute of Theoretical and Applied Informatics, Polish Academy of Sciences, 44-100 Gliwice, Poland
  • Institute of Theoretical and Applied Informatics, Polish Academy of Sciences, 44-100 Gliwice, Poland
  • Institute of Theoretical and Applied Informatics, Polish Academy of Sciences, 44-100 Gliwice, Poland
  • Institute of Theoretical and Applied Informatics, Polish Academy of Sciences, 44-100 Gliwice, Poland
  • Institute of Theoretical and Applied Informatics, Polish Academy of Sciences, 44-100 Gliwice, Poland
  • A. James Clark School of Engineering, University of Maryland, College Park, MD 20742-3021, USA
Bibliography
  • [1] Eurostat, “Accidents at work – statistics by economic activity,” 2020. [Online]. Available: https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Accidents_at_work_-_statistics_by_economic_activity.
  • [2] USDL, “National Census of Fatal Occupational Injuries in 2019,” pp. 1–9, 2020. [Online]. Available: https://www.bls.gov/news.release/pdf/cfoi.pdf.
  • [3] A. Colantonio, D. McVittie, J. Lewko, and J. Yin, “Traumatic brain injuries in the construction industry,” Brain Inj., vol. 23, no. 11, pp. 873–878, 2009, doi: 10.1080/02699050903036033.
  • [4] S. Konda, H.M. Tiesman, and A.A. Reichard, “Fatal traumatic brain injuries in the construction industry, 2003-2010,” Am. J. Ind. Med., vol. 59, no. 3, pp. 212–220, 2016, doi: 10.1002/ajim.22557.
  • [5] C.A. Taylor, J.M. Bell, M.J. Breiding, and L. Xu, “Traumatic brain injury-related emergency department visits, hospitalizations, and deaths - United States, 2007 and 2013,” MMWR Surv. Summ., vol. 66, no. 9, pp. 1–16, 2017, doi: 10.15585/mmwr.ss6609a1.
  • [6] A. Colantonio, D. Mroczek, J. Patel, J. Lewko, J. Fergenbaum, and R. Brison, “Examining occupational traumatic brain injury in Ontario,” Can. J. Public Health-Rev. Can. Sante Publ., vol. 101, no. S1, pp. S58–S62, Mar. 2010, doi: 10.1007/bf03403848.
  • [7] A.M. Salem, B.A. Jaumally, K. Bayanzay, K. Khoury, and A. Torkaman, “Traumatic brain injuries from work accidents: A retrospective study,” Occup. Med., vol. 63, no. 5, pp. 358–360, 2013, doi: 10.1093/occmed/kqt037.
  • [8] EU-OSHA, “Directive 89/656/EEC – use of personal protective equipment,” 1989.
  • [9] G.P. Jirka and W. Thompson, “Personal protective equipment,” pp. 493–508, 2009, doi: 10.1201/9781420071825-29.
  • [10] M.-W. Park, N. Elsafty, and Z. Zhu, “Hardhat-Wearing Detection for Enhancing On-Site Safety of Construction Workers,” J. Constr. Eng. Manage., vol. 141, no. 9, p. 04015024, 2015, doi: 10.1061/(asce)co.1943-7862.0000974.
  • [11] W. Tun, J.-H. Kim, Y. Jeon, S. Kim, and J.-W. Lee, “Safety Helmet and Vest Wearing Detection Approach by Integrating YOLO and SVM for UAV,” in Korean Society for Aeronautical and Space Sciences 2020 Spring Conference, 2020. [Online]. Available: https://www.dbpia.co.kr/Journal/articleDetail?nodeId=NODE10442178.
  • [12] M. Memarzadeh, A. Heydarian, M. Golparvar-Fard, and J.C. Niebles, “Real-time and automated recognition and 2D tracking of construction workers and equipment from site video streams,” Congress on Computing in Civil Engineering, Proceedings, pp. 429–436, 2012, doi: 10.1061/9780784412343.0054.
  • [13] Q. Fang, H. Li, X. Luo, L. Ding, H. Luo, T.M. Rose, and W. An, “Detecting non-hardhat-use by a deep learning method from far-field surveillance videos,” Autom. Constr., vol. 85, pp. 1–9, 2018, doi: 10.1016/j.autcon.2017.09.018.
  • [14] N.D. Nath, A.H. Behzadan, and S.G. Paal, “Deep learning for site safety: Real-time detection of personal protective equipment,” Autom. Constr., vol. 112, p. 103085, 2020, doi: 10.1016/j.autcon.2020.103085.
  • [15] J. Shen, X. Xiong, Y. Li, W. He, P. Li, and X. Zheng, “Detecting safety helmet wearing on construction sites with bounding-box regression and deep transfer learning,” Comput.-Aided Civil Infrastruct. Eng., vol. 36, no. 2, pp. 180–196, 2021, doi: 10.1111/mice.12579.
  • [16] S. Chen and K. Demachi, “A vision-based approach for ensuring proper use of personal protective equipment (PPE) in decommissioning of fukushima daiichi nuclear power station,” Appl. Sci., vol. 10, no. 15, p. 5129, 2020, doi: 10.3390/app10155129.
  • [17] S. Chen and K. Demachi, “Towards on-site hazards identification of improper use of personal protective equipment using deep learning-based geometric relationships and hierarchical scene graph,” Autom. Constr., vol. 125, p. 103619, 2021, doi: 10.1016/j.autcon.2021.103619.
  • [18] R. Xiong and P. Tang, “Pose guided anchoring for detecting proper use of personal protective equipment,” Autom. Constr., vol. 130, p. 103828, 2021, doi: 10.1016/j.autcon.2021.103828.
  • [19] M. Ochmański, G. Modoni, and J. Bzówka, “Prediction of the diameter of jet grouting columns with artificial neural networks,” Soils Found., vol. 55, no. 2, pp. 425–436, Apr. 2015, doi: 10.1016/j.sandf.2015.02.016.
  • [20] J. Shen, X. Xiong, Z. Xue, and Y. Bian, “A convolutional neural-network-based pedestrian counting model for various crowded scenes,” Comput.-Aided Civil Infrastruct. Eng., vol. 34, no. 10, pp. 897–914, 2019, doi: 10.1111/mice.12454.
  • [21] X. Luo, H. Li, Y. Yu, C. Zhou, and D. Cao, “Combining deep features and activity context to improve recognition of activities of workers in groups,” Comput.-Aided Civil Infrastruct. Eng., vol. 35, no. 9, pp. 965–978, 2020, doi: 10.1111/mice.12538.
  • [22] Y. Gao and K.M. Mosalam, “Deep Transfer Learning for Image-Based Structural Damage Recognition,” Comput.-Aided Civil Infrastruct. Eng., vol. 33, no. 9, pp. 748–768, Sep. 2018, doi: 10.1111/mice.12363.
  • [23] M. Żarski, B. Wójcik, and J.A. Miszczak, “KrakN: Transfer Learning framework for thin crack detection in infrastructure maintenance,” arXiv, Apr. 2020. [Online]. Available: http://arxiv.org/abs/2004.12337.
  • [24] W. Fang, L. Ding, P. E. Love, H. Luo, H. Li, F. Peña-Mora, B. Zhong, and C. Zhou, “Computer vision applications in construction safety assurance,” Autom. Constr., vol. 110, p. 103013, 2020, doi: 10.1016/j.autcon.2019.103013.
  • [25] W. Fang, P.E. Love, H. Luo, and L. Ding, “Computer vision for behaviour-based safety in construction: A review and future directions,” Adv. Eng. Inform., vol. 43, p. 100980, 2020, doi: 10.1016/j.aei.2019.100980.
  • [26] B.H. Guo, Y. Zou, Y. Fang, Y.M. Goh, and P.X. Zou, “Computer vision technologies for safety science and management in construction: A critical review and future research directions,” Saf. Sci., vol. 135, p. 105130, 2021, doi: 10.1016/j.ssci.2020.105130.
  • [27] Q. Fang, H. Li, X. Luo, L. Ding, T.M. Rose, W. An, and Y. Yu, “A deep learning-based method for detecting non-certified work on construction sites,” Adv. Eng. Inform., vol. 35, pp. 56–68, 2018, doi: 10.1016/j.aei.2018.01.001.
  • [28] W. Fang, L. Ding, H. Luo, and P. E. Love, “Falls from heights: A computer vision-based approach for safety harness detection,” Autom. Constr., vol. 91, pp. 53–61, 2018, doi: 10.1016/j.autcon.2018.02.018.
  • [29] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, no. 6, pp. 1137–1149, Jun. 2017, doi: 10.1109/TPAMI.2016.2577031. [Online]. Available: http://ieeexplore.ieee.org/document/7485869/.
  • [30] W. Fang, B. Zhong, N. Zhao, P. E. Love, H. Luo, J. Xue, and S. Xu, “A deep learning-based approach for mitigating falls from height with computer vision: Convolutional neural network,” Adv. Eng. Inform., vol. 39, pp. 170–177, 2019, doi: 10.1016/j.aei.2018.12.005.
  • [31] K. He, G. Gkioxari, P. Dollar, and R. Girshick, “Mask R-CNN,” in Proceedings of the IEEE International Conference on Computer Vision, vol. 2017-October, 2017, pp. 2980–2988, doi: 10.1109/ICCV.2017.322.
  • [32] Y. Zhao, Q. Chen, W. Cao, J. Yang, J. Xiong, and G. Gui, “Deep Learning for Risk Detection and Trajectory Tracking at Construction Sites,” IEEE Access, vol. 7, pp. 30905–30912, 2019, doi: 10.1109/ACCESS.2019.2902658.
  • [33] J. Redmon and A. Farhadi, “YOLOv3: An Incremental Improvement,” 2018. [Online]. Available: http://arxiv.org/abs/1804.02767.
  • [34] R. Wei, P.E. Love, W. Fang, H. Luo, and S. Xu, “Recognizing people’s identity in construction sites with computer vision: A spatial and temporal attention pooling network,” Adv. Eng. Inform., vol. 42, p. 100981, 2019, doi: 10.1016/j.aei.2019.100981.
  • [35] S. Tang, D. Roberts, and M. Golparvar-Fard, “Human-object interaction recognition for automatic construction site safety inspection,” Autom. Constr., vol. 120, p. 103356, 2020, doi: 10.1016/j.autcon.2020.103356.
  • [36] H. Luo, J. Liu, W. Fang, P.E. Love, Q. Yu, and Z. Lu, “Real-time smart video surveillance to manage safety: A case study of a transport mega-project,” Adv. Eng. Inform., vol. 45, p. 101100, Aug. 2020, doi: 10.1016/j.aei.2020.101100.
  • [37] N. Khan, M.R. Saleem, D. Lee, M.W. Park, and C. Park, “Utilizing safety rule correlation for mobile scaffolds monitoring leveraging deep convolution neural networks,” Comput. Ind., vol. 129, p. 103448, 2021, doi: 10.1016/j.compind.2021.103448.
  • [38] B.E. Mneymneh, M. Abbas, and H. Khoury, “Vision-Based Framework for Intelligent Monitoring of Hardhat Wearing on Construction Sites,” J. Comput. Civil. Eng., vol. 33, no. 2, p. 04018066, 2019, doi: 10.1061/(asce)cp.1943-5487.0000813.
  • [39] J. Wu, N. Cai, W. Chen, H. Wang, and G. Wang, “Automatic detection of hardhats worn by construction personnel: A deep learning approach and benchmark dataset,” Autom. Constr., vol. 106, p. 102894, 2019, doi: 10.1016/j.autcon.2019.102894.
  • [40] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.Y. Fu, and A.C. Berg, “SSD: Single shot multibox detector,” in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 9905 LNCS, 2016, pp. 21–37, doi: 10.1007/978-3-319-46448-0_2.
  • [41] L. Wang, L. Xie, P. Yang, Q. Deng, S. Du, and L. Xu, “Hardhat-wearing detection based on a lightweight convolutional neural network with multi-scale features and a top-down module,” Sensors, vol. 20, no. 7, 2020, doi: 10.3390/s20071868.
  • [42] A.G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications,” 2017. [Online]. Available: http://arxiv.org/abs/1704.04861.
  • [43] F. Zhou, H. Zhao, and Z. Nie, “Safety Helmet Detection Based on YOLOv5,” in Proceedings of 2021 IEEE International Conference on Power Electronics, Computer Applications, ICPECA 2021, 2021, pp. 6–11, doi: 10.1109/ICPECA51329.2021.9362711.
  • [44] S. Cai, W. Zuo, and L. Zhang, “Higher-Order Integration of Hierarchical Convolutional Activations for Fine-Grained Visual Categorization,” in Proceedings of the IEEE International Conference on Computer Vision, vol. 2017-October, 2017, pp. 511–520, doi: 10.1109/ICCV.2017.63.
  • [45] W. Luo, X. Yang, X. Mo, Y. Lu, L. Davis, J. Li, J. Yang, and S.N. Lim, “Cross-X learning for fine-grained visual categorization,” in Proceedings of the IEEE International Conference on Computer Vision, vol. 2019-October, 2019, pp. 8241–8250, doi: 10.1109/ICCV.2019.00833.
  • [46] J. Han, X. Yao, G. Cheng, X. Feng, and D. Xu, “P-CNN: Part-Based Convolutional Neural Networks for Fine-Grained Visual Categorization,” IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1–1, 2019, doi: 10.1109/tpami.2019.2933510.
  • [47] T.Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, “Feature pyramid networks for object detection,” in Proceedings – 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, vol. 2017-January. IEEE, Jul. 2017, pp. 936–944, doi: 10.1109/CVPR.2017.106.
  • [48] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2016-December. IEEE, Jun. 2016, pp. 770–778, doi: 10.1109/CVPR.2016.90. [Online]. Available: http://ieeexplore.ieee.org/document/7780459/.
  • [49] S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He, “Aggregated residual transformations for deep neural networks,” in Proceedings – 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, vol. 2017-January, 2017, pp. 5987–5995, doi: 10.1109/CVPR.2017.634.
  • [50] J. Wu, H. Zheng, B. Zhao, Y. Li, B. Yan, R. Liang, W. Wang, S. Zhou, G. Lin, Y. Fu, Y. Weng, and Y. Wang, “Large-scale datasets for going deeper in image understanding,” in 2019 IEEE International Conference on Multimedia and Expo (ICME), 2019, pp. 1480–1485, doi: 10.1109/ICME.2019.00256.
  • [51] L. Xie, “Hardhat,” 2019, doi: 10.7910/DVN/7CBGOS. [Online]. Available: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/7CBGOS.
  • [52] T.-Y. Lin, M. Maire, S. Belongie, L. Bourdev, R. Girshick, J. Hays, P. Perona, D. Ramanan, C.L. Zitnick, and P. Dollár, “Microsoft COCO: Common Objects in Context,” 2015.
  • [53] Y. Wu, A. Kirillov, F. Massa, W.-Y. Lo, and R. Girshick, “Detectron2,” 2019. [Online]. Available: https://github.com/facebookresearch/detectron2.
  • [54] S. Bianco, R. Cadene, L. Celona, and P. Napoletano, “Benchmark analysis of representative deep neural network architectures,” IEEE Access, vol. 6, pp. 64270–64277, 2018, doi: 10.1109/access.2018.2877890.
  • [55] S. Guo, D. Li, Z. Wang, and X. Zhou, “Safety Helmet Detection Method Based on Faster R-CNN,” in Communications in Computer and Information Science, X. Sun, J. Wang, and E. Bertino, Eds., vol. 1253 CCIS, 2020, pp. 423–434, doi: 10.1007/978-981-15-8086-4_40.
  • [56] D. Blalock, J. J. G. Ortiz, J. Frankle, and J. Guttag, “What is the State of Neural Network Pruning?” 2020. [Online]. Available: http://arxiv.org/abs/2003.03033.
  • [57] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L. C. Chen, “MobileNetV2: Inverted Residuals and Linear Bottlenecks,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2018, pp. 4510–4520, doi: 10.1109/CVPR.2018.00474.
Notes
Record developed with funds of the Polish Ministry of Science and Higher Education (MNiSW), agreement no. SONP/SP/546092/2022, under the “Social Responsibility of Science” programme, module: Popularization of science and promotion of sport (2024).
Document type
YADDA identifier
bwmeta1.element.baztech-8fae5080-f779-4423-85f5-2cf1cbf1848a