Article title

Visual detection of milling surface roughness based on improved YOLOV5

Publication languages
EN
Abstracts
EN
Workpiece surface roughness measurement based on traditional machine vision technology faces numerous problems, such as complex index design, poor robustness to lighting conditions, and slow detection speed, which make it unsuitable for industrial production. To address these problems, this paper proposes an improved YOLOv5 method for milling surface roughness detection. The method automatically extracts image features, is more robust to lighting conditions, and detects faster. By introducing Coordinate Attention (CA), we effectively improved the model's detection accuracy for workpieces located at different positions. The experimental results demonstrate that the improved model achieves accurate surface roughness detection for moving workpieces under light intensities ranging from 592 to 1060 lux. The average precision of the model on the test set reaches 97.3%, and the detection speed reaches 36 frames per second.
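The Coordinate Attention mechanism cited in the abstract (Hou et al., 2021, ref. [20]) factorizes channel attention into two direction-aware maps, which is what lets the network stay sensitive to a workpiece's position in the frame. A minimal NumPy sketch of the idea for a single feature map follows; the weight shapes, reduction ratio `r`, and the use of plain matrix multiplies in place of 1x1 convolutions are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def coordinate_attention(x, w_reduce, w_h, w_w):
    """Sketch of Coordinate Attention for one (C, H, W) feature map.

    w_reduce: (C//r, C) shared reduction weights (stands in for a 1x1 conv);
    w_h, w_w: (C, C//r) weights producing the height- and width-attention maps.
    """
    C, H, W = x.shape
    # Direction-aware pooling: average over width and over height separately,
    # so each pooled vector keeps positional information along one axis.
    pool_h = x.mean(axis=2)                        # (C, H)
    pool_w = x.mean(axis=1)                        # (C, W)
    # Shared reduction over the concatenated coordinate descriptors.
    y = np.concatenate([pool_h, pool_w], axis=1)   # (C, H+W)
    y = np.maximum(w_reduce @ y, 0.0)              # (C//r, H+W), ReLU
    y_h, y_w = y[:, :H], y[:, H:]
    # Two attention maps in (0, 1), broadcast back onto the feature map.
    a_h = sigmoid(w_h @ y_h)[:, :, None]           # (C, H, 1)
    a_w = sigmoid(w_w @ y_w)[:, None, :]           # (C, 1, W)
    return x * a_h * a_w                           # same shape as x

# Hypothetical toy usage with random weights.
rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 6, 2
x = rng.standard_normal((C, H, W))
w_reduce = 0.1 * rng.standard_normal((C // r, C))
w_h = 0.1 * rng.standard_normal((C, C // r))
w_w = 0.1 * rng.standard_normal((C, C // r))
out = coordinate_attention(x, w_reduce, w_h, w_w)
```

Because both attention maps lie in (0, 1), the module can only rescale activations, never amplify them; in the detection model it would sit inside the backbone and be trained end to end.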
Pages
531--548
Physical description
Bibliography: 22 items; photographs, figures, tables, charts, formulas
Authors
author
  • School of Mechanical and Control Engineering, Guilin University of Technology, Guilin, 541006, People's Republic of China
author
  • School of Mechanical and Control Engineering, Guilin University of Technology, Guilin, 541006, People's Republic of China
author
  • School of Mechanical and Control Engineering, Guilin University of Technology, Guilin, 541006, People's Republic of China
author
  • School of Mechanical and Control Engineering, Guilin University of Technology, Guilin, 541006, People's Republic of China
author
  • School of Mechanical Engineering, Yangzhou University, Yangzhou, 225009, People's Republic of China
References
  • [1] Kiran, M. B., Ramamoorthy, B., & Radhakrishnan, V. (1998). Evaluation of surface roughness by vision system. International Journal of Machine Tools and Manufacture, 38(5-6), 685-690. https://doi.org/10.1016/S0890-6955(97)00118-1
  • [2] Gadelmawla, E. S. (2004). A vision system for surface roughness characterization using the gray-level co-occurrence matrix. NDT & E International, 37(7), 577-588. https://doi.org/10.1016/j.ndteint.2004.03.004
  • [3] Huaian, Y. I., Jian, L. I. U., Enhui, L. U., & Peng, A. O. (2016). Measuring grinding surface roughness based on the sharpness evaluation of colour images. Measurement Science and Technology, 27(2), 025404. https://doi.org/10.1088/0957-0233/27/2/025404
  • [4] Zhang, H., Liu, J., Lu, E., Suo, X., & Chen, N. (2019). A novel surface roughness measurement method based on the red and green aliasing effect. Tribology International, 131, 579-590. https://doi.org/10.1016/j.triboint.2018.11.013
  • [5] Somthong, T., & Yang, Q. (2016, May). Surface roughness measurement using photometric stereo method with coordinate measuring machine. In 2016 IEEE International Instrumentation and Measurement Technology Conference Proceedings (pp. 1-6). IEEE. https://doi.org/10.1109/I2MTC.2016.7520329
  • [6] Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2017). ImageNet classification with deep convolutional neural networks. Communications of the ACM, 60(6), 84-90. https://doi.org/10.1145/3065386
  • [7] Rifai, A. P., Aoyama, H., Tho, N. H., Dawal, S. Z. M., & Masruroh, N. A. (2020). Evaluation of turned and milled surfaces roughness using convolutional neural network. Measurement, 161, 107860. https://doi.org/10.1016/j.measurement.2020.107860
  • [8] He, Y., Zhang, W., Li, Y. F., Wang, Y. L., Wang, Y., & Wang, S. L. (2021). An approach for surface roughness measurement of helical gears based on image segmentation of region of interest. Measurement, 183, 109905. https://doi.org/10.1016/j.measurement.2021.109905
  • [9] Su, J., Yi, H., Ling, L., Wang, S., Jiao, Y., & Niu, Y. (2022). A surface roughness grade recognition model for milled workpieces based on deep transfer learning. Measurement Science and Technology, 33(4), 045014. https://doi.org/10.1088/1361-6501/ac3f86
  • [10] Li, W., Li, F., Luo, Y., & Wang, P. (2020, December). Deep domain adaptive object detection: a survey. In 2020 IEEE Symposium Series on Computational Intelligence (SSCI) (pp. 1808-1813). IEEE. https://doi.org/10.1109/SSCI47803.2020.9308604
  • [11] Zou, Z., Chen, K., Shi, Z., Guo, Y., & Ye, J. (2023). Object detection in 20 years: A survey. Proceedings of the IEEE. https://doi.org/10.48550/arXiv.1905.05055
  • [12] Wu, X., Sahoo, D., & Hoi, S. C. (2020). Recent advances in deep learning for object detection. Neurocomputing, 396, 39-64. https://doi.org/10.1016/j.neucom.2020.01.085
  • [13] Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 779-788). https://doi.org/10.1109/CVPR.2016.91
  • [14] Wang, C. Y., Liao, H. Y. M., Wu, Y. H., Chen, P. Y., Hsieh, J. W., & Yeh, I. H. (2020). CSPNet: A new backbone that can enhance learning capability of CNN. In Proceedings of The IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (pp. 390-391).
  • [15] Lin, T. Y., Dollár, P., Girshick, R., He, K., Hariharan, B., & Belongie, S. (2017). Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2117-2125).
  • [16] Liu, S., Qi, L., Qin, H., Shi, J., & Jia, J. (2018). Path aggregation network for instance segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 8759-8768).
  • [17] Zhao, Z., Yang, X., Zhou, Y., Sun, Q., Ge, Z., & Liu, D. (2021). Real-time detection of particle board surface defects based on improved YOLOV5 target detection. Scientific Reports, 11(1), 21777. https://doi.org/10.1038/s41598-021-01084-x
  • [18] Hu, J., Shen, L., Sun, G., & Albanie, S. (2020). Squeeze-and-excitation networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(8), 2011-2023. https://doi.org/10.1109/TPAMI.2019.2913372
  • [19] Woo, S., Park, J., Lee, J. Y., & Kweon, I. S. (2018). CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 3-19). https://doi.org/10.1007/978-3-030-01234-2_1
  • [20] Hou, Q., Zhou, D., & Feng, J. (2021). Coordinate attention for efficient mobile network design. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 13713-13722). https://doi.org/10.48550/arXiv.2103.02907
  • [21] Ruder, S. (2016). An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747.
  • [22] He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 770-778). https://doi.org/10.1109/CVPR.2016.90
Notes
1. This work was supported by the National Natural Science Foundation of China (NSFC) (Grant No. 52065016).
2. The record was compiled with funds from MNiSW, agreement no. SONP/SP/546092/2022, under the programme "Social Responsibility of Science" - module: Popularisation of science and promotion of sport (2024).
Document type
YADDA identifier
bwmeta1.element.baztech-0c38338c-3d69-4518-81cb-539f4a2cb25d