Article title

Improving real-time performance of U-nets for machine vision in laser process control

Conference
Federated Conference on Computer Science and Information Systems (14th; September 1-4, 2019; Leipzig, Germany)
Publication language
EN
Abstract
Many industrial machine vision problems, particularly real-time control of manufacturing processes such as laser cladding, require robust and fast image processing. The inherent disturbances in images acquired during these processes make classical segmentation algorithms unreliable. Among the many convolutional neural networks recently introduced to solve such difficult problems, U-Net balances simplicity with segmentation accuracy. However, it is too computationally intensive for many real-time processing pipelines. In this work, we present a method for identifying the most informative levels of detail in the U-Net. By processing the image only at the selected levels, we reduce the total computation time by 80% while still preserving adequate segmentation quality.
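The record does not include the paper's code, but the idea of keeping only a subset of a U-Net's levels of detail can be illustrated with a short sketch. Below is a minimal, hypothetical PyTorch example (PyTorch is the framework cited in reference 18); `PrunableUNet`, `num_levels`, and `base_ch` are invented names, and building the network at a reduced depth only approximates the paper's level-selection procedure, not the authors' actual implementation.

```python
# Hypothetical sketch, not the authors' code: a U-Net whose number of
# resolution levels is configurable, so the least informative levels
# can simply be omitted to cut computation.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with batch normalization and ReLU,
    # the standard U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class PrunableUNet(nn.Module):
    """U-Net with a configurable number of resolution levels.

    num_levels controls how many times the input is downsampled;
    constructing the network with fewer levels stands in for dropping
    the least informative levels of detail.
    """
    def __init__(self, in_ch=1, out_ch=2, base_ch=16, num_levels=4):
        super().__init__()
        chs = [base_ch * 2 ** i for i in range(num_levels + 1)]
        self.encoders = nn.ModuleList(
            conv_block(in_ch if i == 0 else chs[i - 1], chs[i])
            for i in range(num_levels)
        )
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(chs[-2], chs[-1])
        self.upsamplers = nn.ModuleList(
            nn.ConvTranspose2d(chs[i + 1], chs[i], 2, stride=2)
            for i in reversed(range(num_levels))
        )
        self.decoders = nn.ModuleList(
            conv_block(2 * chs[i], chs[i])
            for i in reversed(range(num_levels))
        )
        self.head = nn.Conv2d(chs[0], out_ch, 1)

    def forward(self, x):
        skips = []
        for enc in self.encoders:
            x = enc(x)
            skips.append(x)  # saved for the skip connection
            x = self.pool(x)
        x = self.bottleneck(x)
        for up, dec, skip in zip(self.upsamplers, self.decoders,
                                 reversed(skips)):
            x = dec(torch.cat([up(x), skip], dim=1))
        return self.head(x)

# A two-level model does a fraction of the work of a four-level one,
# at the cost of losing the coarser levels of detail.
full = PrunableUNet(num_levels=4)
fast = PrunableUNet(num_levels=2)
out = fast(torch.randn(1, 1, 256, 256))
print(out.shape)  # torch.Size([1, 2, 256, 256])
```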
Pages
29-33
Physical description
Bibliography: 18 items, figures, tables
Authors
  • Wrocław University of Science and Technology, ul. Wyb. Wyspianskiego 27, 50-370 Wrocław, Poland
  • Wrocław University of Science and Technology, ul. Wyb. Wyspianskiego 27, 50-370 Wrocław, Poland
Bibliography
  • 1. W. Rafajłowicz, P. Jurewicz, J. Reiner, and E. Rafajłowicz, “Iterative Learning of Optimal Control for Nonlinear Processes With Applications to Laser Additive Manufacturing,” IEEE Transactions on Control Systems Technology, pp. 1–8, 2018. https://doi.org/10.1109/TCST.2018.2865444
  • 2. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, 2015, pp. 234–241. http://dx.doi.org/10.1007/978-3-319-24574-4_28
  • 3. J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 3431–3440. https://doi.org/10.1109/TPAMI.2016.2572683
  • 4. H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, “Pyramid Scene Parsing Network,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 6230–6239.
  • 5. T. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, “Feature Pyramid Networks for Object Detection,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 936–944. https://doi.org/10.1109/CVPR.2017.106
  • 6. B. Hassibi and D. G. Stork, “Second order derivatives for network pruning: Optimal brain surgeon,” in Advances in neural information processing systems, 1993, pp. 164–171.
  • 7. S. Han, H. Mao, and W. J. Dally, “Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding,” presented at the International Conference on Learning Representations (ICLR 2016), 2016.
  • 8. H. Lu, R. Setiono, and H. Liu, “Effective data mining using neural networks,” IEEE Transactions on Knowledge and Data Engineering, vol. 8, no. 6, pp. 957–961, 1996. https://doi.org/10.1109/69.553163
  • 9. C. Liu et al., “Progressive Neural Architecture Search,” arXiv preprint, https://arxiv.org/abs/1712.00559, Dec. 2017.
  • 10. R. Luo, F. Tian, T. Qin, E. Chen, and T.-Y. Liu, “Neural Architecture Optimization,” in Advances in Neural Information Processing Systems 31, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, Eds. Curran Associates, Inc., 2018, pp. 7816–7827.
  • 11. P. Molchanov, S. Tyree, T. Karras, T. Aila, and J. Kautz, “Pruning Convolutional Neural Networks for Resource Efficient Inference,” presented at the International Conference on Learning Representations (ICLR 2017), 2017.
  • 12. M. Chevalier, N. Thome, M. Cord, J. Fournier, G. Henaff, and E. Dusch, “Low resolution convolutional neural network for automatic target recognition,” in 7th International Symposium on Optronics in Defence and Security, Paris, France, 2016.
  • 13. G. Larsson, M. Maire, and G. Shakhnarovich, “FractalNet: Ultra-Deep Neural Networks without Residuals,” arXiv preprint, https://arxiv.org/abs/1605.07648, May 2016.
  • 14. S. Ioffe and C. Szegedy, “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift,” in International Conference on Machine Learning, 2015, pp. 448–456.
  • 15. K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770–778. https://doi.org/10.1109/CVPR.2016.90
  • 16. D. P. Kingma and J. Ba, “Adam: A Method for Stochastic Optimization,” in 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015.
  • 17. P. Y. Simard, D. Steinkraus, and J. C. Platt, “Best practices for convolutional neural networks applied to visual document analysis,” in Seventh International Conference on Document Analysis and Recognition, 2003. Proceedings., 2003, pp. 958–963.
  • 18. A. Paszke et al., “Automatic differentiation in PyTorch,” in NIPS-W, 2017.
Notes
1. Track 1: Artificial Intelligence and Applications
2. Technical Session: 14th International Symposium Advances in Artificial Intelligence and Applications
3. Record created with funds from MNiSW under agreement No. 461252 within the programme "Społeczna odpowiedzialność nauki" (Social Responsibility of Science), module: popularization of science and promotion of sport (2020).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-88b3f2b7-86ba-4bec-986a-5ab103a55d81