Article title

Texture recognition system based on the Deep Neural Network

Authors
Content
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
This paper presents a deep learning-based image texture recognition system. The methodology is bottom-up: a moving window is swept across the image to classify whether a given region belongs to one of the classes seen during training. The classification is performed by a Deep Neural Network (DNN) with a fixed architecture. The training process is fully automated, covering training data preparation, selection of the best training algorithm, and tuning of its hyper-parameters. The only human input to the system is the definition of the categories to be recognized and the generation of training samples (region markings) in an external application chosen by the user. The system is tested on road surface images, where its task is to assign image regions to different road categories (e.g. curb, road surface damage), and achieves accuracy of 90% and above.
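A minimal sketch (not the authors' code) of the bottom-up, sliding-window classification described in the abstract. The window size, stride, class names, and the classify_patch placeholder are illustrative assumptions; in the described system this step would be a trained, fixed-architecture DNN.

import numpy as np

WINDOW = 64          # assumed patch size in pixels
STRIDE = 32          # assumed step of the moving window
CLASSES = ["road_surface", "curb", "surface_damage"]  # example categories

def classify_patch(patch: np.ndarray) -> int:
    """Placeholder for the trained DNN: returns an index into CLASSES."""
    # A real system would run the fixed-architecture DNN on this patch.
    return int(patch.mean() * len(CLASSES)) % len(CLASSES)

def label_image(image: np.ndarray) -> list[tuple[int, int, str]]:
    """Sweep a moving window over the image and label each region."""
    labels = []
    for y in range(0, image.shape[0] - WINDOW + 1, STRIDE):
        for x in range(0, image.shape[1] - WINDOW + 1, STRIDE):
            patch = image[y:y + WINDOW, x:x + WINDOW]
            labels.append((y, x, CLASSES[classify_patch(patch)]))
    return labels

if __name__ == "__main__":
    img = np.random.rand(256, 256)   # stand-in for a road-surface image
    for y, x, cls in label_image(img)[:5]:
        print(f"region at ({y}, {x}) -> {cls}")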
Year
Pages
1503–1511
Physical description
Bibliography: 26 items, fig., tab.
Contributors
author
  • Poznan University of Technology, ul. Piotrowo 3A, 60-965 Poznan, Poland
Bibliography
  • [1] F. Zhou, J.F. Feng, and Q.Y. Shi, “Texture feature based on local Fourier transform”, Proceedings 2001 International Conference on Image Processing, Thessaloniki, Greece, 2001, vol. 2, pp. 610–613.
  • [2] P. Wu, Y. Choi, Y. Ro, and C. Won, “MPEG-7 Texture Descriptors”, Int. J. Image Graph. 1, 547–563 (2001).
  • [3] R. Kapela, A. Rybarczyk, P. Śniatała, and R. Rudnicki, “Hardware realization of the MPEG-7 Edge Histogram Descriptor”, Mixed Design of Integrated Circuits and Systems, MIXDES, Gdynia, Poland, 2006, pp. 675–678.
  • [4] R. Kapela and A. Rybarczyk, “A real-time shape description system based on MPEG-7 descriptors”, J. Syst. Architect. 53, 602–618 (2007).
  • [5] A. Abdelhafiz and Y. Mostafa, “Automatic texture mapping mega-projects”, J. Spat. Sci. 65(3), 467–479 (2020), doi: 10.1080/14498596.2018.1536002.
  • [6] T. Hermes and A. Miene, “Automatic Texture Classification by Visual Properties”, in: Classification and Information Processing at the Turn of the Millennium. Studies in Classification, Data Analysis, and Knowledge Organization, R. Decker, W. Gaul (eds), Springer, Berlin, Heidelberg, 2002, doi: 10.1007/978-3-642-57280-7_24.
  • [7] Lei Qin, Weiqiang Wang, Qingming Huang and Wen Gao, “Unsupervised Texture Classification: Automatically Discover and Classify Texture Patterns”, in 18th International Conference on Pattern Recognition (ICPR’06), Hong Kong, 2006, pp. 433–436, doi: 10.1109/ICPR.2006.1146.
  • [8] Y. Guo, G. Zhao, M. Pietikainen, and Z. Xu, “Descriptor Learning Based on Fisher Separation Criterion for Texture Classification”, in: Computer Vision – ACCV 2010. ACCV 2010. Lecture Notes in Computer Science, R. Kimmel, R. Klette, A. Sugimoto (eds), vol. 6494, pp. 185–198, Springer, Berlin, Heidelberg, 2010, doi: 10.1007/978-3-642-19318-7_15.
  • [9] R.M. Anwer, F.S. Khan, J. van de Weijer, M. Molinier, and J. Laaksonen, “Binary patterns encoded convolutional neural networks for texture recognition and remote sensing scene classification”, ISPRS-J. Photogramm. Remote Sens. 138, 74–85 (2018), doi: 10.1016/j.isprsjprs.2018.01.023.
  • [10] S. Basu et al., “Deep neural networks for texture classification – A theoretical analysis”, Neural Netw. 97, 173–182 (2018), doi: 10.1016/j.neunet.2017.10.001.
  • [11] G. Zhu, B. Li, S. Hong, and B. Mao, “Texture Recognition and Classification Based on Deep Learning”, Sixth International Conference on Advanced Cloud and Big Data (CBD), Lanzhou, 2018, pp. 344–348, doi: 10.1109/CBD.2018.00068.
  • [12] Y. Jia et al., “Caffe: Convolutional architecture for fast feature embedding”, arXiv preprint arXiv:1408.5093, 2014.
  • [13] X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier Neural Networks”, Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, JMLR Workshop and Conference Proceedings, PMLR, USA, vol. 15, pp. 315–323. http://proceedings.mlr.press/v15/glorot11a/glorot11a.pdf.
  • [14] X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks”, Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, PMLR, 2010, vol. 9, pp. 249–256, http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf.
  • [15] R. Kapela et al., “Asphalt surfaced pavement cracks detection based on histograms of oriented gradients”, 22nd International Conference Mixed Design of Integrated Circuits & Systems (MIXDES), Torun, 2015, pp. 579–584.
  • [16] M. Cimpoi, S. Maji, I. Kokkinos, S. Mohamed, and A. Vedaldi, “Describing Textures in the Wild”, Proceedings of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2014.
  • [17] J. Duchi, E. Hazan, and Y. Singer, “Adaptive subgradient methods for online learning and stochastic optimization”, J. Mach. Learn. Res. 12, 2121–2159 (2011).
  • [18] S.F. Ershad, “A New Benchmark Dataset for Texture Image Analysis and Surface Defect Detection”, CoRR, arXiv:1906.11561, 2019, doi: 10.13140/RG.2.2.33612.46722.
  • [19] R. Paget and D. Longstaff, “Nonparametric Markov random field model analysis of the MeasTex test suite”, Proceedings 15th International Conference on Pattern Recognition, ICPR-2000, Barcelona, Spain, 2000, vol. 3, pp. 927–930, doi: 10.1109/ICPR.2000.903696.
  • [20] F. Liu, Z. Tang, and J. Tang, “WLBP: Weber local binary pattern for local image description”, Neurocomputing 120, 325–335 (2013).
  • [21] N. Borodinov et al., “Machine learning-based multidomain processing for texture-based image segmentation and analysis”, Appl. Phys. Lett. 116, 044103 (2020), doi: 10.1063/1.5135328.
  • [22] D. Koundal, “Texture-based image segmentation using neutrosophic clustering”, IET Image Process. 11(8), 640–645 (2017), doi: 10.1049/iet-ipr.2017.0046.
  • [23] T. Markiewicz, Z. Swiderska-Chadaj, J. Gallego, G. Bueno, B. Grala, and M. Lorent, “Deep learning for damaged tissue detection and segmentation in Ki-67 brain tumor specimens based on the U-net model”, Bull. Pol. Ac.: Tech. 66(6), 849–856 (2018).
  • [24] T. Poggio and Q. Liao, “Theory I: Deep networks and the curse of dimensionality”, Bull. Pol. Ac.: Tech. 66(6) 761–773 (2018), doi: 10.24425/bpas.2018.125924.
  • [25] T. Poggio and Q. Liao, “Theory II: Deep learning and optimization”, Bull. Pol. Ac.: Tech. 66(6), 775–787 (2018), doi: 10.24425/bpas.2018.125925.
  • [26] M. Grochowski, A. Kwasigroch, and A. Mikołajczyk, “Selected technical issues of deep neural networks for image classification purposes”, Bull. Pol. Ac.: Tech. 67(2), 363–376 (2019).
Remarks
Record developed with funds from the Ministry of Science and Higher Education (MNiSW), agreement No. 461252, under the programme "Social Responsibility of Science" - module: Popularization of science and promotion of sport (2021).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-a000db71-d2b8-4aad-86a5-49b6aff9f816