Article title

Ensemble of classifiers based on CNN for increasing generalization ability in face image recognition

Content
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
The paper considers the problem of increasing the generalization ability of classification systems by creating an ensemble of classifiers based on the CNN architecture. Different structures of the ensemble are considered and compared. Deep learning plays an important role in the developed system. The numerical descriptors created in the last locally connected convolutional layer of the CNN, flattened into a vector, are subjected to several different selection mechanisms. Each of them chooses an independent set of features, selected according to the applied assessment technique. The resulting feature sets are combined with three classifiers: softmax, support vector machine, and a random forest of decision trees. All of them perform the same classification task simultaneously, and their results are integrated into the final verdict of the ensemble. Different arrangements of the ensemble are considered and tested on the recognition of facial images. Two databases are used in the experiments: one composed of 68 classes of greyscale images and the other of 276 classes of color images. The experimental results show a considerable improvement in class recognition resulting from the application of a properly designed ensemble.
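The pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: synthetic vectors stand in for the flattened CNN descriptors, mutual-information ranking stands in for the paper's feature-selection mechanisms, and soft-voting integration is one of several possible fusion schemes; all names and parameter values are assumptions.

```python
# Sketch of the described ensemble: each member applies its own feature
# selection to the flattened CNN descriptors, then a softmax classifier,
# an SVM, and a random forest vote on the final class.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for flattened CNN descriptors (one vector per face image).
X, y = make_classification(n_samples=600, n_features=256, n_informative=40,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def member(clf):
    # Each ensemble member pairs a feature-selection step with a classifier.
    return make_pipeline(StandardScaler(),
                         SelectKBest(mutual_info_classif, k=60),
                         clf)

ensemble = VotingClassifier(
    estimators=[
        ("softmax", member(LogisticRegression(max_iter=1000))),   # softmax
        ("svm", member(SVC())),                                   # support vector machine
        ("rf", member(RandomForestClassifier(random_state=0))),   # random forest
    ],
    voting="hard",  # majority vote integrates the members' verdicts
)

ensemble.fit(X_tr, y_tr)
print(f"ensemble accuracy: {ensemble.score(X_te, y_te):.3f}")
```

Hard (majority) voting is used here for simplicity; the paper compares several arrangements of the ensemble, and a weighted or soft-voting integration is an equally valid variant.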
Year
Pages
art. no. e141004
Physical description
Bibliography: 27 items, figures, tables
Authors
  • Faculty of Electrical Engineering, Warsaw University of Technology, Koszykowa 75, 00-662 Warszawa, Poland
  • Faculty of Electronic Engineering, Military University of Technology, gen. S. Kaliskiego 2, 00-908 Warszawa, Poland
Bibliography
  • [1] T. Poggio and Q. Liao, “Theory I: Deep networks and the curse of dimensionality,” Bull. Pol. Acad. Sci. Tech. Sci., vol. 66, no. 6, pp. 761–773, 2018, doi: 10.24425/bpas.2018.125924.
  • [2] Q. Zheng, M. Yang, J. Yang, Q. Zhang, and X. Zhang, “Improvement of generalization ability of deep CNN via implicit regularization in two-stage training process,” IEEE Access, vol. 6, pp. 15844–15869, 2018, doi: 10.1109/ACCESS.2018.2810849.
  • [3] P. Zhou and J. Feng, “Understanding generalization, optimization performance of deep CNNs,” in Proceedings of the 35th International Conference on Machine Learning (PMLR 80), Stockholm, Sweden, 2018.
  • [4] V. Vapnik, Statistical learning theory, Wiley, New York, 1998.
  • [5] B. Swiderski, L. Gielata, P. Olszewski, S. Osowski, and M. Kołodziej, “Deep neural system for supporting tumor recognition of mammograms using modified GAN,” Expert Syst. Appl., vol. 164, pp. 1–10, 2021, doi: 10.1016/j.eswa.2020.113968.
  • [6] L. Kuncheva, Combining pattern classifiers: methods and algorithms, Wiley, New York, 2004.
  • [7] H. Bonab and F. Can, “Less is more: a comprehensive framework for the number of components of ensemble classifiers,” IEEE Trans. Neural Networks Learn. Syst., vol. 14, pp. 2735–2745, 2018, doi: 10.1109/TNNLS.2018.2886341.
  • [8] I. Goodfellow, Y. Bengio, and A. Courville, Deep learning, MIT Press, 2016.
  • [9] F. Chollet, Deep Learning with Python, Manning Publications Co., 2017.
  • [10] A. Krizhevsky, I. Sutskever, and G. Hinton, “ImageNet classification with deep convolutional neural networks,” in NIPS’12: Proceedings of the 25th International Conference on Neural Information Processing Systems – Volume 1, 2012, pp. 1097–1105, doi: 10.1145/3065386.
  • [11] P.N. Tan, M. Steinbach, and V. Kumar, Introduction to data mining, Boston: Pearson Education Inc., 2006.
  • [12] R. Robnik-Sikonja and I. Kononenko, “Theoretical and empirical analysis of ReliefF and RReliefF,” Mach. Learn., vol. 53, pp. 23–69, 2003, doi: 10.1023/A:1025667309714.
  • [13] W. Yang, K. Wang, and W. Zuo, “Neighborhood component feature selection for high-dimensional data,” J. Comput., vol. 7, pp. 161–168, 2012, doi: 10.4304/jcp.7.1.161-168.
  • [14] B. Schölkopf and A. Smola, Learning with kernels, Cambridge, MIT Press, MA, 2002.
  • [15] L. Breiman, “Random forests,” Mach. Learn., vol. 45, no. 1, pp. 5–32, 2001, doi: 10.1023/A:1010933404324.
  • [16] Matlab user manual, MathWorks, Natick, USA, 2021a.
  • [17] C.C. Loy et al., “Editorial: Special issue on deep learning for face analysis,” Int. J. Comput. Vision, vol. 127, pp. 533–536, 2019, doi: 10.1007/s11263-019-01179-z.
  • [18] M. Wang and W. Deng, “Deep face recognition: a survey,” Neurocomputing, vol. 429, pp. 215–244, 2021, doi: 10.1016/j.neucom.2020.10.081.
  • [19] P.N. Belhumeur, J.P. Hespanha, and D.J. Kriegman, “Eigenfaces vs. Fisherfaces: recognition using class specific linear projection,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 19, pp. 711–720, 1997, doi: 10.1109/34.598228.
  • [20] K. Siwek and S. Osowski, “Comparison of methods of feature generation for face recognition,” Przegląd Elektrotechniczny, vol. 90, pp. 206–209, 2014, doi: 10.12915/pe.2014.04.49.
  • [21] M.M. Ghazi and H.K. Ekenel, “A comprehensive analysis of deep learning based representation for face recognition,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2016, doi: 10.1109/CVPRW.2016.20.
  • [22] S. Milborrow, J. Morkel, and F. Nicolls, “The MUCT landmarked face database,” Pattern Recognition Association of South Africa – database 2010.
  • [23] H.C. Peng, F. Long, and C. Ding, “Feature selection based on mutual information: criteria of max-dependency, max-relevance, and min-redundancy,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 8, pp. 1226–1238, 2005, doi: 10.1109/TPAMI.2005.159.
  • [24] B. Belavadi, K.V.M. Prashanth, G. Sanjay, and J. Shruthi, “Gabor Features for Single Sample Face Recognition on Multicolor Space Domain,” 2017 International Conference on Recent Advances in Electronics and Communication Technology, 2017, doi: 10.1109/ICRAECT.2017.23.
  • [25] T. Marciniak, A. Chmielewska, R. Weychan, M. Parzych, and A. Dabrowski, “Influence of low resolution of images on reliability of face detection and recognition,” Multimed. Tools Appl., vol. 74, pp. 4329–4349, 2015, doi: 10.1007/s11042-013-1568-8.
  • [26] J.A.C. Moreano and N.B. La Serna Palomino, “Efficient Technique for Facial Image Recognition with Support Vector Machines in 2D Images with Cross-Validation in Matlab,” WSEAS Trans. Syst. Control, vol. 15, pp. 175–183, 2020, doi: 10.37394/23203.2020.15.18.
  • [27] M. Grupp, P. Kopp, P. Huber, and M. Ratsch, “A 3D face modelling approach for pose-invariant face recognition in a human robot environment,” in RoboCup 2016: Robot World Cup XX. RoboCup 2016. Lecture Notes in Computer Science, S. Behnke et al. Eds., Springer, vol 9776, pp. 121–134, 2017, doi: 10.1007/978-3-319-68792-6_10.
Notes
Record created with funds from the Ministry of Education and Science (MEiN), agreement no. SONP/SP/546092/2022, under the “Społeczna odpowiedzialność nauki” (Social Responsibility of Science) programme – module: Popularisation of science and promotion of sport (2022–2023).
Document type
YADDA identifier
bwmeta1.element.baztech-2bba7169-e2c5-4129-b4e9-695fc54b4b98