Article title
Content
Full texts:
Identifiers
Title variants
Publication languages
Abstracts
Face Sketch Recognition (FSR) poses a severe challenge to conventional recognition paradigms, which were developed primarily to match face photos. The challenge stems mainly from the large texture discrepancy between face sketches, characterized by shape exaggeration, and face photos. In this paper, we propose a training-free synthesized face sketch recognition method based on the rank-level fusion of multiple Image Quality Assessment (IQA) metrics. The strengths of IQA metrics as a recognition engine are combined with rank-level fusion to boost the final recognition accuracy. By integrating multiple IQA metrics into the face sketch recognition framework, the proposed method simultaneously performs photo-sketch matching and evaluates the performance of face sketch synthesis methods. To test the recognition framework, five face sketch synthesis methods are used to generate sketches from face photos. We use the Borda count approach to fuse four IQA metrics at the rank level, namely the structural similarity index (SSIM), the feature similarity index (FSIM), visual information fidelity (VIF), and gradient magnitude similarity deviation (GMSD). Experimental results and comparisons with state-of-the-art methods illustrate the competitiveness of the proposed synthesized face sketch recognition framework.
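To make the rank-level fusion step concrete, below is a minimal Python sketch of Borda-count fusion over IQA scores; the function name, array shapes, and toy data are illustrative assumptions rather than the authors' implementation. Each metric ranks the gallery identities independently, every rank is converted into Borda points, and the identity with the highest point total becomes the fused rank-1 match (GMSD measures deviation, so lower values indicate higher similarity):

import numpy as np

def borda_fuse(score_matrix, higher_is_better):
    """Fuse per-metric gallery scores with the Borda count.

    score_matrix: (n_metrics, n_gallery) array; row m holds metric m's
        scores of one probe against every gallery identity.
    higher_is_better: per-metric booleans (False for deviation-style
        metrics such as GMSD, where lower means more similar).
    Returns gallery indices ordered from best to worst fused rank.
    """
    n_metrics, n_gallery = score_matrix.shape
    borda = np.zeros(n_gallery)
    for m in range(n_metrics):
        order = np.argsort(score_matrix[m])        # ascending scores
        if higher_is_better[m]:
            order = order[::-1]                    # best candidate first
        ranks = np.empty(n_gallery, dtype=int)
        ranks[order] = np.arange(n_gallery)        # rank 0 = best
        borda += n_gallery - ranks                 # top rank earns most points
    return np.argsort(-borda)                      # highest total first

# Toy run: four metrics (SSIM, FSIM, VIF, GMSD) against five identities.
rng = np.random.default_rng(0)
scores = rng.random((4, 5))
fused = borda_fuse(scores, higher_is_better=[True, True, True, False])
print("fused rank-1 identity:", fused[0])

Since the Borda count operates on ranks rather than raw scores, the fusion requires no cross-metric score normalization and no learned weights, which is consistent with the training-free setting described above.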
Year
Volume
Pages
art. no. e143554
Physical description
Bibliography: 44 items, figures, tables
Authors
author
- University M’Hamed Bougara of Boumerdes, Institute of Electrical and Electronic Engineering, Laboratory of Signals and Systems, Boumerdes, 35000, Algeria
author
- University M’Hamed Bougara of Boumerdes, Institute of Electrical and Electronic Engineering, Laboratory of Signals and Systems, Boumerdes, 35000, Algeria
author
- Center for Development of Advanced Technologies, P.O. Box 17 Baba-Hassen 16303, Algiers, Algeria
author
- Sorbonne University Abu Dhabi, Sorbonne Center for Artificial Intelligence, Abu Dhabi, UAE
Bibliography
- [1] N. Balayesu and H.K. Kalluri, “An extensive survey on traditional and deep learning-based face sketch synthesis models,” Int. J. Inf. Technol., vol. 12, no. 3, pp. 995–1004, Nov 2020, doi: 10.1007/s41870-019-00386-8.
- [2] Y. Fang, W. Deng, J. Du, and J. Hu, “Identity-aware CycleGAN for face photo-sketch synthesis and recognition,” Pattern Recognit., vol. 102, p. 107249, Jun 2020, doi: 10.1016/j.patcog.2020.107249.
- [3] J.C. Klontz and A.K. Jain, “A case study of automated face recognition: The boston marathon bombings suspects,” Computer, vol. 46, no. 11, pp. 91–94, Nov 2013, doi: 10.1109/MC.2013.377.
- [4] X. Wang and X. Tang, “Face photo-sketch synthesis and recognition,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 11, pp. 1955–1967, Nov 2009, doi: 10.1109/TPAMI.2008.222.
- [5] P. Li, B. Sheng, and C.L.P. Chen, “Face sketch synthesis using regularized broad learning system,” IEEE Trans. Neural Networks Learn. Syst., pp. 1–15, 2021, doi: 10.1109/TNNLS.2021.3070463.
- [6] Q. Liu, X. Tang, H. Jin, H. Lu, and S. Ma, “A nonlinear approach for face sketch synthesis and recognition,” in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), vol. 1. IEEE, 2005, pp. 1005–1010, doi: 10.1109/CVPR.2005.39.
- [7] Y. Song, L. Bao, Q. Yang, and M.-H. Yang, “Real-time exemplar-based face sketch synthesis,” in Computer Vision – ECCV 2014. Springer International Publishing, 2014, pp. 800–813, doi: 10.1007/978-3-319-10599-4_51.
- [8] H. Zhou, Z. Kuang, and K.K. Wong, “Markov weight fields for face sketch synthesis,” in 2012 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, Jun 2012, pp. 1091–1097, doi: 10.1109/cvpr.2012.6247788.
- [9] N. Wang, M. Zhu, J. Li, B. Song, and Z. Li, “Data-driven vs. model-driven: Fast face sketch synthesis,” Neurocomputing, vol. 257, pp. 214–221, Sep 2017, doi: 10.1016/j.neucom.2016.07.071.
- [10] A. Cichocki, T. Poggio, S. Osowski, and V. Lempitsky, “Deep learning: Theory and practice,” Bull. Pol. Acad. Sci. Tech. Sci., vol. 66, no. 6, pp. 757–759, 2018, doi: 10.24425/bpas.2018.125923.
- [11] X. Yang, Y. Zhang, and D. Zhou, “Deep networks for image super-resolution using hierarchical features,” Bull. Pol. Acad. Sci. Tech. Sci., vol. 70, no. 1, p. e139616, 2022, doi: 10.24425/bpasts.2021.139616.
- [12] K. Hawari and I. Ismail, “The automatic focus segmentation of multi-focus image fusion,” Bull. Pol. Acad. Sci. Tech. Sci., vol. 70, no. 1, p. e140352, 2022, doi: 10.24425/bpasts.2022.140352.
- [13] M. Grochowski, A. Kwasigroch, and A. Mikołajczyk, “Selected technical issues of deep neural networks for image classification purposes,” Bull. Pol. Acad. Sci. Tech. Sci., vol. 67, no. 2, pp. 363–376, 2019, doi: 10.24425/bpas.2019.128485.
- [14] L. Zhang, L. Lin, X. Wu, S. Ding, and L. Zhang, “End-to-end photo-sketch generation via fully convolutional representation learning,” in Proceedings of the 5th ACM on International Conference on Multimedia Retrieval. ACM, Jun 2015, pp. 627–634, doi: 10.1145/2671188.2749321.
- [15] I. Goodfellow et al., “Generative adversarial nets,” Adv. Neural Inf. Process. Syst., vol. 27, 2014, doi: 10.1145/3422622.
- [16] P. Isola, J.-Y. Zhu, T. Zhou, and A.A. Efros, “Image-to-image translation with conditional adversarial networks,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, Jul 2017, pp. 1125–1134, doi: 10.1109/cvpr.2017.632.
- [17] X. Li, F. Gao, and F. Huang, “High-quality face sketch synthesis via geometric normalization and regularization,” in 2021 IEEE International Conference on Multimedia and Expo (ICME). IEEE, Jul 2021, pp. 1–6, doi: 10.1109/ICME51207.2021.9428348.
- [18] J. Yu, X. Xu, F. Gao, S. Shi, M. Wang, D. Tao, and Q. Huang, “Toward realistic face photo–sketch synthesis via composition-aided GANs,” IEEE Trans. Cybern., vol. 51, no. 9, pp. 4350–4362, Sep 2021, doi: 10.1109/tcyb.2020.2972944.
- [19] W. Wan, Y. Yang, and H.J. Lee, “Generative adversarial learning for detail-preserving face sketch synthesis,” Neurocomputing, vol. 438, pp. 107–121, May 2021, doi: 10.1016/j.neucom.2021.01.050.
- [20] N. Wang, X. Gao, J. Li, B. Song, and Z. Li, “Evaluation on synthesized face sketches,” Neurocomputing, vol. 214, pp. 991–1000, Nov 2016, doi: 10.1016/j.neucom.2016.06.070.
- [21] Z. Wang, A.C. Bovik, H.R. Sheikh, and E.P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, Apr 2004, doi: 10.1109/tip.2003.819861.
- [22] W. Xue, L. Zhang, X. Mou, and A.C. Bovik, “Gradient magnitude similarity deviation: A highly efficient perceptual image quality index,” IEEE Trans. Image Process., vol. 23, no. 2, pp. 684–695, Feb 2014, doi: 10.1109/tip.2013.2293423.
- [23] N. Wang, J. Li, L. Sun, B. Song, and X. Gao, “Training-free synthesized face sketch recognition using image quality assessment metrics,” arXiv preprint arXiv:1603.07823, 2016.
- [24] B. Xiao and X. Gao, “Visual quality assessment of the synthesized sketch,” in 2013 Ninth International Conference on Natural Computation (ICNC). IEEE, Jul 2013, pp. 317–321, doi: 10.1109/icnc.2013.6817993.
- [25] Y. Lin, K. Fu, S. Ling, J. Wang, and P. Cheng, “Toward identity preserving face synthesis between sketches and photos using deep feature injection,” IEEE Trans. Ind. Inf., vol. 18, no. 1, pp. 327–336, Jan 2022, doi: 10.1109/tii.2021.3074989.
- [26] Y. Peng, J. Xu, Z. Luo, W. Zhou, and Z. Chen, “Multi-metric fusion network for image quality assessment,” in 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, Jun 2021, pp. 1857–1860, doi: 10.1109/cvprw53098.2021.00205.
- [27] C. Peng, X. Gao, N. Wang, and J. Li, “Face recognition from multiple stylistic sketches: Scenarios, datasets, and evaluation,” Pattern Recognit., vol. 84, pp. 262–272, Dec 2018, doi: 10.1016/j.patcog.2018.07.014.
- [28] M. Oszust, “Decision fusion for image quality assessment using an optimization approach,” IEEE Signal Process. Lett., vol. 23, no. 1, pp. 65–69, Jan 2016, doi: 10.1109/LSP.2015.2500819.
- [29] K. Ding, K. Ma, S. Wang, and E.P. Simoncelli, “Comparison of full-reference image quality models for optimization of image processing systems,” Int. J. Comput. Vision, vol. 129, no. 4, pp. 1258–1281, Jan 2021, doi: 10.1007/s11263-020-01419-7.
- [30] N. Wang, X. Gao, L. Sun, and J. Li, “Bayesian face sketch synthesis,” IEEE Trans. Image Process., vol. 26, no. 3, pp. 1264–1274, Mar 2017, doi: 10.1109/TIP.2017.2651375.
- [31] H. Sheikh and A. Bovik, “Image information and visual quality,” IEEE Trans. Image Process., vol. 15, no. 2, pp. 430–444, Feb 2006, doi: 10.1109/tip.2005.859378.
- [32] L. Zhang, L. Zhang, X. Mou, and D. Zhang, “FSIM: A feature similarity index for image quality assessment,” IEEE Trans. Image Process., vol. 20, no. 8, pp. 2378–2386, Aug 2011, doi: 10.1109/tip.2011.2109730.
- [33] K. Okarma, P. Lech, and V.V. Lukin, “Combined full-reference image quality metrics for objective assessment of multiply distorted images,” Electronics, vol. 10, no. 18, p. 2256, Sep 2021, doi: 10.3390/electronics10182256.
- [34] S. Athar and Z. Wang, “A comprehensive performance evaluation of image quality assessment algorithms,” IEEE Access, vol. 7, pp. 140030–140070, 2019, doi: 10.1109/ACCESS.2019.2943319.
- [35] A. Kumar and S. Shekhar, “Personal identification using multibiometrics rank-level fusion,” IEEE Trans. Syst. Man Cybern. Part C Appl. Rev., vol. 41, no. 5, pp. 743–752, Sep 2011, doi: 10.1109/TSMCC.2010.2089516.
- [36] A. Abaza and A. Ross, “Quality based rank-level fusion in multibiometric systems,” in 2009 IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems. IEEE, Sep 2009, pp. 1–6, doi: 10.1109/BTAS.2009.5339081.
- [37] E.M.S. Niou, “A note on Nanson’s rule,” Public Choice, vol. 54, no. 2, pp. 191–193, 1987, doi: 10.1007/BF00123006.
- [38] X. Tang and X. Wang, “Face photo recognition using sketch,” in Proceedings. International Conference on Image Processing, vol. 1. IEEE, 2002, pp. I–I, doi: 10.1109/ICIP.2002.1038008.
- [39] A.M. Martinez, “The AR face database,” CVC Technical Report 24, 1998. [Online]. Available: https://www2.ece.ohio-state.edu/~aleix/ARdatabase.html
- [40] K. Messer, J. Matas, J. Kittler, J. Luettin, and G. Maitre, “XM2VTSDB: The extended M2VTS database,” in Second International Conference on Audio and Video-Based Biometric Person Authentication, vol. 964, 1999, pp. 965–966. [Online]. Available: https://www.semanticscholar.org/paper/b62628ac06bbac998a3ab825324a41a11bc3a988
- [41] Q. Cao, L. Shen, W. Xie, O.M. Parkhi, and A. Zisserman, “VGGFace2: A dataset for recognising faces across pose and age,” in 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018). IEEE, May 2018, pp. 67–74, doi: 10.1109/FG.2018.00020.
- [42] K.L. Hermann, T. Chen, and S. Kornblith, “The origins and prevalence of texture bias in convolutional neural networks,” arXiv preprint arXiv:1911.09071v3, 2019.
- [43] D.-P. Fan, S. Zhang, Y.-H. Wu, Y. Liu, M.-M. Cheng, B. Ren, P. Rosin, and R. Ji, “Scoot: A perceptual metric for facial sketches,” in 2019 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, Oct 2019, pp. 5612–5622, doi: 10.1109/iccv.2019.00571.
- [44] M. Cho, T. Kim, I.-J. Kim, K. Lee, and S. Lee, “Relational deep feature learning for heterogeneous face recognition,” IEEE Trans. Inf. Forensics Secur., vol. 16, pp. 376–388, 2021, doi: 10.1109/TIFS.2020.3013186.
Notes
Record developed with funds from the Ministry of Education and Science (MEiN), agreement no. SONP/SP/546092/2022, under the "Social Responsibility of Science" programme, module: Popularization of Science and Promotion of Sport (2022-2023).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-3a6e468d-0cce-4588-beeb-1aeea2b0712a