Article title

Deep Classifiers and Wavelet Transformation for Fake Image Detection

Content
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
The paper presents a computer system for detecting deep fake images in videos. The system is based on continuous wavelet transformation combined with a set of classifiers composed of a few convolutional neural networks of diversified architectures. Three different forms of forged images taken from the FaceForensics++ database are considered in numerical experiments. The results of experiments involving the proposed system have shown good performance in comparison to other current approaches to this particular problem.
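The pipeline described in the abstract — a continuous wavelet transformation of the input feeding a set of diverse classifiers whose decisions are combined — could be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the Morlet mother wavelet, the scale grid, the row-wise 1-D transform, and the majority-vote fusion are all assumptions made here for clarity.

```python
import numpy as np

def morlet(t, w=5.0):
    """Real-valued Morlet mother wavelet."""
    return np.exp(-t**2 / 2.0) * np.cos(w * t)

def cwt_1d(signal, scales, w=5.0):
    """Continuous wavelet transform of a 1-D signal (e.g. one image row).

    Returns a (len(scales), len(signal)) scalogram.
    """
    out = np.empty((len(scales), len(signal)))
    for i, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1)
        kernel = morlet(t / s, w) / np.sqrt(s)  # dilated, energy-normalized wavelet
        out[i] = np.convolve(signal, kernel, mode="same")
    return out

def ensemble_predict(scalogram, classifiers):
    """Majority vote over a set of binary classifiers (1 = fake, 0 = real)."""
    votes = [clf(scalogram) for clf in classifiers]
    return int(sum(votes) * 2 > len(votes))
```

In the full system each classifier would be a convolutional network of a different architecture (AlexNet, ResNet, MobileNet, DenseNet, SqueezeNet, ShuffleNet — cf. refs. [17]–[22]) trained on scalograms of face images; here they are simple stand-in callables.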
Year
Volume
Pages
1–8
Physical description
Bibliography: 23 items; figures, tables, charts
Authors
  • Institute of Theory of Electrical Engineering, Measurement and Information Systems, Military University of Technology, Warsaw, Poland
  • Warsaw University of Technology, Warsaw, Poland
  • Faculty of Electronics, Institute of Electronic Systems, Military University of Technology, Warsaw, Poland
Bibliografia
  • [1] A. Rossler et al., “FaceForensics++: Learning to Detect Manipulated Facial Images”, in: Proceedings of the IEEE International Conference on Computer Vision (ICCV), Seoul, South Korea, 2019 (https://doi.org/10.48550/arXiv.1901.08971).
  • [2] L. Jiang, R. Li, W. Wu, C. Qian, and C.C. Loy, “DeeperForensics-1.0: A Large-Scale Dataset for Real-World Face Forgery Detection”, 2020 (https://doi.org/10.48550/arXiv.2001.03024).
  • [3] P. Yu, Z. Xia, J. Fei, and Y. Lu, “A Survey on Deepfake Video Detection”, IET Biometrics, vol. 10, no. 6, pp. 607–624, 2021 (https://doi.org/10.1049/bme2.12031).
  • [4] D. Cozzolino, G. Poggi, and L. Verdoliva, “Extracting Camera-based Finger Prints for Video Forensics”, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Long Beach, USA, 2019.
  • [5] H.H. Nguyen, J. Yamagishi, and I. Echizen, “Capsule-forensics: Using Capsule Networks to Detect Forged Images and Videos”, in: 2019 IEEE International Conference on Acoustics, Speech and Signal Processing, Brighton, UK, pp. 2307–2311, 2019 (https://doi.org/10.1109/ICASSP.2019.8682602).
  • [6] D. Afchar, V. Nozick, J. Yamagishi, and I. Echizen, “MesoNet: A Compact Facial Video Forgery Detection Network”, in: 2018 IEEE International Workshop on Information Forensics and Security (WIFS), 2018 (https://doi.org/10.48550/arXiv.1809.00888).
  • [7] S.H. Silva et al., “Deepfake Forensics Analysis: An Explainable Hierarchical Ensemble of Weakly Supervised Models”, Forensic Science International: Synergy, vol. 4, art. no. 100217, 2022 (https://doi.org/10.1016/j.fsisyn.2022.100217).
  • [8] S.S. Shet et al., “Deepfake Detection in Digital Media Forensics”, Global Transitions Proceedings, vol. 3, no. 1, pp. 74–79, 2022 (https://doi.org/10.1016/j.gltp.2022.04.017).
  • [9] U.A. Ciftci, I. Demir, and L. Yin, “FakeCatcher: Detection of Synthetic Portrait Videos Using Biological Signals”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 10, no. 10, 2020 (https://doi.org/10.1109/TPAMI.2020.3009287).
  • [10] E. Sabir et al., “Recurrent Convolutional Strategies for Face Manipulation Detection in Videos”, arXiv:1905.00582v3, 2019 (https://doi.org/10.48550/arXiv.1905.00582).
  • [11] FaceForensics++ database [Online]. Available: https://github.com/ondyari/FaceForensics
  • [12] M. Masood et al., “Deepfakes Generation and Detection: State-of-the-art, Open Challenges, Countermeasures, and Way Forward”, Applied Intelligence, vol. 53, pp. 3974–4026, 2022 (https://doi.org/10.1007/s10489-022-03766-z).
  • [13] N. Dalal and B. Triggs, “Histograms of Oriented Gradients for Human Detection”, in: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, USA, 2005 (https://doi.org/10.1109/CVPR.2005.177).
  • [14] J.J.V. Hernandez, J.I. de la Rosa, G. Rodriguez, and J.L. Flores, “The 2D Continuous Wavelet Transform: Applications in Fringe Pattern Processing for Optical Measurement Techniques”, in: Wavelet Theory and Its Applications, IntechOpen, pp. 173–193, 2018 (https://doi.org/10.5772/intechopen.74813).
  • [15] J. Brownlee, Deep Learning for Natural Language Processing. Develop Deep Learning Models for Your Natural Language Problems, Johns Hopkins University Press, Ebook, 372 p., 2018 (ISBN: 9781838550295).
  • [16] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning, MIT Press, Massachusetts, 2016 (ISBN: 9780262035613).
  • [17] A. Krizhevsky, I. Sutskever, and G.E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks”, Communications of the ACM, vol. 60, no. 6, pp. 84–90, 2017 (https://doi.org/10.1145/3065386).
  • [18] K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition”, 2015 (https://doi.org/10.48550/arXiv.1512.03385).
  • [19] A.G. Howard et al., “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications”, 2017 (https://doi.org/10.48550/arXiv.1704.04861).
  • [20] G. Huang, Z. Liu, L. van der Maaten, and K.Q. Weinberger, “Densely Connected Convolutional Networks”, 2018 (https://doi.org/10.48550/arXiv.1608.06993).
  • [21] F.N. Iandola et al., “SqueezeNet: AlexNet-level Accuracy with 50x Fewer Parameters and <0.5 MB Model Size”, 2017 (https://doi.org/10.48550/arXiv.1602.07360).
  • [22] X. Zhang, X. Zhou, M. Lin, and J. Sun, “ShuffleNet: an Extremely Efficient Convolutional Neural Network for Mobile Devices”, 2017 (https://doi.org/10.48550/arXiv.1707.01083).
  • [23] Y. Zhao et al., “Capturing the Persistence of Facial Expression Features for Deep Fake Video Detection”, in: International Conference on Information and Communications Security, Beijing, China, pp. 630–645, 2019 (https://doi.org/10.1007/978-3-030-41579-2_37).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-d9d958a9-9c76-4f5d-9483-05a588982797