Article title

Classifying and Visualizing Emotions with Emotional DAN

Identifiers
Title variants
Publication languages
EN
Abstracts
EN
Classification of human emotions remains an important and challenging task for many computer vision algorithms, especially in the era of humanoid robots that coexist with humans in everyday life. Currently proposed methods for emotion recognition solve this task using multi-layered convolutional networks that do not explicitly infer any facial features in the classification phase. In this work, we postulate a fundamentally different approach to the emotion recognition task that relies on incorporating facial landmarks as part of the classification loss function. To that end, we extend the recently proposed Deep Alignment Network (DAN) with a term related to facial features. Thanks to this simple modification, our model, called EmotionalDAN, outperforms state-of-the-art emotion classification methods on two challenging benchmark datasets by up to 5%. Furthermore, we visualize the image regions analyzed by the network when making a decision, and the results indicate that our EmotionalDAN model correctly identifies the facial landmarks responsible for expressing emotions.
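The joint objective described in the abstract lends itself to a compact illustration. Below is a minimal PyTorch-style sketch of a loss that adds a facial-landmark term to the emotion classification loss; the class name `JointEmotionLandmarkLoss`, the MSE landmark term, the weight `lambda_lmk`, and the 7-class/68-landmark shapes are illustrative assumptions, not the authors' exact EmotionalDAN formulation.

```python
# Minimal sketch (PyTorch) of a joint loss combining emotion classification
# with a facial-landmark term, as described in the abstract. The weighting
# factor `lambda_lmk` and the tensor shapes below are illustrative
# assumptions, not the authors' exact EmotionalDAN implementation.
import torch
import torch.nn as nn

class JointEmotionLandmarkLoss(nn.Module):
    def __init__(self, lambda_lmk: float = 0.5):
        super().__init__()
        self.ce = nn.CrossEntropyLoss()   # emotion classification term
        self.mse = nn.MSELoss()           # landmark localization term
        self.lambda_lmk = lambda_lmk      # trade-off between the two terms

    def forward(self, emotion_logits, emotion_labels,
                landmark_preds, landmark_targets):
        # Total loss = classification loss + weighted landmark loss, so
        # gradients from both tasks shape the shared feature representation.
        cls_loss = self.ce(emotion_logits, emotion_labels)
        lmk_loss = self.mse(landmark_preds, landmark_targets)
        return cls_loss + self.lambda_lmk * lmk_loss

# Usage: logits over 7 basic emotions, 68 predicted (x, y) landmarks.
loss_fn = JointEmotionLandmarkLoss(lambda_lmk=0.5)
logits = torch.randn(8, 7)                # batch of 8, 7 emotion classes
labels = torch.randint(0, 7, (8,))
lmk_pred = torch.randn(8, 68, 2)
lmk_true = torch.randn(8, 68, 2)
loss = loss_fn(logits, labels, lmk_pred, lmk_true)
```

With `lambda_lmk = 0` the sketch reduces to a plain classifier; raising it forces the shared features to also encode landmark geometry, which is the intuition behind adding the landmark term to the classification loss.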
Publisher
Year
Pages
269–285
Physical description
Bibliography: 31 items; photographs, figures, tables
Authors
  • Polish-Japanese Academy of Information Technology, Tooploox, Warsaw, Poland
  • Warsaw University of Technology, Tooploox, Warsaw, Poland
References
  • [1] Ekman P, Friesen W. Facial Action Coding System: Investigator's Guide. Consulting Psychologists Press, 1978.
  • [2] Xia XL, Xu C, Nan B. Facial Expression Recognition Based on TensorFlow Platform. In ITM Web of Conferences, 2017. doi:10.1051/itmconf/20171201005.
  • [3] Mollahosseini A, Chan D, Mahoor MH. Going deeper in facial expression recognition using deep neural networks. In IEEE Winter Conference on Applications of Computer Vision (WACV), 2016. doi:10.1109/WACV.2016.7477450.
  • [4] Benitez-Quiroz CF, Srinivasan R, Martinez AM. EmotioNet: An Accurate, Real-Time Algorithm for the Automatic Annotation of a Million Facial Expressions in the Wild. In CVPR, 2016. doi:10.1109/CVPR.2016.600.
  • [5] Kennedy B, Balint A. EmotionNet2. https://github.com/co60ca/EmotionNet.
  • [6] Mollahosseini A, Chan D, Mahoor MH. Going deeper in facial expression recognition using deep neural networks. 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), 2016. doi:10.1109/WACV.2016.7477450.
  • [7] Hasani B, Mahoor M. Facial expression recognition using enhanced deep 3D convolutional neural networks. IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2017. doi:10.1109/CVPRW.2017.282.
  • [8] Kahou S, Michalski V, Konda K. Recurrent neural networks for emotion recognition in video. In Proceedings of the ACM on International Conference on Multimodal Interaction, 2015. doi:10.1145/2818346.2830596.
  • [9] Honari S, Molchanov P, Tyree S, Vincent P, Pal C, Kautz J. Improving Landmark Localization with Semi-Supervised Learning. In CVPR, 2018. doi:10.1109/CVPR.2018.00167.
  • [10] Kowalski M, Naruniec J, Trzcinski T. Deep Alignment Network: A convolutional neural network for robust face alignment. In CVPRW, 2017. doi:10.1109/CVPRW.2017.254.
  • [11] Lucey P, Cohn JF, Kanade T, Saragih J, Ambadar Z, Matthews I. The extended Cohn-Kanade dataset (CK+): A complete dataset for action unit and emotion-specified expression. In CVPRW, 2010. doi:10.1109/CVPRW.2010.5543262.
  • [12] Happy SL, Patnaik P, Routray A, Guha R. The Indian Spontaneous Expression Database for Emotion Recognition. IEEE Transactions on Affective Computing, 2017. doi:10.1109/TAFFC.2015.2498174.
  • [13] Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization. In ICCV, 2017. doi:10.1109/ICCV.2017.74.
  • [14] Zhao K, Chu WS, De la Torre F, Cohn JF, Zhang H. Joint Patch and Multi-label Learning for Facial Action Unit Detection. In CVPR, 2015. doi:10.1109/TIP.2016.2570550.
  • [15] Jaiswal S, Martinez B, Valstar M. Learning to combine local models for facial Action Unit detection. In IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, 2015. doi:10.1109/FG.2015.7284872.
  • [16] Shao Z, Liu Z, Cai J, Wu Y, Ma L. Facial Action Unit Detection Using Attention and Relation Learning. CoRR, 2018.
  • [17] Shao Z, Liu Z, Cai J, Wu Y, Ma L. Deep Adaptive Attention for Joint Facial Action Unit Detection and Face Alignment. In ECCV, 2018. doi:10.1007/978-3-030-01261-8_43.
  • [18] Tian Y, Kanade T, Cohn J. Recognizing action units for facial expression analysis. In IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001. doi:10.1109/34.908962.
  • [19] Lopes AT, de Aguiar E, Oliveira-Santos T. A Facial Expression Recognition System Using Convolutional Networks. In SIBGRAPI, 2015. doi:10.1109/SIBGRAPI.2015.14.
  • [20] Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Erhan D. Going deeper with convolutions. In CVPR, 2015. doi:10.1109/CVPR.2015.7298594.
  • [21] Deng J, Dong W, Socher R, Li LJ, Li K, Fei-Fei L. ImageNet: A Large-Scale Hierarchical Image Database. In: CVPR09. 2009. doi:10.1109/CVPR.2009.5206848.
  • [22] He K, Zhang X, Ren S, Sun J. Deep Residual Learning for Image Recognition. CoRR, 2015. abs/1512.03385. doi:10.1109/CVPR.2016.90.
  • [23] Zafeiriou S, Trigeorgis G, Chrysos G, Deng J, Shen J. The Menpo Facial Landmark Localisation Challenge: A Step Towards the Solution. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). 2017. doi:10.1109/CVPRW.2017.263.
  • [24] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. CoRR, 2014. abs/1409.1556.
  • [25] Mollahosseini A, Hasani B, Mahoor MH. AffectNet: A Database for Facial Expression, Valence, and Arousal Computing in the Wild. IEEE Transactions on Affective Computing, 2017. doi:10.1109/TAFFC.2017.2740923.
  • [26] Lyons M, Akamatsu S, Kamachi M, Gyoba J. The Japanese Female Facial Expression (JAFFE) Database. http://www.kasrl.org/jaffe.html.
  • [27] Zhang K, Zhang Z, Li Z, Qiao Y. Joint Face Detection and Alignment Using Multitask Cascaded Convolutional Networks. IEEE Signal Processing Letters, 2016. doi:10.1109/LSP.2016.2603342.
  • [28] Smith LN. Cyclical Learning Rates for Training Neural Networks. WACV, 2017. doi:10.1109/WACV.2017.58.
  • [29] Ekman P, Friesen W. Rationale and reliability for EMFACS Coders. Unpublished, 1982.
  • [30] Ahlberg J. CANDIDE-3 - An Updated Parameterised Face. 2001.
  • [31] Tautkute I, Trzcinski T, Bielski A. I Know How You Feel: Emotion Recognition with Facial Landmarks. In CVPRW, 2018. doi:10.1109/CVPRW.
Notes
Record prepared under agreement 509/P-DUN/2018 with funds from the Ministry of Science and Higher Education (MNiSW) allocated for science dissemination activities (2019).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-abdd6551-c7e4-43ea-a635-09b2dac1d9c9