2022 | Vol. 98, No. 3 | 89–92
Article title

Visual emotion sensing using convolutional neural network

Title variants
PL
Wizualne wykrywanie emocji za pomocą splotowej sieci neuronowej
Publication languages
EN
Abstracts
EN
The objective of this article is to present a CNN architecture suited to the Interactive Emotional Dyadic Motion Capture (IEMOCAP) database. Since the database posed some issues during the training phase, we use individual frames as inputs instead of the full video recordings, to minimize the error and increase the accuracy. We apply transfer learning, adjusting the number of layers and the weights. The results for the female and male speakers are 91% and 89%, respectively.
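
The abstract describes the approach only at a high level. As a rough illustration of the frame-based transfer-learning idea, the sketch below assumes a Keras workflow with a VGG16 backbone pretrained on ImageNet, 224x224 frames extracted from the IEMOCAP videos, and a hypothetical four-class emotion set; none of these specific choices (backbone, input size, number of frozen layers, class set) are stated in the record, so this is not the authors' actual architecture.

import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 4              # hypothetical emotion set, e.g. angry / happy / sad / neutral
FRAME_SHAPE = (224, 224, 3)  # assumed size of the frames extracted from the videos

# Pretrained convolutional backbone; its ImageNet weights are reused (transfer learning).
backbone = VGG16(weights="imagenet", include_top=False, input_shape=FRAME_SHAPE)

# Freeze the early layers and leave the last few trainable, one common way of
# "adjusting the number of layers and the weights" when fine-tuning (an assumption here).
for layer in backbone.layers[:-4]:
    layer.trainable = False

# New classification head mapping frame features to emotion labels.
model = models.Sequential([
    backbone,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Training would then consume per-frame image tensors and one-hot emotion labels:
# model.fit(frame_batches, frame_labels, validation_split=0.1, epochs=10)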
Publisher

Year
2022
Pages
89–92
Physical description
Bibliography: 17 items, illustrations, figures, tables
Authors
author
  • Signal Image and Information Technology (SITI) Laboratory, Department of Electrical Engineering, National Engineering School of Tunis, Campus Universitaire Farhat Hached el Manar BP 37, Le Belvedere 1002 TUNIS, souha.ayadi@enit.utm.tn
  • Signal Image and Information Technology (SITI) Laboratory, Department of Electrical Engineering, National Engineering School of Tunis, Campus Universitaire Farhat Hached el Manar BP 37, Le Belvedere 1002 TUNIS, zied.lachiri@enit.utm.tn
Bibliography
  • [1] Octavio Arriaga, Matias Valdenegro-Toro, and Paul Plöger. Real-time convolutional neural networks for emotion and gender classification. arXiv preprint arXiv:1710.07557, 2017.
  • [2] Sarah Adel Bargal, Emad Barsoum, Cristian Canton Ferrer, and Cha Zhang. Emotion recognition in the wild from videos using images. In Proceedings of the 18th ACM International Conference on Multimodal Interaction, pages 433–436, 2016.
  • [3] Marco Bellantonio, Mohammad A Haque, Pau Rodriguez, Kamal Nasrollahi, Taisi Telve, Sergio Escalera, Jordi Gonzalez, Thomas B Moeslund, Pejman Rasti, and Gholamreza Anbarjafari. Spatio-temporal pain recognition in CNN-based super-resolved facial images. In Video Analytics. Face and Facial Expression Recognition and Audience Measurement, pages 151–162. Springer, 2016.
  • [4] Kevin Brady, Youngjune Gwon, Pooya Khorrami, Elizabeth Godoy, William Campbell, Charlie Dagli, and Thomas S Huang. Multimodal audio, video and physiological sensor learning for continuous emotion prediction. In Proceedings of the 6th International Workshop on Audio/Visual Emotion Challenge, pages 97–104, 2016.
  • [5] Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jeannette N Chang, Sungbok Lee, and Shrikanth S Narayanan. IEMOCAP: interactive emotional dyadic motion capture database. Language Resources and Evaluation, 42(4):335, 2008.
  • [6] Sayan Ghosh, Eugene Laksana, Louis-Philippe Morency, and Stefan Scherer. Representation learning for speech emotion recognition. In Interspeech, pages 3603–3607, 2016.
  • [7] Heysem Kaya, Furkan Gürpınar, and Albert Ali Salah. Video-based emotion recognition in the wild using deep transfer learning and score fusion. Image and Vision Computing, 65:66–75, 2017.
  • [8] Hong-Wei Ng, Viet Dung Nguyen, Vassilios Vonikakis, and Stefan Winkler. Deep learning for emotion recognition on small datasets using transfer learning. In Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, pages 443–449, 2015.
  • [9] Fatemeh Noroozi, Marina Marjanovic, Angelina Njegus, Sergio Escalera, and Gholamreza Anbarjafari. Audio-visual emotion recognition in video clips. IEEE Transactions on Affective Computing, 10(1):60–75, 2017.
  • [10] Siyang Song, Enrique Sánchez-Lozano, Mani Kumar Tellamekala, Linlin Shen, Alan Johnston, and Michel Valstar. Dynamic facial models for video-based dimensional affect estimation. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pages 0–0, 2019.
  • [11] Samarth Tripathi, Sarthak Tripathi, and Homayoon Beigi. Multi-modal emotion recognition on IEMOCAP dataset using deep learning. arXiv preprint arXiv:1804.05788, 2018.
  • [12] Chung-Hsien Wu, Jen-Chun Lin, and Wen-Li Wei. Survey on audiovisual emotion recognition: databases, features, and data fusion strategies. APSIPA Transactions on Signal and Information Processing, 3, 2014.
  • [13] Mira Jeong, Byoung Chul Ko, Sooyeong Kwak, and Jae-Yeal Nam. Driver facial landmark detection in real driving situations. IEEE Transactions on Circuits and Systems for Video Technology, 28(10):2753–2767, 2017.
  • [14] Yanting Pei, Yaping Huang, Qi Zou, Xingyuan Zhang, and Song Wang. Effects of image degradation and degradation removal to CNN-based image classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019.
  • [15] M Shamim Hossain and Ghulam Muhammad. Emotion recognition using deep learning approach from audio–visual emotional big data. Information Fusion, 49:69–78, 2019.
  • [16] Jie Wei, Xinyu Yang, and Yizhuo Dong. User-generated video emotion recognition based on key frames. Multimedia Tools and Applications, 80(9):14343–14361, 2021.
  • [17] Prashant Giridhar Shambharkar and MN Doja. Movie trailer classification using deer hunting optimization based deep convolutional neural network in video sequences. Multimedia Tools and Applications, 79(29):21197–21222, 2020.
Document type
Bibliography
Identifiers
YADDA identifier
bwmeta1.element.baztech-d60d3517-b845-4651-af64-b7e934268d5b