Article title

Extensive feature set approach in facial expression recognition in static images

Authors
Content
Identifiers
Title variants
Conference
National Conference on Robotics (12th edition, 12-16.2012, Świeradów-Zdrój, Poland)
Publication languages
EN
Abstracts
EN
The article presents a preliminary concept of a facial emotion recognition system. The approach focuses on the feature extraction and selection process, which simplifies defining and adding new elements to the feature set. The system was evaluated with two discriminant analysis classifiers, a decision tree classifier and four variants of the k-nearest neighbors classifier. The system recognizes seven emotions. The verification step uses two databases of face images representing laboratory and natural conditions. Both personal and interpersonal emotion recognition were evaluated. The best classification quality for personal emotion recognition was achieved by the 1NN classifier, with a recognition rate of 99.9% for laboratory conditions and 97.3% for natural conditions. For interpersonal emotion recognition the rate was 82.5%.
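The article itself contains no source code; the sketch below is only a rough illustration of the kind of evaluation the abstract describes. It shows a 1-nearest-neighbor classifier tested with leave-one-out cross-validation over precomputed feature vectors. The emotion label set, the synthetic data and the helper names (one_nn_predict, leave_one_out_accuracy) are illustrative assumptions, not taken from the paper, where the features are derived from facial landmarks and filtered by feature selection before classification.

```python
# Minimal sketch (not from the article): 1-NN emotion classification with
# leave-one-out evaluation over precomputed feature vectors.
# The feature vectors below are synthetic placeholders; the label set is an
# assumption (six basic emotions plus neutral).
import numpy as np

EMOTIONS = ["anger", "disgust", "fear", "happiness", "neutral", "sadness", "surprise"]

def one_nn_predict(train_X, train_y, x):
    """Return the label of the training sample closest to x (Euclidean distance)."""
    dists = np.linalg.norm(train_X - x, axis=1)
    return train_y[np.argmin(dists)]

def leave_one_out_accuracy(X, y):
    """Leave-one-out recognition rate: hold out each sample once and classify it."""
    hits = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i          # exclude sample i from the training set
        if one_nn_predict(X[mask], y[mask], X[i]) == y[i]:
            hits += 1
    return hits / len(X)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in: 70 samples, 20-dimensional feature vectors, 7 emotions.
    X = rng.normal(size=(70, 20))
    y = np.array([EMOTIONS[i % len(EMOTIONS)] for i in range(70)])
    print(f"LOO recognition rate: {leave_one_out_accuracy(X, y):.3f}")
```

With real landmark-based features the same loop would correspond to the personal (per-subject) evaluation; training on some subjects and testing on others would correspond to the interpersonal case.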
Keywords
Creators
  • Wroclaw University of Technology, Institute of Computer Engineering, Control and Robotics
Bibliography
  • [1] G.B. Duchenne de Boulogne, Mécanisme de la Physionomie Humaine. Paris: Jules Renouard Libraire, 1862.
  • [2] C. Darwin, The Expression of the Emotions in Man and Animals. Anniversary edition, Harper Perennial 1872/2009.
  • [3] P. Ekman, Emotion in the Human Face. Cambridge University Press 1982.
  • [4] P. Ekman, W. Friesen, Facial Action Coding System: A technique for the measurement of facial movement. Palo Alto: Consulting Psychologists Press, 1978.
  • [5] M. Pantic, L. Rothkrantz, ”Expert system for automatic analysis of facial expressions”, Image and Vision Computing Journal, 2000, vol. 18, no. 11, pp. 881–905.
  • [6] Y. Tian, T. Kanade, J. Cohn, ”Facial Expression Recognition”. In: Handbook of Face Recognition, 2nd ed., Springer, 2011.
  • [7] M. Pantic, M. Barlett, ”Machine Analysis of Facial Expressions”, In: Face Recognition. I-Tech Education and Publishing, 2007, pp. 377–416.
  • [8] P. Viola, M. Jones, ”Robust real-time face detection”, International Journal of Computer Vision, 2004, vol. 57, pp. 137–154.
  • [9] X. Li, Q. Ruan, Y. Ming, ”3D Facial expression recognition based on basic geometric features”, 2010 IEEE 10th International Conference on Signal Processing (ICSP), 2010.
  • [10] I. Kotsia, I. Pitas, ”Facial expression recognition in image sequences using geometric deformation features and Support Vector Machines”, IEEE Trans. Image Process., vol. 16, no. 1, 2007, pp. 172–187.
  • [11] C. Lien, L. Lin, C. Tu, ”A New Appearance-Based Facial Expression Recognition System with Expression Transition Matrices”. In: Innovative Computing Information and Control, 2008.
  • [12] M. Pantic, L. Rothkrantz, ”Automatic analysis of facial expressions: The state of the art”, IEEE Trans. Pattern Anal. Mach. Intell., December 2000, vol. 22, no. 12, pp. 1424–1445.
  • [13] B. Fasel, J. Luettin, ”Automatic facial expression analysis: A survey”, Pattern Recognition, 1999, vol. 36, no. 1, pp. 259–275.
  • [14] M. Żarkowski, ”Facial emotion recognition in static image”. In: 12th National Conference on Robotics, Świeradów-Zdrój, 2012, vol. 2, pp. 705–714 (in Polish).
  • [15] T. F. Cootes et al. ”Active Shape Models – their training and application”, Comput. Vis. Image Underst., January 1995, vol. 61, no. 1, pp. 38–59.
  • [16] T. Cootes, G. Edwards, C. Taylor. ”Active Appearance Models”, IEEE Trans. Pattern Anal. Mach. Intell., June, 2001, vol. 23, no. 6, pp. 681–685.
  • [17] J. Saragih, FaceTracker, http://web.mac.com/jsaragih/FaceTracker/FaceTracker.html
  • [18] Y. Tian, T. Kanade, J. Cohn, ”Recognizing action units for facial expression analysis”, IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 2, 2001, pp. 1–19.
  • [19] J. Saragih, S. Lucey, J. Cohn, ”Face alignment through subspace constrained mean-shifts”. In: ICCV. Proceedings, 2009, pp. 1034–1041.
  • [20] J. Saragih, S. Lucey, J. Cohn, ”Deformable model fitting by regularized landmark mean-shift”, International Journal of Computer Vision, 2011, vol. 91, no. 2, pp. 200–215.
  • [21] Weka 3: Data Mining Software in Java, http://www.cs.waikato.ac.nz/ml/weka/
  • [22] M.A. Hall, Correlation-based Feature Subset Selection for Machine Learning. PhD thesis, University of Waikato, Hamilton, New Zealand, 1998.
  • [23] Ł. Juszkiewicz, Speech emotion recognition for a social robot. Master thesis, Wrocław University of Technology, Wrocław, 2011 (in Polish).
  • [24] M. Żarkowski, Set of procedures helping through the learning process of using EMG signals for control. Master thesis, Wrocław University of Technology, Wrocław, 2011 (in Polish).
  • [25] N. Aifanti, C. Papachristou, A. Delopoulos, ”The MUG facial expression database”. In: 11th Int. Workshop on Image Analysis for Multimedia Interactive Services. Proceedings, Desenzano, Italy, April 12–14, 2010.
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-ea9b83dc-f6a1-4586-8834-ce205f66f805