

Article title

Face emotional states mapping based on the rigid bone model

Identifiers
Title variants
Publication languages
EN
Abstracts
EN
This paper addresses the problem of mapping human emotions onto a three-dimensional human face model. The mapping scheme exploits a rigid bone model based on marker measurements. The positions of 24 landmarks and their movements were captured from photographs of faces in different emotional states. The initial positions of the markers were selected according to the Facial Action Coding System (FACS) and the anatomical properties of the human head. Finally, the resulting model with six emotional states was used in a subjective test to evaluate the accuracy of emotion perception.
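The abstract describes posing a face model by moving a fixed set of landmarks between a neutral state and an emotional state. A minimal sketch of that idea, assuming linear blending of per-emotion landmark displacements (the array names, the `pose_landmarks` helper, and the example offsets are illustrative, not taken from the paper):

```python
# Hypothetical sketch: blend FACS-style landmark displacements to pose a
# rigid-bone face model at a chosen emotion intensity.
import numpy as np

N_LANDMARKS = 24  # the paper tracks 24 facial markers

# Assumed data layout: neutral marker positions and a per-emotion
# displacement field, each a (24, 3) array of x, y, z coordinates.
neutral = np.zeros((N_LANDMARKS, 3))
emotion_offsets = {
    "happiness": np.random.default_rng(0).normal(0.0, 0.01, (N_LANDMARKS, 3)),
}

def pose_landmarks(emotion: str, intensity: float = 1.0) -> np.ndarray:
    """Return marker positions for an emotion at intensity in [0, 1]."""
    offset = emotion_offsets[emotion]
    # Linear interpolation: intensity 0 keeps the neutral face,
    # intensity 1 applies the full emotional displacement.
    return neutral + np.clip(intensity, 0.0, 1.0) * offset

posed = pose_landmarks("happiness", intensity=0.5)
```

At intensity 0 the function returns the neutral landmark set unchanged; in a real pipeline the displacement fields would come from the measured marker trajectories rather than random values.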
Year
Pages
47-60
Physical description
Bibliography: 28 items
Authors
author
  • West Pomeranian University of Technology, Department of Control and Measurement, Gen. Sikorskiego 37, 70-313 Szczecin, Poland, maja.kocon@zut.edu.pl
Bibliography
  • [1] Colmenarez, A. J., Xiong, Z., and Huang, T. S., Facial analysis from continuous video with applications to human computer interface, Kluwer Academic Publishers, 2004.
  • [2] Fragopanagos, N. and Taylor, J. G., Emotion recognition in human-computer interaction, Neural Networks, Vol. 18, No. 4, 2005, pp. 389-405.
  • [3] Vidrascu, L. and Devillers, L., Real-Life Emotion Representation and Detection in Call Centers Data, Lecture Notes in Computer Science, Vol. 3784, 2005, pp. 739-746.
  • [4] Breazeal, C., Designing Sociable Robots, MIT Press, 2002.
  • [5] Fleming, B. and Dobbs, D., Animating Facial Features and Expressions, Charles River Media, 2001.
  • [6] Lee, W.-S., Goto, T., and Magnenat-Thalmann, N., Cloning, Morphing, then Tracking Real Emotions, Proc. in Annual Conference, Sienna, Italy, 1999, pp. 20-22.
  • [7] Furniss, M., Motion Capture: An Overview, Animation Journal, 2000, pp. 68-82.
  • [8] Zhang, L., Snavely, N., Curless, B., and Seitz, S. M., Spacetime Faces: High Resolution Capture for Modeling and Animation, Proceedings of ACM SIGGRAPH, Vol. 23, 2004.
  • [9] Theobald, B.-J. and Wilkinson, N., A Real-Time Speech-Driven Talking Head using Active Appearance Models, In Proceedings of Auditory-Visual Speech Processing (AVSP), 2007, pp. 264-269.
  • [10] Wen, Z. and Huang, T. S., 3D Face Processing: Modeling, Analysis and Synthesis, Springer, 2004.
  • [11] Weiss, B., Kuhnel, C., Wechsung, I., Fagel, S., and Moller, S., Quality of talking heads in different interaction and media contexts, Speech Communication, Vol. 52, 2010, pp. 481-492.
  • [12] Clavel, C., Plessier, J., Martin, J.-C., Ach, L., and Morel, B., Combining Facial and Postural Expressions of Emotions in a Virtual Character, In Proceedings of the 9th International Conference on Intelligent Virtual Agents (IVA '09), 2009.
  • [13] Buisine, S., Abrilian, S., Niewiadomski, R., Martin, J.-C., Devillers, L., and Pelachaud, C., Perception of Blended Emotions: From Video Corpus to Expressive Agent, In Proceedings of the 6th International Conference on Intelligent Virtual Agents (IVA'06), Vol. 4133, 2006, pp. 93-106.
  • [14] Ekman, P., Friesen, W. V., and Hager, J. C., Facial Action Coding System: The Manual, Research Nexus division of Network Information Research Corporation, 2002.
  • [15] Hakura, J., Kashiwakura, M., Hiyama, Y., Kurematsu, M., and Fujita, H., Facial Expression Recognition and Synthesis toward Construction of Quasi-Personality, Proceedings of the 6th Conference on 6th WSEAS Int. Conf. on Artificial Intelligence, Knowledge Engineering and Data Bases, Vol. 6, 2007.
  • [16] Hahnel, M., Wiratanaya, A., and Kraiss, K.-F., Facial Expression Modelling from Still Images using a Single Generic 3D Head Model, Lecture Notes in Computer Science, Vol. 4174, 2006, pp. 324-333.
  • [17] Botsch, M. and Kobbelt, L., Real-Time Shape Editing using Radial Basis Functions, Proc. of Eurographics, 2005.
  • [18] Ekman, P., Facial Expressions, The Handbook of Cognition and Emotion, 1999, pp. 301-320.
  • [19] Kanade, T., Tian, Y., and Cohn, J. F., Comprehensive Database for Facial Expression Analysis, IEEE Computer Society, 2000.
  • [20] Kuderle, T. B., Muscle-Based Facial Animation, Ph.D. thesis, University Of Applied Science Wedel (FH), 2005.
  • [21] Fehrenbach, M. J. and Herring, S. W., Illustrated Anatomy of the Head and Neck, Saunders, 3rd edition, 2006.
  • [22] Nair, P. and Cavallaro, A., 3-D Face Detection, Landmark Localization, and Registration Using a Point Distribution Model, Journal IEEE Transactions on Multimedia, Vol. 11, 2009.
  • [23] Farkas, L. G. and Munro, I. R., Anthropometric facial proportions in medicine, Charles C Thomas, 1987.
  • [24] Knyazev, G. G., Bocharov, A. V., Slobodskaya, H. R., and Ryabichenko, T. I., Personality-linked biases in perception of emotional facial expressions, Personality and Individual Differences, Vol. 44, 2008, pp. 1093-1104.
  • [25] Schacht, A. and Sommer, W., Emotions in word and face processing: early and late cortical responses. Brain and cognition, Vol. 69, No. 3, 2009, pp. 538-50.
  • [26] Enlow, D. H. and Hans, M. G., Essentials of Facial Growth, W. B. Saunders Company, 1996.
  • [27] Parke, F. I. and Waters, K., Computer Facial Animation, AK Peters, 2nd edition, 2008.
  • [28] Pandzic, I. S. and Forchheimer, R., MPEG-4 Facial Animation: The Standard, Implementation and Applications, Wiley, 2008.
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-article-LOD7-0029-0076