Article title
Content
Full texts:
Identifiers
Title variants
Languages of publication
Abstracts
Communication atmosphere based on the emotional states of humans and robots is modeled using the Fuzzy Atmosfield (FA), where human emotion is estimated from bimodal communication cues (i.e., speech and gesture) by weighted fusion and fuzzy logic, and robot emotion is generated by emotional expression synthesis. The model makes it possible to quantitatively express the overall affective expression of the individuals involved and helps facilitate smooth communication in human-robot interaction. Experiments in a household environment are performed with four humans and five eye robots, where emotion recognition of humans based on the bimodal cues achieves 84% accuracy on average, an improvement of about 10% over recognition using speech alone. Experimental results from the FA-based model of communication atmosphere are evaluated against questionnaire surveys; a maximum error of 0.25 and a minimum correlation coefficient of 0.72 across the three FA axes confirm the validity of the proposal. In ongoing work, an atmosphere representation system is being planned for casual communication between humans and robots, taking into account multiple emotional modalities such as speech, gesture, and music.
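To make the weighted-fusion step concrete, below is a minimal Python sketch of how per-emotion scores from speech and gesture might be fused and then mapped to a point in the three-axis FA space. The emotion label set, the modality weights, the `fuse_bimodal` and `to_atmosfield` names, and the linear axis mapping are illustrative assumptions, not the authors' published parameters; the paper derives FA coordinates through fuzzy inference rather than the placeholder linear rules shown here.

```python
# Minimal sketch of weighted bimodal emotion fusion feeding the 3-D
# Fuzzy Atmosfield (FA). All labels, weights, and mappings below are
# illustrative assumptions, not the authors' published parameters.

EMOTIONS = ("happiness", "anger", "sadness", "neutral")  # assumed label set


def fuse_bimodal(speech_scores, gesture_scores, w_speech=0.6, w_gesture=0.4):
    """Weighted fusion of per-emotion confidence scores from two modalities,
    normalized so the fused intensities sum to 1."""
    fused = {e: w_speech * speech_scores.get(e, 0.0)
                + w_gesture * gesture_scores.get(e, 0.0)
             for e in EMOTIONS}
    total = sum(fused.values()) or 1.0
    return {e: v / total for e, v in fused.items()}


def to_atmosfield(fused):
    """Map fused emotion intensities to the three FA axes
    (friendly-hostile, lively-gloomy, casual-formal), each in [-1, 1].
    A placeholder linear mapping stands in for the paper's fuzzy rules."""
    friendly = fused["happiness"] - fused["anger"]
    lively = fused["happiness"] - fused["sadness"]
    casual = fused["neutral"] - fused["anger"]
    return (friendly, lively, casual)


# Example: speech strongly suggests happiness, gesture is more ambiguous.
speech = {"happiness": 0.7, "anger": 0.1, "sadness": 0.1, "neutral": 0.1}
gesture = {"happiness": 0.4, "anger": 0.2, "sadness": 0.1, "neutral": 0.3}
print(to_atmosfield(fuse_bimodal(speech, gesture)))
```

In the same spirit as the paper, fusing the two modalities damps single-channel misreadings: a gesture channel that is unsure about happiness pulls the FA point toward neutral rather than letting the speech channel alone set the atmosphere.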
Year
Volume
Pages
52–63
Physical description
Bibliography: 32 items, figures.
Contributors
author
- Dept. C.I. & S.S., Tokyo Institute of Technology, G3-49, 4259 Nagatsuta, Midori-ku, Yokohama, Kanagawa 226-8502, Japan
author
- Dept. C.I. & S.S., Tokyo Institute of Technology, G3-49, 4259 Nagatsuta, Midori-ku, Yokohama, Kanagawa 226-8502, Japan
author
- Dept. C.I. & S.S., Tokyo Institute of Technology, G3-49, 4259 Nagatsuta, Midori-ku, Yokohama, Kanagawa 226-8502, Japan
author
- Dept. C.I. & S.S., Tokyo Institute of Technology, G3-49, 4259 Nagatsuta, Midori-ku, Yokohama, Kanagawa 226-8502, Japan
author
- School of Information Science and Engineering, Central South University, Yuelu Mountain, Changsha, Hunan 410083, China
author
- School of Information Science and Engineering, Central South University, Yuelu Mountain, Changsha, Hunan 410083, China
author
- Dept. E. E. & I. E., Kanto Gakuin University, 1-50-1 Mutsuura-higashi, Kanazawa-ku, Yokohama, Kanagawa 236-8501, Japan
Bibliography
- [1] Z.-T. Liu, M. Wu et al., “Emotional states based 3-D Fuzzy Atmosfield for casual communication between humans and robots”. In: IEEE Int. Conf. on Fuzzy Systems, Taipei, Taiwan, 2011, pp. 777–782.
- [2] P. Rani, C. Liu et al., “An empirical study of machine learning techniques for affect recognition in human-robot interaction”, Pattern Analysis & Applications, vol. 9, no. 1, 2006, pp. 58–69.
- [3] D. Kulić and E. A. Croft, “Affective state estimation for human-robot interaction”, IEEE Trans. on Robotics, vol. 23, no. 5, 2007, pp. 991–1000.
- [4] P. Robbel, M. E. Hoque et al., “An integrated approach to emotional speech and gesture synthesis in humanoid robots”. In: Proc. of the Int. Workshop on Affective-Aware Virtual Agents and Social Robots, Boston, USA, 2009.
- [5] X. Li, B. MacDonald et al., “Expressive facial speech synthesis on a robotic platform”. In: Int. Conf. on Intelligent Robots and Systems, St. Louis, USA, 2009.
- [6] R. Taki, Y. Maeda et al., “Personal preference analysis for emotional behavior response of autonomous robot in interactive emotion communication”, Journal of Advanced Computational Intelligence and Intelligent Informatics, vol. 14, no. 7, 2010, pp. 852–859.
- [7] Z.-T. Liu, F.-Y. Dong et al., “Proposal of Fuzzy Atmosfield for mood expression of human-robot communication”. In: Int. Symp. on Intelligent Systems, Tokyo, Japan, 2010.
- [8] Y. Yamazaki, Y. Hatakeyama et al., “Fuzzy inference based mentality expression for eye robot in Affinity Pleasure-Arousal space”, Journal of Advanced Computational Intelligence and Intelligent Informatics, vol. 12, no. 3, 2008, pp. 304–313.
- [9] L. D. Riek, “Toward natural human-robot interaction exploring facial expression synthesis on an android robot”. In: Proc. of the Doctoral Consortium at the IEEE Conf. on Affective Computing and Intelligent Interaction, Amsterdam, Netherlands, 2009.
- [10] K. Hirota and F.-Y. Dong, “Development of Mascot Robot System in NEDO project”. In: Proc. 4th IEEE Int. Conf. Intelligent Systems, 2008, pp. 38–44.
- [11] H. A. Vu, Y. Yamazaki et al., “Emotion recognition based on human gesture and speech information using RT middleware”. In: IEEE Int. Conf. on Fuzzy Systems, Taipei, Taiwan, 2011, pp. 787–791.
- [12] Y.-K. Tang, H. A. Vu et al., “Multimodal gesture recognition for Mascot Robot System based on Choquet integral using camera and 3D accelerometers fusion”, Journal of Advanced Computational Intelligence and Intelligent Informatics, vol. 15, no. 5, 2011, pp. 563–572.
- [13] Z.-T. Liu, Z. Mu et al., “Emotion recognition of violin music based on strings music theory for Mascot Robot System”. In: The 9th Int. Conf. on Informatics in Control, Automation and Robotics, Rome, Italy, 2012, pp. 5–14.
- [14] Y. Wang and L. Guan, “Recognizing human emotional state from audiovisual signals”, IEEE Transactions on Multimedia, vol. 10, no. 5, 2008, pp. 936–946.
- [15] M.-L. Song, M.-Y. You et al., “A robust multimodal approach for emotion recognition”, Neurocomputing, vol. 71, no. 10, 2008, pp. 1913–1920.
- [16] M.-J. Han, J.-H. Hsu et al., “A new information fusion method for bimodal robotic emotion recognition”, Journal of Computers, vol. 3, no. 7, 2008, pp. 39–47.
- [17] P. Ekman, “Are there basic emotions?”, Psychological Review, vol. 99, no. 3, 1992, pp. 550–553.
- [18] Z.-J. Chuang and C.-H. Wu, “Emotion recognition using acoustic features and textual content”. In: IEEE Int. Conf. on Multimedia and Expo, Taipei, Taiwan, 2004.
- [19] B. Schuller, G. Rigoll et al., “Speech emotion recognition combining acoustic features and linguistic information in a hybrid Support Vector Machine - belief network architecture”. In: IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, Quebec, Canada, 2004.
- [20] A. Lee, T. Kawahara et al., “Recent development of open-source speech recognition engine Julius”. In: Proc. of Asia-Pacific Signal and Information Processing Association, Sapporo, Japan, 2009.
- [21] C. Wu, Z. Chuang et al., “Emotion recognition from text using semantic labels and separable mixture models”, ACM Transactions on Asian Language Information Processing, vol. 5, no. 2, 2006, pp. 165–182.
- [22] B. Schuller, S. Reiter et al., “Speaker independent speech emotion recognition by ensemble classification”. In: IEEE Int. Conf. on Multimedia and Expo, Amsterdam, Netherlands, 2005.
- [23] D. Glowinski, A. Camurri et al., “Technique for automatic emotion recognition by body gesture analysis”. In: IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 2008.
- [24] J. Cao, H. Wang et al., “PAD model based facial expression analysis”, Advances in Visual Computing, Lecture Notes in Computer Science, vol. 5359, 2008, pp. 450–459.
- [25] J. Martínez-Miranda and A. Aldea, “Emotions in human and artificial intelligence”, Computers in Human Behavior, vol. 21, no. 2, 2005, pp. 323–341.
- [26] S. Zhang, Z. Wu et al., “Facial expression synthesis using PAD emotional parameters for a Chinese expressive avatar”, Affective Computing and Intelligent Interaction, Lecture Notes in Computer Science, vol. 4738, 2007, pp. 24–35.
- [27] http://www.asha.org/public/hearing/noise/.
- [28] http://www.annelawrence.com/voicesurgery.htm.
- [29] Z.-T. Liu, M. Wu, D.-Y. Li, L.-F. Chen, F.-Y. Dong, Y. Yamazaki, and K. Hirota, “Concept of Fuzzy Atmosfield for Representing Communication Atmosphere and Its Application to Humans-Robots Interaction”, Journal of Advanced Computational Intelligence and Intelligent Informatics, vol. 17, no. 1, 2013, pp. 3–17.
- [30] M. Grimm and K. Kroschel, “Emotion estimation in speech using a 3D emotion space concept”, Robust Speech Recognition and Understanding, I-Tech Education and Publishing, 2007, pp. 281–300.
- [31] F. Michaud, A. Duquette et al., “Characteristics of mobile robotic toys for children with pervasive developmental disorders”. In: IEEE Int. Conf. on Systems, Man and Cybernetics, Washington, USA, 2003.
- [32] C.-Y. Chang, C.-Y. Lo et al., “A music recommendation system with consideration of personal emotion”. In: Int. Computer Symp., Tainan, Taiwan, 2010.
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-b2e141b4-079d-473b-b281-77b4bcccb027