Article title

Emotion Analysis from Speech of Different Age Groups

Identifiers
Title variants
Conference
The Second International Conference on Research in Intelligent and Computing in Engineering
Publication languages
EN
Abstracts
EN
Recognition of speech emotion based on suitable features provides age information that can benefit society in several ways. Because the length and shape of the human vocal tract and vocal folds vary with a speaker's age, the problem remains challenging. An emotion recognition system aware of the speaker's age would help criminal investigators, psychologists, and law enforcement agencies deal with different segments of society; in particular, child psychologists and counselors could take timely preventive measures based on such a system. The task is further complicated by the fact that recognition systems trained on adult speech perform poorly on children's speech, which motivated the authors to pursue this direction. In this work, a novel effort is made to determine the age of a speaker from emotional speech prosody and to cluster the utterances using the fuzzy c-means algorithm. The results are promising: the emotional utterances can be demarcated by age.
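The clustering step named in the abstract can be sketched in code. The feature choice (mean pitch and speech rate), the group statistics, and all parameter values below are illustrative assumptions, not the authors' actual setup; this is a minimal NumPy implementation of the standard fuzzy c-means update, not the paper's pipeline.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Standard fuzzy c-means: cluster the rows of X into c fuzzy clusters.

    m > 1 is the fuzzifier; returns (centres, U), where U[i, k] is the
    membership of sample i in cluster k and each row of U sums to 1.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # random valid memberships
    for _ in range(max_iter):
        Um = U ** m
        # cluster centres are membership-weighted means of the data
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Euclidean distance from every sample to every centre
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                  # guard against zero distance
        # membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
        U_new = 1.0 / ratio.sum(axis=2)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centres, U

# Toy demo: two hypothetical prosodic features for two age groups whose
# statistics differ clearly (children tend to have higher pitch).
rng = np.random.default_rng(1)
children = rng.normal([300.0, 5.0], 10.0, size=(20, 2))  # higher mean pitch
adults = rng.normal([120.0, 4.0], 10.0, size=(20, 2))    # lower mean pitch
X = np.vstack([children, adults])
centres, U = fuzzy_c_means(X, c=2)
labels = U.argmax(axis=1)  # harden the fuzzy memberships for inspection
```

With well-separated groups like these, the hardened labels recover the two age groups; the soft memberships in `U` additionally quantify how confidently each utterance belongs to either group.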
Year
Volume
Pages
283--287
Physical description
Bibliography: 24 items, figures, tables, charts
Authors
  • Department of Electronics and Communication Engineering, Siksha 'O' Anusandhan University, Bhubaneswar, Odisha, India
  • Department of Electronics and Communication Engineering, Siksha 'O' Anusandhan University, Bhubaneswar, Odisha, India
  • Department of Electronics and Communication Engineering, Birla Institute of Technology, Mesra, Ranchi, India
Bibliography
  • 1. A. Hämäläinen, H. Meinedo, M. Tjalve, T. Pellegrini, I. Trancoso, and M. S. Dias, “Improving speech recognition through automatic selection of age group-specific acoustic models,” Springer, pp. 1, 2011.
  • 2. D. C. Tanner, and M. E. Tanner, “Forensic aspects of speech patterns: voice prints, speaker profiling, lie and intoxication detection,” Lawyers & Judges Publishing, 2004.
  • 3. E. Lyakso, O. Frolova, E. Dmitrieva, A. Grigorev, H. Kaya, A. A. Salah, and A. Karpov, “EmoChildRu: emotional child Russian speech corpus,” Speech and Computer, 17th International Conference, SPECOM 2015, Athens, Greece, pp. 144-152, 20-24 Sept. 2015.
  • 4. M. Feld, F. Burkhardt, and C. Müller, “Automatic speaker age and gender recognition in the car for tailoring dialog and mobile services,” In proc. Interspeech, Japan, pp. 2834-2837, 2010.
  • 5. R. Porat, D. Lange, and Y. Zigel, “Age recognition based on speech signals using weights supervector,” In proc. Interspeech, Japan, pp. 2814-2817, 2010.
  • 6. S. J. Chaudhari, and R. M. Kagalkar, “Automatic speaker age estimation and gender dependent emotion recognition,” International Journal of Computer Applications, vol. 117, no. 17, pp. 5-10, May 2015.
  • 7. J. Kaur, and S. Vashish, “Analysis of different clustering techniques for detecting human emotions variation through data mining,” International Journal of Computer Science Engineering and Information Technology Research (IJCSEITR), vol. 3, iss. 2, pp. 27-36, Jun. 2013.
  • 8. I. Trabelsi, D. B. Ayed, and N. Ellouze, “Comparison between GMM-SVM sequence kernel and GMM: application to speech emotion recognition,” Journal of Engineering Science and Technology, 2016 (to be published).
  • 9. V. M. M. Maca, J. P. Espada, V. G. Diaz, and V. B. Semwal, “Measurement of viewer sentiment to improve the quality of television and interactive content using adaptive content,” 2016 International Conference on Electrical, Electronics, and Optimization Techniques (ICEEOT), pp. 4445-4450, http://dx.doi.org/10.1109/ICEEOT.2016.7755559.
  • 10. V. B. Semwal, J. Singha, P. K. Sharma, A. Chauhan, and B. Behera, “An optimized feature selection technique based on incremental feature analysis for bio-metric gait data classification,” Multimedia Tools and Applications, pp. 1-19, http://dx.doi.org/10.1007/s11042-016-4110-y, Dec. 2016.
  • 11. P. Kumari, and V. Abhishek, “Information-theoretic measures on intrinsic mode function for the individual identification using EEG sensors,” IEEE Sensors Journal, vol. 15, no. 9, pp. 4950-4960, Sep. 2015.
  • 12. H. K. Palo, and M. N. Mohanty, “Classification of emotions of angry and disgust,” Smart Computing Review, vol. 5, no. 3, pp. 151-158, Jun. 2015.
  • 13. S. G. Koolagudi, and K. S. Rao, “Emotion recognition from speech: a review,” International Journal of Speech Technology, Springer, vol. 15, pp. 99-117, 2012.
  • 14. M. E. Ayadi, M. S. Kamel, and F. Karray, “Survey on speech emotion recognition: features, classification schemes, and databases,” Pattern Recognition, vol. 44, pp. 572-587, 2011.
  • 15. B. J. Benjamin, “Frequency variability in the aged voice,” Journal of Gerontology, vol. 36, no. 6, pp. 722-726, 1981.
  • 16. B. Das, S. Mandal, P. Mitra, and A. Basu, “Effect of aging on speech features and phoneme recognition: a study on Bengali voicing vowels,” International Journal of Speech Technology, vol. 16, iss. 1, Springer, pp. 19-31, Mar. 2013.
  • 17. R. Winkler, “Influences of pitch and speech rate on the perception of age from voice,” Published in Proceeding of ICPhS, Saarbrücken, pp. 1849-1852, 6-10 August 2007.
  • 18. H. K. Palo, and M. N. Mohanty, “Performance analysis of emotion recognition from speech using combined prosodic features,” Advanced Science Letters, vol. 22, no. 2, pp. 288-293 (6), Feb. 2016.
  • 19. L. R. Rabiner, M. Cheng, A. Rosenberg, and C. McGonegal, “A comparative performance study of several pitch detection algorithms,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 24, no. 5, pp. 399-418, Oct. 1976.
  • 20. S. Paulmann, M. D. Pell, and S. A. Kotz, “How aging affects the recognition of emotional speech,” Brain and Language, vol. 104, no. 3, pp. 262-269, Mar. 2008.
  • 21. P. Laukka, P. Juslin, and R. Bresin, “A dimensional approach to vocal expression of emotion,” Cognition and Emotion, vol. 19, no. 5, pp. 633-653, Aug. 2005.
  • 22. X. A. Rathina, K. M. Mehata, and M. Ponnavaikko, “Basic analysis on prosodic features in emotional speech,” International Journal of Computer Science, Engineering and Applications (IJCSEA), vol. 2, no. 4, Aug. 2012.
  • 23. R. Banse, and K. R. Scherer, “Acoustic profiles in vocal emotion expression,” Journal of Personality and Social Psychology, vol. 70, no. 3, pp. 614-636, Mar. 1996.
  • 24. D. A. Sauter, F. Eisner, A. J. Calder, and S. K. Scott, “Perceptual cues in nonverbal vocal expressions of emotion,” The Quarterly Journal of Experimental Psychology, vol. 63, no. 11, pp. 2251-2272, Apr. 2010.
Document type
YADDA identifier
bwmeta1.element.baztech-5fca5b13-0ea2-40eb-ade4-85e1250394c9