
Results found: 4

Search results
Searched for keyword: emotion detection

EN
Nowadays, Twitter is one of the most popular microblogging sites and generates a massive amount of textual data. This textual data carries human feelings and opinions about related events through tweets, posts, and status updates. Identifying and classifying emotions in tweets is difficult because of their restricted length and the diversity of the data. Emotion analysis addresses this by identifying and classifying emotions in text generated on social media platforms. This work proposes an efficient classification and prediction technique for analyzing emotions in textual data collected from Twitter: an enhanced deep neural network (EDNN) based on a hierarchical Bi-LSTM model that classifies six emotions, namely sadness, love, joy, surprise, fear, and anger. The emotion analysis results obtained by the proposed hierarchical Bi-LSTM model are compared with and validated against a traditional hybrid CNN-LSTM approach in terms of accuracy, recall, precision, and F1-score. The results show that the proposed hierarchical Bi-LSTM achieves an average accuracy of 89% for emotion analysis, whereas the existing CNN-LSTM model achieves an overall accuracy of 75%, indicating that the hierarchical Bi-LSTM approach delivers the desired performance compared to the CNN-LSTM model.
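For illustration only, the following is a minimal sketch of a stacked ("hierarchical") Bi-LSTM text classifier for the six emotion labels, assuming a TensorFlow/Keras setup; the vocabulary size, sequence length, and layer widths are placeholder assumptions and are not values reported in the paper.

```python
# Minimal sketch of a stacked ("hierarchical") Bi-LSTM classifier for six
# emotion labels (sadness, love, joy, surprise, fear, anger).
# Hyperparameters below are illustrative assumptions, not the paper's values.
from tensorflow.keras import layers, models

VOCAB_SIZE = 20_000   # assumed tokenizer vocabulary
MAX_LEN = 60          # tweets are short; assumed padded length
NUM_CLASSES = 6       # sadness, love, joy, surprise, fear, anger

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, 128),
    # The lower Bi-LSTM returns the full sequence so the second Bi-LSTM can
    # summarize it, giving a two-level (hierarchical) representation.
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Bidirectional(layers.LSTM(32)),
    layers.Dropout(0.3),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# Training would look like:
# model.fit(x_train, y_train, validation_split=0.1, epochs=5, batch_size=64)
```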
2
REGA: Real-Time Emotion, Gender, Age Detection Using CNN - A Review
EN
In this paper we describe a methodology and an algorithm to estimate the age, gender, and emotion of a person in real time by analyzing face images captured from a webcam, and we discuss a CNN-based architecture for designing such a real-time model. Emotion, gender, and age detection from webcam facial images plays an important role in many applications, such as forensics, security control, data analysis, video observation, and human-computer interaction. We also present methods and techniques such as PCA, LBP, SVM, Viola-Jones, and HOG, which are used directly or indirectly to recognize human emotion, gender, and age under various conditions.
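As a rough illustration of this kind of pipeline, the sketch below runs Viola-Jones (OpenCV's Haar cascade) face detection on webcam frames and marks where a trained CNN would predict emotion, gender, and age; the input size and the cnn_model name are hypothetical placeholders, not details taken from the paper.

```python
# Minimal sketch: Viola-Jones (Haar cascade) face detection on webcam frames,
# with a placeholder hook where a trained CNN would classify each face.
# The CNN itself is assumed to exist already; its architecture is not shown.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)          # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))  # assumed CNN input size
        # label = cnn_model.predict(face[None, ..., None])   # hypothetical trained model
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("REGA sketch", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```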
EN
Various automated and semi-automated medical diagnosis systems based on human physiology have gained enormous popularity and importance in recent years. Physiological features exhibit several unique characteristics that contribute to the reliability, accuracy, and robustness of such systems. There has also been significant research on detecting conventional positive and negative emotions after presenting laboratory-based stimuli to participants. This paper presents a comprehensive survey of the following facets of mental stress detection systems: physiological data collection, the role of machine learning in emotion detection and stress detection systems, evaluation measures, challenges, and applications. An overview of popular feature selection methods is also presented. An important contribution is the exploration of links between human biological features and emotions and mental stress. The numerous research gaps in this field are highlighted, paving the way for future research.
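To make one of the surveyed steps concrete, the sketch below ranks a handful of physiological features against a binary stress label with a univariate filter (scikit-learn's SelectKBest); the feature names and data are synthetic placeholders, not material from the survey.

```python
# Minimal sketch of a feature-selection step: ranking physiological features
# (heart rate, GSR, skin temperature, ...) by relevance to a stress label.
# The data is synthetic and the feature names are hypothetical placeholders.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(0)
feature_names = ["heart_rate", "hrv_rmssd", "gsr_mean", "skin_temp", "resp_rate"]
X = rng.normal(size=(200, len(feature_names)))   # synthetic physiological features
y = rng.integers(0, 2, size=200)                 # synthetic stress / no-stress labels

selector = SelectKBest(score_func=f_classif, k=3).fit(X, y)
ranked = sorted(zip(feature_names, selector.scores_), key=lambda p: -p[1])
for name, score in ranked:
    print(f"{name:>12s}  F-score = {score:.2f}")
```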
EN
Machine recognition of human emotional states is an essential part of improving man-machine interaction. During expressive speech the voice conveys the semantic message as well as information about the emotional state of the speaker. The pitch contour is one of the most significant properties of speech affected by the emotional state, so pitch features are commonly used in systems for automatic emotion detection. In this work, different intensities of emotions and their influence on pitch features are studied; this understanding is important for developing such a system. Intensities of emotions are represented on Plutchik's cone-shaped 3D model. The k-nearest-neighbor algorithm is used for classification, which is divided into two stages: first the primary emotion is detected, then its intensity is specified. The results show that the recognition accuracy of the system is over 50% for primary emotions and over 70% for their intensities.
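Below is a minimal sketch of such a two-stage scheme, assuming pitch statistics extracted with librosa and two kNN classifiers (one for the primary emotion, one for its intensity); the exact feature set, training corpus, and staging details are not specified in the abstract, so they appear here only as hypothetical placeholders.

```python
# Minimal sketch: pitch features feed a kNN classifier that first predicts the
# primary emotion and then, with a second kNN, its intensity. The pitch
# statistics and training arrays are assumptions for illustration only.
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

def pitch_features(path):
    """Summarize the pitch contour of an utterance as a small feature vector."""
    y, sr = librosa.load(path, sr=None)
    f0, voiced, _ = librosa.pyin(y, fmin=65.0, fmax=500.0, sr=sr)
    f0 = f0[~np.isnan(f0)]                      # keep voiced frames only
    if f0.size == 0:
        return np.zeros(4)
    return np.array([f0.mean(), f0.std(), f0.max() - f0.min(),
                     np.mean(np.abs(np.diff(f0)))])

# X_train, y_emotion, y_intensity are hypothetical arrays built from a
# labelled emotional-speech corpus (not specified in the abstract).
# primary_knn   = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_emotion)
# intensity_knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_intensity)
# feats = pitch_features("utterance.wav").reshape(1, -1)
# emotion   = primary_knn.predict(feats)        # stage 1: primary emotion
# intensity = intensity_knn.predict(feats)      # stage 2: its intensity
```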