Results found: 2

Search results
The paper describes the relationships between speech signal representations in the layers of a convolutional neural network. Using activation maps determined by the Grad-CAM algorithm, we analysed the energy distribution in the time–frequency space and its relationship with the prosodic properties of the emotional utterances under consideration. After preliminary experiments with the expressive speech classification task, we selected the CQT-96 time–frequency representation, and in the main experimental phase of the study we used a custom CNN architecture with three convolutional layers. Based on the performed analysis, we show the relationship between activation levels and changes in the voiced parts of the fundamental frequency trajectories. As a result, the relationships between the individual activation maps, the energy distribution, and the fundamental frequency trajectories were described for six emotional states. The results show that, during learning, the convolutional neural network uses similar fragments of the time–frequency representation, which are also related to the prosodic properties of emotional speech utterances. We also analysed the relationships between the obtained activation maps and the time-domain envelopes, which allowed us to observe the importance of speech signal energy in classifying individual emotional states. Finally, we compared the energy distribution of the CQT representation with the energy of the regions overlapping the masks of individual emotional states, obtaining information on the variability of energy distributions in the selected speech signal representation for particular emotions.
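As a minimal illustration of the pipeline described above (a sketch, not the authors' implementation), the Python code below computes a log-magnitude CQT image for a single utterance and defines a CNN with three convolutional layers; the sampling rate, 96 CQT bins, octave span, layer widths, and the six-class output are assumptions made for this example only.

# Sketch: CQT-96 input features and a three-convolutional-layer CNN (assumed sizes).
import librosa
import numpy as np
import torch
import torch.nn as nn

def cqt_features(path, n_bins=96, bins_per_octave=12):
    """Load one utterance and return a log-magnitude CQT image of shape (1, n_bins, frames)."""
    y, sr = librosa.load(path, sr=22050)                      # assumed sampling rate
    C = np.abs(librosa.cqt(y, sr=sr, n_bins=n_bins, bins_per_octave=bins_per_octave))
    C_db = librosa.amplitude_to_db(C, ref=np.max)
    return torch.tensor(C_db, dtype=torch.float32).unsqueeze(0)

class EmotionCNN(nn.Module):
    """Three convolutional blocks followed by a linear classifier over six emotional states."""
    def __init__(self, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)                   # global pooling -> fixed-size vector
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        # The output of the last convolutional block is the feature map that
        # Grad-CAM would weight to obtain class-specific activation maps
        # over the CQT time-frequency image.
        fmap = self.features(x)
        return self.classifier(self.pool(fmap).flatten(1))

# Usage: logits = EmotionCNN()(cqt_features("utterance.wav").unsqueeze(0))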
An analysis of the low-level feature space for emotion recognition from speech is presented. The main goal was to determine how statistical properties computed from the contours of low-level features influence emotion recognition from speech signals. We conducted several experiments to reduce and tune our initial feature set and to configure the classification stage. In analysing the audio feature space, we employed univariate feature selection using the chi-squared test. Then, in the first stage of classification, a default set of parameters was selected for every classifier; for the classifier that obtained the best results with the default settings, the hyperparameters were tuned using cross-validation. We then compared the classification results for two different languages to examine the differences between emotional states expressed in spoken sentences. The results show that, from an initial feature set containing 3198 attributes, we obtained a dimensionality reduction of about 80% using the feature selection algorithm. The most dominant attributes selected at this stage were based on the mel and bark frequency-scale filterbanks, with their variability described mainly by the variance, median absolute deviation, and standard and average deviations. Finally, the classification accuracy of the tuned SVM classifier was 72.5% and 88.27% for emotional spoken sentences in Polish and German, respectively.
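As a rough sketch of the feature-selection and tuning steps mentioned above (not the authors' pipeline), the Python example below combines chi-squared univariate feature selection with cross-validated SVM hyperparameter tuning; the random placeholder data, the 20% feature percentile (mirroring the roughly 80% reduction from 3198 attributes), and the parameter grid are illustrative assumptions.

# Sketch: chi-squared feature selection + cross-validated SVM tuning (assumed data and grid).
import numpy as np
from sklearn.feature_selection import SelectPercentile, chi2
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

# Placeholder data: rows are utterances, columns are statistics of low-level feature contours.
X = np.random.rand(500, 3198)
y = np.random.randint(0, 6, size=500)                       # six emotional states
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

pipe = Pipeline([
    ("scale", MinMaxScaler()),                              # chi2 requires non-negative inputs
    ("select", SelectPercentile(chi2, percentile=20)),      # keep ~20% of attributes (~80% reduction)
    ("svm", SVC()),
])

# Cross-validated grid search over a small, assumed hyperparameter grid.
grid = GridSearchCV(pipe, {"svm__C": [0.1, 1, 10], "svm__gamma": ["scale", 0.01]}, cv=5)
grid.fit(X_train, y_train)
print("best parameters:", grid.best_params_)
print("held-out accuracy:", grid.best_estimator_.score(X_test, y_test))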