Music can evoke a range of emotions in listeners, and these emotional responses are reflected in physiological signals. Advances in affective computing have introduced computational methods to analyse these signals and to understand the relationship between music and emotion in greater detail. We analyse Electrodermal Activity (EDA), Blood Volume Pulse (BVP), Skin Temperature (ST) and Pupil Dilation (PD) collected from 24 participants while they listened to 12 pieces from 3 different genres of music. A set of 34 features was extracted from each signal, and 6 different feature selection methods were applied to identify the most useful features. Empirical analysis shows that a neural network (NN) trained on features extracted from the physiological signals can achieve 99.2% accuracy in differentiating among the 3 music genres. The model also reaches 98.5% accuracy in classification based on participants' subjective ratings of emotion. The paper also identifies features that improve the accuracy of the classification models. Furthermore, we introduce a new technique called 'Gingerbread Animation' that visualises the recorded physiological signals as a video, making them more comprehensible to the human eye and suitable as input for computer vision techniques such as Convolutional Neural Networks (CNNs). Overall, our results provide strong motivation to further investigate the relationship between physiological signals and music, which could lead to improvements in music therapy for mental health care and in reducing musicogenic epilepsy (our long-term goal).
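
To make the summarised pipeline concrete, the sketch below shows one way to extract simple statistical features from multi-channel physiological recordings, select a subset of them, and train a small neural network classifier. It is a minimal illustration under assumed choices (the feature set, the `SelectKBest` selector, and the MLP architecture are placeholders), not the paper's exact 34-feature set, 6 selection methods, or reported model.

```python
# Illustrative sketch only: feature extraction + feature selection + NN classifier.
# The features, selector, and network here are assumptions for demonstration.
import numpy as np
from scipy import stats
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score


def extract_features(signal: np.ndarray) -> np.ndarray:
    """Compute a few statistical features from a 1-D physiological signal."""
    diff = np.diff(signal)
    return np.array([
        signal.mean(), signal.std(), signal.min(), signal.max(),
        stats.skew(signal), stats.kurtosis(signal),
        diff.mean(), diff.std(),
    ])


def build_feature_matrix(recordings: list[dict[str, np.ndarray]]) -> np.ndarray:
    """Concatenate per-signal features (e.g. EDA, BVP, ST, PD) for each trial."""
    return np.vstack([
        np.concatenate([extract_features(sig) for sig in rec.values()])
        for rec in recordings
    ])


# Hypothetical data: one dict of signals per listening trial, plus genre labels.
rng = np.random.default_rng(0)
recordings = [
    {name: rng.normal(size=2560) for name in ("EDA", "BVP", "ST", "PD")}
    for _ in range(72)
]
genres = rng.integers(0, 3, size=72)  # 3 genres, placeholder labels

X = build_feature_matrix(recordings)
clf = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=16),  # simple univariate feature selection
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
)
print(cross_val_score(clf, X, genres, cv=5).mean())
```

With real recordings in place of the synthetic arrays, the same pipeline structure (per-signal feature extraction, scaling, selection, classification) can be evaluated with cross-validation as shown.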