
Results found: 7

Search results
Searched for:
in keywords: facial expression recognition
1
Classifying and Visualizing Emotions with Emotional DAN
EN
Classification of human emotions remains an important and challenging task for many computer vision algorithms, especially in the era of humanoid robots which coexist with humans in their everyday life. Currently proposed methods for emotion recognition solve this task using multi-layered convolutional networks that do not explicitly infer any facial features in the classification phase. In this work, we postulate a fundamentally different approach to the emotion recognition task that relies on incorporating facial landmarks as a part of the classification loss function. To that end, we extend the recently proposed Deep Alignment Network (DAN) with a term related to facial features. Thanks to this simple modification, our model, called EmotionalDAN, is able to outperform state-of-the-art emotion classification methods on two challenging benchmark datasets by up to 5%. Furthermore, we visualize the image regions analyzed by the network when making a decision, and the results indicate that our EmotionalDAN model is able to correctly identify the facial landmarks responsible for expressing the emotions.
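The central modification described above, adding a facial-landmark term to the classification loss, can be pictured with a short sketch. This is a minimal illustration in PyTorch under assumed names (JointEmotionLoss, lambda_lm) and an assumed MSE landmark term; it is not the authors' EmotionalDAN implementation.

```python
# Minimal sketch of a joint emotion + landmark loss, assuming a backbone that
# returns both class logits and predicted landmark coordinates (not the
# authors' EmotionalDAN code; lambda_lm and the MSE landmark term are assumptions).
import torch
import torch.nn as nn

class JointEmotionLoss(nn.Module):
    def __init__(self, lambda_lm: float = 0.5):
        super().__init__()
        self.ce = nn.CrossEntropyLoss()   # emotion classification term
        self.mse = nn.MSELoss()           # facial-landmark regression term
        self.lambda_lm = lambda_lm        # weight of the landmark term

    def forward(self, logits, landmarks_pred, labels, landmarks_gt):
        # Total loss = classification loss + weighted landmark loss, so the
        # network is explicitly penalized for ignoring facial features.
        return self.ce(logits, labels) + self.lambda_lm * self.mse(landmarks_pred, landmarks_gt)

# Example usage with dummy tensors (batch of 8, 7 emotion classes, 68 landmarks):
loss_fn = JointEmotionLoss(lambda_lm=0.5)
logits = torch.randn(8, 7)
landmarks_pred = torch.randn(8, 68, 2)
landmarks_gt = torch.randn(8, 68, 2)
labels = torch.randint(0, 7, (8,))
loss = loss_fn(logits, landmarks_pred, labels, landmarks_gt)
```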
2
EN
Modern psychology is increasingly interested in phenomena related to the flourishing of a human being, such as mindfulness or emotional intelligence (EI). Mindfulness, according to Kabat-Zinn, is “the awareness that emerges through paying attention on purpose, in the present moment, and nonjudgmentally to the unfolding of experience moment by moment”, including the experience of emotions. The most widely studied EI concept was introduced by Salovey and Mayer, who defined it as the ability to monitor emotions and use this information to guide one’s thinking and actions. One of the skills involved in EI is the recognition of emotions based on facial expressions. Interestingly, there is no link between self-reported emotional intelligence, measured by a questionnaire, and the ability to recognize facial expressions, measured by a task-based test. Mindful people are more attuned to their implicit emotions and can reflect this awareness in their explicit self-descriptions. The purpose of this study is to examine the relationships between mindfulness and emotional intelligence, and to examine the moderating role of mindfulness in the relationship between self-reported EI and the ability to recognize facial expressions. The participants were 120 students from different universities in Lublin, Poland, who completed the Mindful Attention Awareness Scale (MAAS) by Brown and Ryan as translated into Polish by Jankowski, the Schutte Self-Report Inventory as adapted into Polish by Jaworowska and Matczak (Kwestionariusz Inteligencji Emocjonalnej; INTE), and the Emotional Intelligence Scale – Faces (Skala Inteligencji Emocjonalnej – Twarze; SIE-T) developed by Matczak, Piekarska, and Studniarek. The results show a positive relationship of emotional intelligence with mindfulness. A positive correlation was also found between mindfulness and the recognition of emotions, which is a component of EI. There was no correlation between mindfulness and the other EI component – using emotional information to guide one’s thinking and actions. As expected, there was no relationship between self-reported EI and the ability to recognize facial expressions, but – contrary to expectations – mindfulness was not a moderator of this relationship.
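The correlation and moderation analyses reported above follow a standard statistical pattern; the sketch below illustrates how such a moderation test is commonly run, using hypothetical column names (maas, inte, siet) for the three questionnaire scores. It is an illustration of the method, not the authors' analysis script.

```python
# Sketch of a standard moderation test: does MAAS (mindfulness) moderate the
# relation between INTE (self-reported EI) and SIE-T (face recognition) scores?
# File and column names are hypothetical; this is not the authors' analysis code.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("scores.csv")  # hypothetical file with one row per participant

# Zero-order correlations (mindfulness vs. EI and emotion recognition scores).
print(df[["maas", "inte", "siet"]].corr())

# Moderation is tested by the interaction term: a significant inte:maas
# coefficient would indicate that mindfulness moderates the INTE -> SIE-T link.
model = smf.ols("siet ~ inte * maas", data=df).fit()
print(model.summary())
```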
3
EN
Facial expression recognition is an important step for Human-Computer Interaction (HCI) systems. Fuzzy techniques are widely used to solve naturally occurring problems in which ambiguity is inherent. In this paper, a Genetic Algorithm (GA) is applied as a heuristic process to optimize the performance of a fuzzy system that recognizes facial expressions from images. In the proposed hybrid model, the core of the expression recognition system is a Mamdani-type fuzzy rule-based system that recognizes the emotions, while the Genetic Algorithm performs parameter optimization to improve the accuracy and robustness of the system. Acting as a training technique, the GA tunes the fuzzy membership functions under adverse conditions. To evaluate system performance and obtain the best membership-function parameters, images from the FG-Net (FEED) and Cohn-Kanade databases were used. The results show that the trained hybrid model not only increases the accuracy of emotion recognition but also increases the validity of the model under adverse conditions.
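As a rough illustration of tuning fuzzy membership functions with a genetic algorithm, the toy sketch below evolves the centers of triangular membership functions of a heavily simplified one-feature fuzzy classifier. The feature, synthetic data and GA settings are assumptions and do not reflect the Mamdani rule base or parameters used in the paper.

```python
# Toy sketch: a genetic algorithm tuning triangular membership-function centers
# of a very simplified fuzzy classifier (one feature, three emotion classes).
# The real system in the paper is a Mamdani rule base over face images; the
# feature, data and GA settings below are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: one scalar feature per sample, label in {0, 1, 2}.
X = np.concatenate([rng.normal(m, 0.05, 50) for m in (0.2, 0.5, 0.8)])
y = np.repeat([0, 1, 2], 50)

def triangular(x, center, width=0.25):
    """Triangular membership value of x for an MF centered at `center`."""
    return np.maximum(0.0, 1.0 - np.abs(x - center) / width)

def accuracy(centers):
    """Fitness: classify each sample by the MF with the highest membership."""
    memberships = np.stack([triangular(X, c) for c in centers])  # shape (3, N)
    return np.mean(np.argmax(memberships, axis=0) == y)

def evolve(pop_size=30, generations=40, mutation_sigma=0.05):
    pop = rng.uniform(0.0, 1.0, size=(pop_size, 3))         # 3 MF centers per individual
    for _ in range(generations):
        fitness = np.array([accuracy(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]  # truncation selection
        # One-point crossover between randomly paired parents.
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        cut = rng.integers(1, 3, size=pop_size)
        children = np.where(np.arange(3) < cut[:, None],
                            parents[idx[:, 0]], parents[idx[:, 1]])
        # Gaussian mutation keeps exploring the parameter space.
        pop = np.clip(children + rng.normal(0, mutation_sigma, children.shape), 0.0, 1.0)
    best = pop[np.argmax([accuracy(ind) for ind in pop])]
    return best, accuracy(best)

best_centers, best_acc = evolve()
print("tuned MF centers:", best_centers, "accuracy:", best_acc)
```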
4
EN
In this paper we present an approach to automatic annotation of facial image features that uses markers. Markers are placed on appropriate facial regions and recorded with a color camera; their placement and topology were determined using knowledge of facial morphology and facial action units. We designed a marker detection algorithm as well as a procedure for verifying the obtained results, and analyzed the performance of both. The goal of our work is to use the proposed approach to create a reference facial database for training and evaluating facial expression recognition algorithms, without the need for manual annotation of image features.
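A marker-detection step of this kind is often implemented as color thresholding followed by blob extraction; the sketch below shows one such variant in OpenCV. The HSV color range, minimum blob area and count-based verification are illustrative assumptions, not the algorithm described in the paper.

```python
# Sketch of color-marker detection in a face image: threshold a marker color
# in HSV space and return the centroids of the detected blobs. The color range
# and minimum blob area are illustrative assumptions, not the authors' values.
import cv2
import numpy as np

def detect_markers(image_bgr, hsv_low=(40, 80, 80), hsv_high=(80, 255, 255), min_area=20):
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low), np.array(hsv_high))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue                      # reject noise blobs before verification
        m = cv2.moments(c)
        centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids

# Verification step (the paper checks detections against the known marker
# topology); here we only check the expected marker count as a placeholder.
def verify(centroids, expected_count):
    return len(centroids) == expected_count
```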
5
EN
In this paper the KinectRecorder tool is described, which provides convenient and fast acquisition, indexing and storage of RGB-D video streams from the Microsoft Kinect sensor. The application is especially useful as a supporting tool for creating fully indexed databases of facial expressions and emotions, which can then be used for training and testing emotion recognition algorithms in affect-aware applications. KinectRecorder was successfully used to create the Facial Expression and Emotion Database (FEEDB), significantly reducing the time of the whole project, which consisted of data acquisition, indexing and validation. FEEDB has already been used as a training and testing dataset for several emotion recognition algorithms, which proved the utility of both the database and the KinectRecorder tool.
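The indexing side of such a recording tool can be pictured as follows: each acquired RGB-D frame pair is registered in a session index together with its expression label, so the database can later be used directly for training and testing. The file layout and field names are illustrative assumptions and do not describe FEEDB's actual format.

```python
# Sketch of indexing RGB-D recordings: each frame pair is registered in a
# JSON session index with its expression label. Layout and field names are
# illustrative assumptions, not the FEEDB or KinectRecorder format.
import json
import time
from dataclasses import dataclass, asdict
from pathlib import Path

@dataclass
class FrameRecord:
    timestamp: float   # acquisition time of the frame pair
    rgb_file: str      # path to the color frame
    depth_file: str    # path to the depth frame
    expression: str    # label assigned during indexing, e.g. "smile"

def save_index(session_dir: Path, subject_id: str, records: list[FrameRecord]) -> None:
    index = {"subject": subject_id, "frames": [asdict(r) for r in records]}
    (session_dir / "index.json").write_text(json.dumps(index, indent=2))

# Usage: register two frame pairs for one subject and write the session index.
session = Path("feedb_session_001")
session.mkdir(exist_ok=True)
records = [
    FrameRecord(time.time(), "rgb/000001.png", "depth/000001.png", "neutral"),
    FrameRecord(time.time(), "rgb/000002.png", "depth/000002.png", "smile"),
]
save_index(session, "S001", records)
```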
6
EN
The nature of computer vision means that not only computer science researchers are interested in it, but neuroscientists and psychologists as well. One of the main interests of psychology is the identification of a person's psychological traits and personality type, which can be accomplished by different means of psychological testing: questionnaires, interviews, direct observations, etc. There is also a general tendency of people to read character into a person's physical form, especially the face. With respect to the recognition of psychological characteristics, the face provides researchers and psychologists with an instrument for obtaining information about personality and psychological traits that would be much more objective than questionnaires and neuropsychological tests, and that could be obtained remotely from a person's facial portrait, with no need for personal involvement. The paper describes approaches to recognizing psychological characteristics from facial images, such as physiognomy, the phase facial portrait and ophthalmogeometry, and explains the need for automating this process.
7
Acquisition of databases for facial analysis
EN
This article describes guidelines and recommendations for the acquisition of databases for facial analysis. New devices and methods for both face recognition and facial expression recognition are constantly being developed, and dedicated datasets are recorded in order to evaluate them. Acquiring a database for facial analysis is not an easy task and requires taking multiple issues into account. Based on our experience with recording databases for facial expression recognition, we provide guidelines regarding the acquisition process. Multiple aspects of this process are discussed in this work, namely the selection of sensors and data streams, the design and structure of the database, technical aspects, acquisition conditions and the design of the user interface. Recommendations on how to address these aspects are provided and justified. Acquisition software designed according to these guidelines is also discussed. The software was used to record an extended version of our previous facial expression recognition database and proved both to ensure correct data and to be convenient for the recorded subjects.
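Several of the aspects discussed above (sensor and stream selection, database structure, acquisition conditions, user interface) can be captured in a declarative acquisition configuration that the recording software validates before a session. The sketch below shows one possible shape for such a configuration; all field names and values are illustrative assumptions rather than the guidelines' prescribed format.

```python
# Sketch of a declarative acquisition configuration covering sensors/streams,
# database structure, recording conditions and UI prompts. Field names and
# values are illustrative assumptions, not the article's recommendations.
import json

acquisition_config = {
    "sensors": ["rgb_camera", "depth_sensor"],          # selected sensors
    "streams": {"rgb": {"resolution": [1920, 1080], "fps": 30},
                "depth": {"resolution": [640, 480], "fps": 30}},
    "database": {"root": "facial_db", "layout": "subject/session/stream"},
    "conditions": {"lighting": "diffuse, constant", "background": "uniform",
                   "head_pose": "frontal"},
    "ui": {"show_preview": True, "prompt_expressions": ["neutral", "smile", "surprise"]},
}

def validate(config: dict) -> None:
    """Fail early if a declared stream has no matching sensor configuration."""
    missing = [s for s in config["streams"]
               if f"{s}_camera" not in config["sensors"]
               and f"{s}_sensor" not in config["sensors"]]
    if missing:
        raise ValueError(f"streams without a configured sensor: {missing}")

validate(acquisition_config)
with open("acquisition_config.json", "w") as f:
    json.dump(acquisition_config, f, indent=2)
```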