In recent years, research in automated facial expression recognition has attracted significant attention owing to its potential applications in human–computer interaction, surveillance systems, animation, and consumer electronics. However, recognition in uncontrolled environments, in the presence of illumination and pose variations, low-resolution video, occlusion, and random noise, remains a challenging research problem. In this paper, we investigate facial expression recognition under such difficult conditions by means of an effective facial feature descriptor, namely the directional ternary pattern (DTP). Given a face image, the DTP operator describes the local facial texture by quantizing the eight directional edge response values, capturing essential texture properties such as the presence of edges, corners, points, and lines. We also present an enhancement of the basic DTP encoding method, namely the compressed DTP (cDTP), which describes the local texture more effectively with fewer features. The recognition performance of the proposed DTP and cDTP descriptors is evaluated on the Cohn–Kanade (CK) and the Japanese female facial expression (JAFFE) databases. In our experiments, we simulate difficult conditions using the original database images with lighting variations, low-resolution images obtained by down-sampling the originals, and images corrupted with Gaussian noise. In all cases, the proposed descriptors outperform several well-known facial feature descriptors.
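To make the encoding idea concrete, the sketch below illustrates one plausible reading of the DTP operator as described above: eight directional edge responses are computed per pixel and then ternary-quantized into a single code, with regional code histograms concatenated into a feature vector. The choice of Kirsch compass masks, the threshold value tau, and the 4x4 region grid are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.ndimage import convolve

# Eight Kirsch compass masks (an assumed choice of directional edge
# operator; the abstract only states that eight directional edge
# responses are quantized).
KIRSCH_MASKS = [
    np.array([[-3, -3,  5], [-3, 0,  5], [-3, -3,  5]]),  # East
    np.array([[-3,  5,  5], [-3, 0,  5], [-3, -3, -3]]),  # North-East
    np.array([[ 5,  5,  5], [-3, 0, -3], [-3, -3, -3]]),  # North
    np.array([[ 5,  5, -3], [ 5, 0, -3], [-3, -3, -3]]),  # North-West
    np.array([[ 5, -3, -3], [ 5, 0, -3], [ 5, -3, -3]]),  # West
    np.array([[-3, -3, -3], [ 5, 0, -3], [ 5,  5, -3]]),  # South-West
    np.array([[-3, -3, -3], [-3, 0, -3], [ 5,  5,  5]]),  # South
    np.array([[-3, -3, -3], [-3, 0,  5], [-3,  5,  5]]),  # South-East
]

def dtp_codes(gray, tau=15.0):
    """Ternary-quantize the eight directional edge responses per pixel.

    Responses above +tau map to +1, below -tau to -1, otherwise 0;
    the eight ternary digits are packed into one base-3 code.
    tau is an illustrative threshold, not a value from the paper.
    """
    responses = np.stack(
        [convolve(gray.astype(np.float64), m, mode="nearest")
         for m in KIRSCH_MASKS],
        axis=-1,
    )
    ternary = np.zeros(responses.shape, dtype=np.int64)
    ternary[responses > tau] = 1
    ternary[responses < -tau] = -1
    digits = ternary + 1                 # shift {-1, 0, +1} to {0, 1, 2}
    weights = 3 ** np.arange(8)          # base-3 positional weights
    return (digits * weights).sum(axis=-1)

def dtp_histogram(gray, tau=15.0, grid=(4, 4)):
    """Concatenate regional DTP-code histograms into a feature vector."""
    codes = dtp_codes(gray, tau)
    h, w = codes.shape
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = codes[i * h // grid[0]:(i + 1) * h // grid[0],
                          j * w // grid[1]:(j + 1) * w // grid[1]]
            hist, _ = np.histogram(block, bins=np.arange(3 ** 8 + 1))
            feats.append(hist)
    return np.concatenate(feats)
```

In this reading, the full ternary code space has 3^8 = 6561 bins per region; the compressed cDTP variant mentioned in the abstract presumably reduces this dimensionality, though the specific compression scheme is not described here.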