The presented study concerns the development of a face detection algorithm that operates robustly in the thermal infrared spectrum. The paper gives a brief review of existing face detection algorithms and describes the experiment methodology and the selected algorithms. For the comparative study, three methods representing three different approaches were chosen: Viola-Jones, YOLOv2 and Faster R-CNN. All three algorithms were investigated with various configurations and parameters and evaluated on three publicly available thermal face datasets. The original results of the experiments for the selected algorithms are compared.
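Evaluating detectors against labeled datasets, as described above, is conventionally done by matching predicted boxes to ground-truth boxes by intersection-over-union. A minimal sketch of such scoring (the 0.5 threshold and greedy matching are common conventions, not taken from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax1, ay1, aw, ah = box_a
    bx1, by1, bw, bh = box_b
    ix = max(0, min(ax1 + aw, bx1 + bw) - max(ax1, bx1))
    iy = max(0, min(ay1 + ah, by1 + bh) - max(ay1, by1))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def match_detections(detections, ground_truth, threshold=0.5):
    """Greedy matching: count true positives, false positives, false negatives."""
    unmatched = list(ground_truth)
    tp = 0
    for det in detections:
        best = max(unmatched, key=lambda gt: iou(det, gt), default=None)
        if best is not None and iou(det, best) >= threshold:
            tp += 1
            unmatched.remove(best)
    fp = len(detections) - tp
    fn = len(unmatched)
    return tp, fp, fn
```

From the per-image (tp, fp, fn) counts, precision and recall for each detector configuration follow directly.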
In this paper, we propose a method to locate human faces in color images. In color images, skin color is an important key feature for face detection, so this paper applies skin color information to segment face areas: first, we reduce the size of the color image while holding its aspect ratio, and detect skin color pixels in the resized image based on the properties of skin color. Next, our method uses the detected pixels and chain coding to locate continuous areas. Finally, we check the width and height of each located area and filter out the non-facial ones; the remaining areas are the face regions. To fully explore the efficiency and effectiveness of the proposed method, we conducted many experiments on test images used in other papers. The experimental results show that both the false positive and false negative rates are equal to or better than those obtained in previous research. Moreover, in most of the experiments, the processing time of our method is shorter.
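The pipeline above (skin-pixel detection, continuous-area location, width/height filtering) can be sketched as follows. The RGB skin rule is a widely used heuristic and BFS flood fill stands in for the paper's chain coding; both are assumptions, not the authors' exact method:

```python
from collections import deque

def is_skin(r, g, b):
    # A commonly used RGB heuristic (an assumption, not the paper's exact rule).
    return (r > 95 and g > 40 and b > 20 and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15 and r > g and r > b)

def face_candidates(image, min_size=2, ratio_range=(0.6, 1.8)):
    """image: 2D list of (r, g, b) tuples. Returns bounding boxes (x, y, w, h)
    of connected skin regions whose height/width ratio looks face-like."""
    h, w = len(image), len(image[0])
    mask = [[is_skin(*image[y][x]) for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y0 in range(h):
        for x0 in range(w):
            if not mask[y0][x0] or seen[y0][x0]:
                continue
            # BFS flood fill stands in for the paper's chain-code tracing.
            queue = deque([(y0, x0)])
            seen[y0][x0] = True
            xs, ys = [x0], [y0]
            while queue:
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                        seen[ny][nx] = True
                        xs.append(nx); ys.append(ny)
                        queue.append((ny, nx))
            bw, bh = max(xs) - min(xs) + 1, max(ys) - min(ys) + 1
            if bw >= min_size and bh >= min_size and ratio_range[0] <= bh / bw <= ratio_range[1]:
                boxes.append((min(xs), min(ys), bw, bh))
    return boxes
```

The `ratio_range` filter corresponds to the final width/height check that rejects non-facial regions.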
As a fundamental task in many applications, human face detection is one of the most active research fields at present. Most research has been confined to visible-light images; the potential of the illumination-invariant features of infrared images has received little attention. The SARS epidemic led to the introduction of IR thermal cameras for non-voluntary screening of human faces for fever symptoms, but these do not operate automatically. This paper presents an automatic human face detection method based on SVM; the inherent suitability of SVM for this problem is discussed. A smart biometric system that automatically detects human faces in infrared video and performs temperature measurement is implemented. The potential for illumination-invariant face recognition using thermal IR imagery is thus fully utilized.
In this paper an FPGA-based embedded vision system for face detection is presented. A sliding detection window, the HOG+SVM algorithm and multi-scale image processing are used and extensively described. The applied parallelization of computations made it possible to process a 1280 × 720 @ 50 Hz video stream in real time. The presented module has been verified on the Zybo development board with a Zynq SoC device from Xilinx. It can be used in a vast number of vision systems, including driver fatigue monitoring.
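The HOG descriptor at the heart of this pipeline is built from per-cell orientation histograms. A minimal pure-Python sketch of one such cell histogram (9 unsigned bins, as in the classic HOG formulation; the FPGA design computes such histograms in parallel hardware, and the bin count here is the conventional default, not taken from the paper):

```python
import math

def hog_cell(cell, bins=9):
    """Unsigned-gradient orientation histogram for one grayscale cell (2D list).
    This is the basic HOG building block; block normalization is omitted."""
    h, w = len(cell), len(cell[0])
    hist = [0.0] * bins
    for y in range(h):
        for x in range(w):
            # Central differences, replicated at the borders.
            gx = cell[y][min(x + 1, w - 1)] - cell[y][max(x - 1, 0)]
            gy = cell[min(y + 1, h - 1)][x] - cell[max(y - 1, 0)][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180  # unsigned orientation
            hist[int(ang // (180 / bins)) % bins] += mag
    return hist
```

A full descriptor concatenates such histograms over all cells of the detection window and normalizes them block-wise before the SVM.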
Long-duration driving is a significant cause of fatigue-related accidents of cars, airplanes, trains and other means of transport. This paper presents the design of a detection system which can be used to detect fatigue in drivers. The system is based on computer vision, with the main focus on the eye blink rate. We propose an algorithm for eye detection that extracts the face region from the video image, evaluates the eye region, and finally detects the iris of the eye using a binary image. The advantage of this system is that the algorithm works without any constraint on the background, as the face is detected using a skin segmentation technique. The detection performance of the system was tested on video sequences recorded under laboratory conditions. The applicability of the system to driver fatigue detection is discussed.
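Once the iris detection yields a per-frame open/closed eye state, the blink-rate statistic such a system relies on reduces to counting closed-eye runs. A sketch (the minimum run length is an assumed debouncing parameter, not from the paper):

```python
def count_blinks(eye_open, min_closed_frames=2):
    """Count blinks in a per-frame eye-state sequence (True = open).
    A blink is a run of at least `min_closed_frames` closed frames."""
    blinks = 0
    run = 0
    for is_open in eye_open:
        if not is_open:
            run += 1
        else:
            if run >= min_closed_frames:
                blinks += 1
            run = 0
    if run >= min_closed_frames:  # sequence ends mid-blink
        blinks += 1
    return blinks

def blink_rate_per_minute(eye_open, fps):
    seconds = len(eye_open) / fps
    return 60.0 * count_blinks(eye_open) / seconds if seconds else 0.0
```

A fatigue monitor would compare this rate (or blink duration) against a baseline for the alert driver.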
Results of an investigation of the efficiency of face detection algorithms in a banking client visual verification system are presented. The video recordings were made under real conditions in three bank operating outlets using a miniature industrial USB camera. The aim of the experiments was to check the practical usability of face detection methods in a biometric bank client verification system; the main assumption was to keep user interaction with the application as simple as possible. The applied face detection algorithms are described and the detection results achieved in the real bank environment are presented. Practical limitations of the application, based on the problems encountered, are discussed.
Face detection, a challenging problem in computer vision, can be used as a major step in face recognition. The challenges of face detection in color images include illumination differences, varying camera characteristics, different ethnicities, and other distinctions. In order to detect faces in color images, skin detection can first be applied to the image. Numerous methods have been utilized for human skin color detection, including Gaussian models, rule-based methods, and artificial neural networks. In this paper, we present a novel neural network-based technique for skin detection, introducing a skin segmentation process for finding faces in color images.
One of the simplest features used for the human face detection problem is skin color information. A simple and relatively efficient histogram-based algorithm for segmenting skin pixels from a complex background is presented. The histogram-based algorithm used here, referred to as a lookup table (LUT), is adopted to identify the intervals which fall within the skin locus plane. For that purpose, a total of 306,401 skin samples were manually collected from RGB color images to calculate three lookup tables based on the relationship between each pair of the three components (R, G, B). To estimate the skin locus boundary, a skin classifier box is created by integrating three proposed heuristic rules based on how often each RGB pixel relationship falls into its interval.
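The idea of learning an interval for each pair of RGB components from labeled skin samples can be sketched as follows. Using plain component differences and a coverage percentile is a simplification of the paper's three histogram LUTs, chosen here only to illustrate the "classifier box" idea:

```python
def build_pair_luts(skin_samples, coverage=0.95):
    """From labeled (r, g, b) skin pixels, learn an interval for each
    component-pair difference (R-G, R-B, G-B) covering `coverage` of the
    samples. A simplified stand-in for the paper's three lookup tables."""
    pairs = [(0, 1), (0, 2), (1, 2)]  # (R,G), (R,B), (G,B)
    luts = []
    for i, j in pairs:
        diffs = sorted(s[i] - s[j] for s in skin_samples)
        cut = int(len(diffs) * (1 - coverage) / 2)  # trim the tails
        luts.append((diffs[cut], diffs[len(diffs) - 1 - cut]))
    return luts

def classify_skin(pixel, luts):
    """Skin if every pair difference lies inside its learned interval
    (the 'skin classifier box')."""
    pairs = [(0, 1), (0, 2), (1, 2)]
    return all(lo <= pixel[i] - pixel[j] <= hi
               for (i, j), (lo, hi) in zip(pairs, luts))
```

With the intervals precomputed, classifying a pixel costs only three comparisons per pair, which is what makes LUT-style skin segmentation fast.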
Two common channels through which humans communicate are speech and gaze. Eye gaze is an important mode of communication: it allows people to better understand each other's intentions, desires, interests, and so on. The goal of this research is to develop a framework for gaze-triggered events that can be executed on a robot and on mobile devices and that allows experiments to be performed. We experimentally evaluate the framework and the techniques implemented in it for extracting gaze direction based on a robot-mounted camera or a mobile-device camera. We investigate the impact of light on the accuracy of gaze estimation, and also how the overall accuracy depends on user eye and head movements. Our research shows that light intensity is important and that the placement of the light source is crucial. All the robot-mounted gaze detection modules we tested were found to be similar with regard to their accuracy. The framework was tested in a human-robot interaction experiment involving a job-interview scenario. The flexible structure of this scenario allowed us to test different components of the framework in varied real-world settings, which was very useful for progressing towards our long-term goal of designing intuitive gaze-based interfaces for human-robot communication.
Ensuring safety requires the use of access control systems. Traditional systems typically use proximity cards, while modern systems use biometrics to identify the user. Using biological characteristics for identification ensures a high degree of safety; in addition, biological characteristics can be neither lost nor stolen. This paper presents a proposal for an access control system using the face image. The system operates in real time using a camera image.
Principal component analysis (PCA) has various important applications, especially in pattern detection, such as face detection and recognition. In real-time applications, the response time must be as short as possible. In this paper, a new implementation of PCA for fast face detection is presented. Such implementation relies on performing cross-correlation in the frequency domain between the input image and eigenvectors (weights). Furthermore, this approach is developed to reduce the number of computation steps required by fast PCA. The "divide and conquer" principle is applied through image decomposition. Each image is divided into smaller-size sub-images, and then each of them is tested separately using a single fast PCA processor. In contrast to using only fast PCA, the speed-up ratio increases with the size of the input image when using fast PCA and image decomposition. Simulation results demonstrate that the proposed algorithm is faster than conventional PCA. Moreover, experimental results for different images show its good performance. The proposed fast PCA increases the speed of face detection, and at the same time does not affect the performance or detection rate.
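The speed gain comes from the correlation theorem: correlation in the spatial domain equals a pointwise product of spectra in the frequency domain. A 1-D pure-Python illustration of this trick (the paper works in 2-D on images and eigenvector weights; the plain O(n²) DFT here is for clarity, and a real implementation would use an FFT):

```python
import cmath

def dft(x, inverse=False):
    """Plain discrete Fourier transform (O(n^2), for illustration only)."""
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * i * k / n)
               for k in range(n)) for i in range(n)]
    return [v / n for v in out] if inverse else out

def cross_correlate_freq(signal, template):
    """Circular cross-correlation via the frequency domain:
    corr = IDFT(DFT(signal) * conj(DFT(template)))."""
    n = len(signal)
    template = list(template) + [0.0] * (n - len(template))  # zero-pad
    fs, ft = dft(signal), dft(template)
    prod = [a * b.conjugate() for a, b in zip(fs, ft)]
    return [v.real for v in dft(prod, inverse=True)]

def best_match(signal, template):
    """Shift with the strongest correlation response."""
    corr = cross_correlate_freq(signal, template)
    return max(range(len(corr)), key=corr.__getitem__)
```

The "divide and conquer" step then applies the same correlation to each sub-image independently, which is why the speed-up grows with input size.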
Mainstream automatic speech recognition has focused almost exclusively on the acoustic signal. The performance of these systems degrades considerably in the real world in the presence of noise. Novel approaches are needed that use sources of information orthogonal to the acoustic input, which not only considerably improve performance in severely degraded conditions but are also independent of the type of noise and reverberation. Visual speech is one such source, unperturbed by the acoustic environment and noise. In this paper we present our approach to lip tracking for an audio-visual speech recognition system. We present video analysis of visual speech for extracting visual features from a talking person in color video sequences. We developed a method for automatic detection of the face, eyes, lip region, lip corners and lip contour. Finally, the paper shows lip-tracking results depending on various factors (lighting, beard).
This paper presents a novel gamma correction technique which compensates for illumination variety in face detection. The suggested transformations increase the contrast of dark parts of the image, which increases the face detection rate.
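The correction itself is a power-law mapping, typically implemented as a 256-entry lookup table for 8-bit images. A minimal sketch (the paper proposes modified transformations; the plain power law below is only the standard baseline it builds on):

```python
def gamma_correct(gray, gamma=0.5):
    """Apply gamma correction to an 8-bit grayscale image (2D list).
    gamma < 1 brightens dark regions, raising contrast where detail hides."""
    lut = [round(255 * (v / 255) ** gamma) for v in range(256)]
    return [[lut[v] for v in row] for row in gray]
```

With gamma = 0.5, a dark pixel value of 64 maps to 128, while black and white stay fixed, so detail in shadowed face regions becomes easier to detect.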
In this paper, a fast biometric system for personal identification through face recognition is introduced. In the detection phase, a fast algorithm for face detection is combined with cooperative modular neural networks (MNNs) to enhance the performance of the detection process. A simple design for cooperative modular neural networks is described, which solves the problem by dividing the data into three groups. Furthermore, a new, faster face detection approach is presented, based on decomposing the image into many sub-images and applying cross-correlation in the frequency domain between each sub-image and the weights of the hidden layer. For the recognition phase, a new concept of rotation invariance based on Fourier descriptors and neural networks is presented. Although the magnitude of the Fourier descriptors is translation invariant, there is no need for scaling or translation invariance, because the face sub-image (20 x 20 pixels) is segmented from the whole image during the detection process. The feature extraction algorithm based on Fourier descriptors is modified to reduce the number of neurons in the hidden layer. The second stage extracts wavelet coefficients of the resulting Fourier descriptors before they are applied to the neural network. The final vector is fed to a neural net for face classification. Moreover, a modified hierarchical soft decision tree of neural networks is introduced for face recognition. Compared with previous results, the proposed algorithm shows good performance in recognizing human faces with glasses, a beard, rotation, scaling, occlusion, noise, or changes in illumination. The response time is also reduced.
For a driver monitoring system, one of the most important problems to solve is rapid face detection. This paper presents an efficient approach to fast and accurate face detection in gray-level videos. Face candidates at different scales are selected by finding regions based on the Mask Transform (MT). All the face candidates are then verified using support vector machines (SVMs) based on multi-scale 2D Walsh-Hadamard features, to identify the real faces. Head pose is estimated on the basis of the accurate face detection; it is analyzed with a proposed Bilateral-projection Matrix Principal Component Analysis (BMPCA) algorithm. Experimental results on many videos show that the algorithm detects the driver's face rapidly and estimates the head pose accurately, and that the method is robust to illumination changes, glasses and different head poses with moderate rotations.
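A Walsh-Hadamard transform needs only additions and subtractions, which is what makes the verification features cheap to compute. A minimal in-place fast WHT sketch (1-D and unnormalized; the paper uses multi-scale 2-D features, so this shows only the core butterfly):

```python
def walsh_hadamard(x):
    """In-place fast Walsh-Hadamard transform; length must be a power of two.
    Uses only additions and subtractions (no multiplications)."""
    x = list(x)
    n = len(x)
    assert n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b  # butterfly step
        h *= 2
    return x
```

A 2-D transform applies the same butterfly to rows and then columns; the resulting coefficients serve as the feature vector for the SVM verifier.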
In this paper we present the design of an integrated microsystem for face detection in digital images, based on the new Zynq SoC from Xilinx [1]. Zynq is a new class of SoC which combines an industry-standard ARM dual-core Cortex-A9 processing system with 28 nm programmable logic. This processor-centric architecture delivers a comprehensive platform that offers ASIC levels of performance and power consumption together with the ease of programmability and the flexibility of an FPGA. The proposed face detection algorithm operates on images with a resolution of 640x480 pixels and 24-bit color coding. It uses three-stage processing: normalization, face detection/location [2] and feature extraction. We implemented the algorithm in two ways: (1) in MATLAB on a PC, and (2) on a hardware platform based on the ZedBoard from Avnet [3] with a Zynq XC7Z020 SoC. Both implementations were examined in terms of complexity and speed. The hardware implementation achieved a comparable speed of face detection/location but was over 10 times faster at extracting facial features from digital images. This significant speedup of feature extraction results from the parallelized architecture of a hardware accelerator that calculates mouth and eye locations. The proposed microsystem may be used in low-cost mobile applications for detecting human faces in digital images. Since the system runs the Linux kernel, it can be easily integrated with other mobile applications, including web services running on handheld terminals with the Android operating system.
The paper presents an algorithm for frontal face detection. First, a set of face candidates is selected based on ellipse detection with the Hough transform. Subsequently, every candidate is verified, and on positive verification the detection precision is improved, which is particularly important for face recognition purposes. Results of the conducted experiments, which are discussed in the paper, confirm the high speed and effectiveness of the algorithm.
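Hough-transform shape detection accumulates votes from edge points in a parameter space. Since a full ellipse has a five-dimensional parameter space, the sketch below uses a fixed-radius circle accumulator as a simplified stand-in to show the voting idea (the radius, grid size and angular step are arbitrary illustration choices, not from the paper):

```python
import math

def hough_circle(edge_points, radius, width, height):
    """Vote for circle centres of a fixed radius given edge pixels.
    Each edge point votes along a circle of possible centres; the cell
    with the most votes is the best centre candidate."""
    acc = [[0] * width for _ in range(height)]
    for (x, y) in edge_points:
        for deg in range(0, 360, 4):
            a = int(round(x - radius * math.cos(math.radians(deg))))
            b = int(round(y - radius * math.sin(math.radians(deg))))
            if 0 <= a < width and 0 <= b < height:
                acc[b][a] += 1
    best = max((acc[b][a], a, b) for b in range(height) for a in range(width))
    return (best[1], best[2]), best[0]  # centre, vote count
```

In the face detector, each accumulator peak yields a candidate head outline that is then passed to the verification stage.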
An algorithm enabling fully automatic detection of characteristic facial areas on thermograms captured in the anterior projection is presented in the paper. The development and applications of medical thermography are also discussed, and different types of headaches and methods for their analysis are described. The regions of the forehead (denoted CL, CP), eye sockets (OL, OP) and maxillary sinuses (NL, NP) are assumed to be the areas medically essential for headache diagnosis. Thermograms were obtained from AGEMA 590 and ThermaCam S65 thermovision cameras. The algorithm correctly detects the required head areas independently of the head position in the picture and of its rotation within the range -50 to +50 degrees. Methods of mathematical morphology, active contours, templates and the Hough transform were used for the analysis. After correct detection, the area of the regions as well as their mean, minimum and maximum temperature are measured automatically. At the end of the paper an exemplary application of the algorithm for preliminary diagnosis of the type and course of a headache is presented, together with the results of segmentation of the face areas. The algorithm also makes it possible to analyze a given set of thermograms without modifying the operation parameters. The set of analyzed images, after adding translations and rotations, includes over 4000 thermograms. The algorithm was developed and tested in the Matlab environment.
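The post-detection measurement step reduces to simple statistics over each detected region. A minimal sketch (the 2D-list data layout and the region-mask representation are assumptions for illustration):

```python
def region_stats(thermogram, mask):
    """Mean, minimum and maximum temperature over a detected region.
    `thermogram` is a 2D list of temperatures; `mask` is a same-shaped
    2D list of booleans marking the region (e.g. forehead or eye socket)."""
    values = [thermogram[y][x]
              for y in range(len(mask))
              for x in range(len(mask[0])) if mask[y][x]]
    if not values:
        return None
    return {"mean": sum(values) / len(values),
            "min": min(values),
            "max": max(values)}
```

Running this over the six detected masks (CL, CP, OL, OP, NL, NP) yields the per-region temperature profile used for the preliminary headache diagnosis.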
In this article, specific biomedical requirements for the driver's psychophysical condition, in terms of an early-warning system against excessive fatigue, are presented. In the first part of the paper selected biomedical parameters of the driver are described; next, video image analysis algorithms in the context of eye movements are presented. In the last part a hardware solution for non-invasive on-line measurements is proposed and the results obtained in practice are briefly characterized.