Search results
Searched in keywords: face detection
Results found: 8
EN
Counting and detecting occluded faces in a crowd is a challenging task in computer vision. In this paper, we propose a new approach to face detection-based crowd estimation under significant occlusion and head-pose variations. Most state-of-the-art face detectors cannot detect heavily occluded faces; to address this problem, an improved approach to training various detectors is described. To evaluate our solution fairly, we trained and tested the model on our own substantially occluded dataset, which contains images with up to 90 degrees of out-of-plane rotation and faces at 25%, 50%, and 75% occlusion levels. The proposed model was trained on 48,000 images obtained from this dataset, which consists of 19 crowd scenes. For evaluation, we used 109 images with face counts ranging from 21 to 905 and an average of 145 individuals per image. The challenges inherent in detecting faces in crowded scenes cannot be addressed with a single face detection method, so a robust method for counting visible faces in a crowd is proposed that combines traditional machine learning and convolutional neural network algorithms. Using a network based on the VGGNet architecture, the proposed algorithm outperforms various state-of-the-art algorithms in detecting faces 'in the wild'. In addition, the performance of the proposed approach is evaluated on publicly available datasets containing in-plane and out-of-plane rotations as well as various lighting changes, where it achieved similar or higher accuracy than the compared methods.
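As a rough illustration of the counting task described in this abstract, the sketch below counts detected faces in a single image. It uses OpenCV's Haar cascade as a simple stand-in for the paper's combined classical/CNN detectors; the cascade, parameters, and file name are assumptions, not the authors' pipeline.

```python
# Minimal face-counting sketch: OpenCV's Haar cascade stands in for the
# paper's combined classical/CNN detector pipeline (the VGGNet-based
# detector is not reproduced here). The image path is a placeholder.
import cv2

def count_faces(image_path: str) -> int:
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # detectMultiScale returns one bounding box per detected (visible) face
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces)

if __name__ == "__main__":
    print(count_faces("crowd_scene.jpg"))  # hypothetical input image
```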
2
EN
Military service is undoubtedly among the most profound forms of service to the nation. Through military service, young people can develop discipline, but nobody should be forced to serve, especially young children. A real-time Child Troopers detection surveillance system, based on Convolutional Neural Networks (CNNs), is built to counter this abuse. The method focuses on automatic face, age, and weapon detection. The proposed detection and identification system consists of several processing stages: first, a pre-trained deep learning model based on the SSD-ResNet network performs face detection; then, age estimation is carried out with the VGG-Face model; finally, weapon detection is performed with a pre-trained MobileNetV2-SSD model. The results of these stages are combined to find children under 18 years of age carrying guns in the images. These models were selected for their fast and accurate inference, which allows them to be integrated into a single network for detecting and identifying children with weapons in images. Experimental results on global datasets of various face and weapon images showed that this method improves detection accuracy.
PL
Through military service, young people can develop discipline, but nobody should be forced to serve, especially young children. A surveillance system detecting Child Troopers in real time, based on Convolutional Neural Networks (CNN), is proposed. The method focuses on automatic face, age, and weapon detection. The proposed detection and identification system consists of several processing stages: starting with a pre-trained deep learning model based on the SSD-ResNet network to perform face detection. Then, age estimation is carried out with the VGG-Face model, and finally weapon detection is based on a pre-trained MobileNetV2-SSD model. The results of these steps are combined to find children under 18 years of age carrying weapons in the images. These models were chosen for their fast and accurate inference, allowing them to be integrated into a network for detecting and identifying children with weapons in images. Experimental results on global datasets of various face and weapon images showed that this method increases detection accuracy.
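A structural sketch of the three-stage pipeline described above follows. The detector and estimator callables are placeholders standing in for the pre-trained SSD-ResNet, VGG-Face, and MobileNetV2-SSD models mentioned in the abstract; only the way the stages are combined is illustrated.

```python
# Structural sketch of the three-stage pipeline (face detection, age
# estimation, weapon detection). The callables passed in are placeholders,
# not real library APIs; they stand in for the pre-trained models named
# in the abstract.
from dataclasses import dataclass
from typing import Callable, List, Tuple

Box = Tuple[int, int, int, int]  # x, y, width, height

@dataclass
class Finding:
    face_box: Box
    estimated_age: float

def find_armed_minors(
    image,
    detect_faces: Callable[[object], List[Box]],    # stands in for the SSD-ResNet face detector
    estimate_age: Callable[[object, Box], float],   # stands in for the VGG-Face age model
    detect_weapons: Callable[[object], List[Box]],  # stands in for the MobileNetV2-SSD weapon detector
    age_threshold: float = 18.0,
) -> List[Finding]:
    """Combine the three stages: flag faces estimated under 18 when a weapon
    is also detected in the same image."""
    weapons = detect_weapons(image)
    if not weapons:
        return []
    findings = []
    for box in detect_faces(image):
        age = estimate_age(image, box)
        if age < age_threshold:
            findings.append(Finding(face_box=box, estimated_age=age))
    return findings
```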
3
Wizerunek twarzy w identyfikacji i weryfikacji tożsamości (Face image in identity identification and verification)
PL
Natural identifiers are the oldest and, at the same time, the most dynamically developing means of verifying human identity. This development concerns in particular advanced biometric techniques with elements of artificial intelligence. The article presents, against the background of other means of human identification, the basic principles of identity verification based on the face image.
EN
Natural means of identification are the oldest and most dynamically developed ways of verifying human identity. Their development concerns in particular advanced biometric techniques with elements of artificial intelligence. This article presents - against the background of other means of human identification - the basic principles used to verify identity based on the image of the face.
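As a minimal sketch of the verification principle mentioned in this entry (comparing a probe face against an enrolled template), the snippet below compares face embeddings by cosine distance; the embedding model, distance measure, and threshold are illustrative assumptions and are not taken from the article.

```python
# Sketch of face-image identity verification by template comparison.
# The embeddings are assumed to come from any face-embedding model
# (not specified by the article); the threshold is a hypothetical value.
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe_embedding: np.ndarray,
           enrolled_template: np.ndarray,
           threshold: float = 0.4) -> bool:
    """Accept the identity claim when probe and template are close enough."""
    return cosine_distance(probe_embedding, enrolled_template) < threshold
```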
4
REGA: Real-Time Emotion, Gender, Age Detection Using CNN - A Review
EN
In this paper we describe a methodology and an algorithm to estimate, in real time, the age, gender, and emotion of a person by analyzing face images from a webcam. We discuss a CNN-based architecture for designing such a real-time model. Emotion, gender, and age detection from webcam face images plays an important role in many applications, such as forensics, security control, data analysis, video observation, and human-computer interaction. We also review methods and techniques such as PCA, LBP, SVM, Viola-Jones, and HOG, which are used directly or indirectly to recognize human emotion, gender, and age under various conditions.
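The sketch below illustrates the kind of real-time capture loop this review discusses: Viola-Jones detection (an OpenCV Haar cascade) localises the face in each webcam frame, and a placeholder classify function stands in for a CNN returning emotion, gender, and age. The window name, parameters, and classifier interface are assumptions.

```python
# Real-time capture-loop sketch: Haar-cascade (Viola-Jones) face detection
# per webcam frame, with a placeholder `classify` callable standing in for
# an emotion/gender/age CNN. Press 'q' to quit.
import cv2

def run_webcam_loop(classify):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    capture = cv2.VideoCapture(0)
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
            emotion, gender, age = classify(frame[y:y + h, x:x + w])
            label = f"{emotion}/{gender}/{age}"
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(frame, label, (x, y - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
        cv2.imshow("REGA sketch", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    capture.release()
    cv2.destroyAllWindows()
```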
5
Silhouette Identification for Apparelled Bodies
EN
This paper presents an approach to identifying apparel silhouettes. A feature region of the human face was first proposed for conducting face detection in fashion pictures with the AdaBoost method, and the head was then located from its positional relation to the facial feature region. A linear relationship between the ratio of the body height to the head length (RBH) and the length of the lower body was ensured by restricting the RBH to a specific range. Under this condition, the apparelled body was divided into several parts, and the boundary of the apparel on the lower body was determined considering the influence of the hemline. Based on the widths of the body parts and of the apparel on the lower body, shape factors were established to express the extent to which the apparel silhouette approaches a certain shape. A computer program was developed for implementation and demonstrated high accuracy in silhouette identification of an apparelled body.
PL
An approach to identifying the silhouette of a clothed model is presented. A set of features characterising the human face was developed so that the face could be extracted from photographs of models; the AdaBoost method was used. This made it possible to locate the head in relation to other elements of the outfit. A linear relationship was identified between the heights of the whole silhouette, its lower part, and the head. On this basis, the clothed silhouette was divided into a number of parts and the boundary of the clothing on the lower body was determined, taking into account the influence of the garment hemline. Based on the widths of the individual body parts and of the clothing on the lower body, shape factors were established so that the silhouette could be assigned to an appropriate type. A computer program was developed that enables high accuracy in identifying the silhouette of a clothed model.
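The following sketch illustrates only the ratio-based idea from this abstract: the head length anchors the body proportions, and width measurements feed simple shape factors. The thresholds, the RBH range, and the silhouette labels are hypothetical and are not the paper's actual values.

```python
# Illustrative shape-factor sketch. The RBH range, the flare thresholds and
# the silhouette labels below are hypothetical; the paper's actual shape
# factors are not reproduced.
def classify_silhouette(head_length: float,
                        body_height: float,
                        shoulder_width: float,
                        hem_width: float) -> str:
    rbh = body_height / head_length   # ratio of body height to head length
    if not 6.0 <= rbh <= 9.0:         # hypothetical plausibility check on proportions
        return "unreliable proportions"
    flare = hem_width / shoulder_width  # simple width-based shape factor
    if flare > 1.2:
        return "A-line"
    if flare < 0.8:
        return "V-shape"
    return "H-line"
```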
EN
The paper presents an algorithm enabling fully automatic detection of characteristic areas on thermograms containing patients' faces in frontal projection. A way of resolving problems occurring in the segmentation of face images, such as changes of position, orientation, and scale, is proposed. In addition, attempts were made to eliminate the effect of the background and of disturbances caused by the haircut and the hairline. The algorithm may be used to detect selected points and areas of a face, as a preliminary component of face recognition, as an extension of optical analysis methods, or in the quantitative analysis of faces on thermograms.
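As a rough illustration of the segmentation idea in this abstract, the sketch below thresholds a grayscale thermogram (the face is warmer than the background) and keeps the largest warm region as a coarse face mask. The paper's handling of scale, orientation, and hairline effects is not reproduced.

```python
# Coarse thermogram segmentation sketch: Otsu thresholding of an 8-bit
# grayscale thermal image plus largest-contour selection yields a rough
# face mask. This shows only the general idea, not the paper's algorithm.
import cv2
import numpy as np

def rough_face_mask(thermogram_gray: np.ndarray) -> np.ndarray:
    _, binary = cv2.threshold(
        thermogram_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU
    )
    contours, _ = cv2.findContours(
        binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
    )
    mask = np.zeros_like(thermogram_gray)
    if contours:
        face = max(contours, key=cv2.contourArea)  # assume the face is the largest warm blob
        cv2.drawContours(mask, [face], -1, 255, thickness=cv2.FILLED)
    return mask
```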
EN
This paper presents the possibilities of applying Support Vector Machines (SVM) in the process of automatic human face recognition. It describes how existing face recognition methods can be improved with the SVM. Moreover, a new approach to multi-method fusion utilising the SVM is proposed. The usefulness of all the methods described in the paper for improving face recognition effectiveness with the SVM is confirmed by the experimental results.
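The sketch below shows score-level fusion with an SVM in the spirit of the multi-method fusion described here: each base face recognition method contributes a matching score and the SVM combines them into a final accept/reject decision. The feature layout and the toy data are illustrative, not the paper's setup.

```python
# Score-level fusion sketch: an SVM learns to combine matching scores from
# several face recognition methods. The scores and labels below are toy
# values for illustration only.
import numpy as np
from sklearn.svm import SVC

# Each row: [score_method_1, score_method_2, score_method_3]; label 1 = same person.
scores = np.array([[0.9, 0.8, 0.7],
                   [0.2, 0.3, 0.1],
                   [0.7, 0.9, 0.8],
                   [0.1, 0.2, 0.3]])
labels = np.array([1, 0, 1, 0])

fusion = SVC(kernel="rbf").fit(scores, labels)
print(fusion.predict([[0.85, 0.75, 0.8]]))  # fused accept/reject decision
```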
EN
Mainstream automatic speech recognition has focused almost exclusively on the acoustic signal. The performance of these systems degrades considerably in the real world in the presence of noise. Novel approaches are needed that use sources of information orthogonal to the acoustic input, which not only considerably improve performance in severely degraded conditions but are also independent of the type of noise and reverberation. Visual speech is one such source, unperturbed by the acoustic environment and noise. In this paper, we present our own approach to lip-tracking for an audio-visual speech recognition system. We describe video analysis of visual speech for extracting visual features from a talking person in colour video sequences. We developed a method for automatic detection of the face, eyes, lip region, lip corners, and lip contour. Finally, the paper shows lip-tracking results depending on various factors (lighting, beard).
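A rough sketch of the first stages described in this abstract follows: detect the face, take the lower part of the face box as a mouth region of interest, and emphasise lip pixels by their redness in the colour image. The cascade, the ROI proportion, and the redness measure are assumptions; lip-corner and contour tracking are not reproduced.

```python
# Rough lip-region sketch: Haar-cascade face detection, lower third of the
# face box as the mouth ROI, and a simple redness measure to highlight lip
# pixels. Only the early stages of such a system are illustrated.
import cv2
import numpy as np

def rough_lip_region(frame_bgr: np.ndarray):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])     # largest detected face
    mouth_roi = frame_bgr[y + 2 * h // 3:y + h, x:x + w]    # lower third of the face box
    b, g, r = cv2.split(mouth_roi.astype(np.float32))
    redness = r - 0.5 * (g + b)                             # lips are redder than surrounding skin
    redness = np.clip(redness, 0, 255).astype(np.uint8)
    _, lip_mask = cv2.threshold(
        redness, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU
    )
    return mouth_roi, lip_mask
```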