Results found: 92

Search results
Searched for:
in keywords: face recognition
EN
We propose a model equipped with computer vision and machine learning that secures ATMs against fraudulent activity, using a Haar cascade (HRC) for face detection and a Local Binary Pattern Histogram (LBPH) classifier for face recognition. The system combines features such as the PIN and face recognition to identify and authenticate the user against a trained dataset; if the user turns out to be unauthorized, it triggers a real-time alert e-mail and refuses the login to the machine, which addresses the ATM security issue. The system was evaluated on a dataset of real-world ATM camera feeds and achieved an accuracy of 90%. It can effectively detect many kinds of fraud, including identity theft and unauthorized access, which makes it all the more reliable.
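The LBPH recognition step named above rests on the local binary pattern descriptor. The abstract publishes no code, so the following is a minimal NumPy sketch of the basic 3×3 variant; the function names are illustrative:

```python
import numpy as np

def lbp_image(gray):
    """Compute the basic 3x3 LBP code for each interior pixel.

    Each of the 8 neighbors that is >= the center pixel contributes one
    bit, so every pixel maps to a code in 0..255."""
    g = np.asarray(gray, dtype=np.int32)
    center = g[1:-1, 1:-1]
    # 8 neighbors in clockwise order, one bit each
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = g[1 + dy: g.shape[0] - 1 + dy, 1 + dx: g.shape[1] - 1 + dx]
        codes |= ((neigh >= center).astype(np.int32) << bit)
    return codes

def lbp_histogram(gray, bins=256):
    """Normalized histogram of LBP codes -- the face descriptor."""
    codes = lbp_image(gray)
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / max(hist.sum(), 1)
```

A full LBPH recognizer tiles the face into regions, concatenates the regional histograms, and matches the result against enrolled histograms.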
2. Smart meeting attendance checking based on a multi-biometric recognition system
EN
Multimodal biometrics can address some of the restrictions of unimodal biometrics by combining multiple biometric cues for the same person in the decision-making process. In this regard, advances in deep learning have been employed in multimodal biometric systems, and deep learning techniques for object detection, such as face recognition and voice identification, have become more popular. Attendance checking plays a very important role in meeting management, and manual methods such as calling names or sign-in sheets are time-consuming. Face recognition and voice identification can be applied to attendance checks using deep learning techniques. This paper presents an automatic multimodal biometric attendance checking system using convolutional neural networks (CNNs). The system trains the CNN on a known dataset of the meeting participants. A computer with a high-quality webcam is used during the attendance check: the system detects the attendee's face and voice and compares them with the known dataset; on a match, the attendee's name is recorded in an Excel file. The final result is an Excel file with the names of all attendees. The results show that the proposed CNN architectures attained high accuracy. Furthermore, this result could be beneficial for student attendance records, particularly in surveillance and person-identification systems.
EN
Recognizing faces under various lighting conditions is a challenging problem in artificial intelligence and its applications. In this paper we describe a new face recognition algorithm that is invariant to illumination. We first convert the images to the logarithm domain and then apply the dual-tree complex wavelet transform (DTCWT), which yields images approximately invariant to changes in illumination. We classify the images with the collaborative representation-based classifier (CRC). We also perform the following sub-band transformations: (i) we set the approximation sub-band to zero if the noise standard deviation is greater than 5; (ii) we then threshold the two highest-frequency wavelet sub-bands using bivariate wavelet shrinkage; (iii) otherwise, we set these two highest-frequency wavelet sub-bands to zero. On the resulting images we perform the inverse DTCWT, which yields illumination-invariant face images. The proposed method is strongly robust to Gaussian white noise. Experimental results show that our algorithm outperforms several existing methods on the Extended Yale Face Database B and the CMU-PIE face database.
EN
This work covers the implementation of mobile applications with face recognition services in the cloud and the ways such solutions can be used. Popular cloud platforms that offer a face recognition service are described. The next part of the work presents the application's design and development stages. After implementation, its functionality was tested on various photos. The summary lists the main advantages and disadvantages of the application, as well as conclusions on the topic under consideration.
EN
The paper considers the problem of increasing the generalization ability of classification systems by creating an ensemble of classifiers based on the CNN architecture. Different structures of the ensemble are considered and compared. Deep learning fulfills an important role in the developed system. The numerical descriptors created in the last locally connected convolutional layer of the CNN, flattened into a vector, are subjected to a few different selection mechanisms. Each of them chooses an independent set of features according to the applied assessment technique. Their results are combined with three classifiers: softmax, support vector machine, and random forest. All of them perform the same classification task simultaneously, and their results are integrated into the final verdict of the ensemble. Different arrangements of the ensemble are considered and tested on the recognition of facial images. Two databases are used in the experiments: one composed of 68 classes of greyscale images, the other of 276 classes of color images. The results show a high improvement in class recognition resulting from the application of a properly designed ensemble.
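The integration of the three classifier verdicts can be sketched as a simple majority vote; this is a generic fusion rule, and the tie-break used here is an illustrative choice, not the paper's:

```python
from collections import Counter

def ensemble_verdict(predictions):
    """Combine class labels from several classifiers into one verdict.

    predictions: list of labels, e.g. from softmax, SVM and random forest.
    Majority vote; ties fall back to the first classifier's answer."""
    counts = Counter(predictions)
    label, votes = counts.most_common(1)[0]
    if votes > len(predictions) // 2:
        return label
    return predictions[0]  # tie-break: trust the first classifier
```

Other arrangements considered in such systems weight the votes by each classifier's validation accuracy instead of counting them equally.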
6. Face recognition technology using the fusion of local descriptors
EN
The local phase quantization (LPQ) descriptor, first introduced by Ojansivu and Heikkila (2008), has successfully been applied in face recognition systems. In this paper, we combine the local intensity area descriptor (LIAD), first introduced by Tran (2017), with the LPQ descriptor to develop robust LPQ-based face recognition systems. Face images are first encoded with LIAD as a noise and dimensionality reduction step; the resulting images are then passed through LPQ as a feature extraction step. A nearest-neighbor method with the chi-square measure is used for classification. Two well-known datasets (the ORL Database of Faces and FERET) were used in the experiments. The results confirm that our proposed approach reaches mean recognition accuracies 0.17% to 7.7% better than five conventional descriptors (LBP, LDP, LDN, LTP, and LPQ).
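The classification step above, nearest neighbor under the chi-square measure, can be sketched as follows; histograms are assumed normalized, and the names are illustrative:

```python
import numpy as np

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two normalized histograms."""
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def nearest_neighbor(query, gallery):
    """Return the label of the gallery histogram closest to the query.

    gallery: list of (label, histogram) pairs."""
    return min(gallery, key=lambda item: chi_square(query, item[1]))[0]
```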
EN
This paper proposes a model of a smart door lock using face recognition, running on the Jetson TX2 embedded computer. Face detection is a crucial step in recognizing faces. The paper studies and evaluates two face detection methods: the Histograms of Oriented Gradients (HOG) method, which represents the approach based on hand-crafted facial features, and the Multi-task Cascaded Convolutional Neural Networks (MTCNN) method, which represents the use of deep learning. Both methods are verified on an experimental model built on the Jetson TX2. The face angle parameter is used to rate the detection level and accuracy of each method, and the experimental model also measures each method's face detection speed from the camera. Experimental results show that the average face detection times of the HOG and MTCNN methods are 0.16 s and 0.58 s, respectively. For frontal faces, both methods detect very well, with an accuracy rate of 100%. However, at face angles of 30°, 60°, and 90°, the MTCNN method gives more accurate results, which is consistent with published studies. The smart door lock model uses the MTCNN face detection method combined with the FaceNet algorithm and a dataset of 200 images per face, achieving an accuracy of 99%.
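The HOG method compared above is built from per-cell orientation histograms. A toy single-cell version is sketched below; real HOG tiles the image into cells and block-normalizes, and the 9-bin unsigned-gradient setup is the conventional choice rather than a detail taken from the paper:

```python
import numpy as np

def hog_cell_histogram(cell, n_bins=9):
    """Histogram of oriented gradients for one cell (unsigned, 0-180 deg).

    Each pixel votes its gradient magnitude into the bin of its
    gradient orientation."""
    cell = np.asarray(cell, dtype=float)
    gy, gx = np.gradient(cell)
    magnitude = np.hypot(gx, gy)
    angle = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    bins = np.minimum((angle / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), magnitude.ravel())
    return hist
```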
8. RGB-D face recognition using the LBP-DCT algorithm
EN
Face recognition is an image processing application that recognizes or verifies an individual's identity. 2D images are commonly used to identify faces, but such images are very sensitive to changes in lighting and viewing angle. Images captured by 3D and stereo cameras can also be used for recognition, but they require fairly long processing times. RGB-D images produced by the Kinect are used as a new, alternative approach to 3D images: such cameras cost less and can be used in any situation and environment. This paper evaluates the performance of face recognition algorithms using RGB-D images. These algorithms compute a descriptor over the RGB and depth-map faces based on the local binary pattern. The images are also tested with a fusion of the LBP and DCT methods, which achieves a recognition rate of 97.5% in the experiments.
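One common way to fuse two descriptors such as the LBP- and DCT-based features above is feature-level concatenation after per-descriptor normalization; the abstract does not specify its fusion rule, so this is a generic sketch:

```python
import numpy as np

def fuse_descriptors(d1, d2):
    """Feature-level fusion: L2-normalize each descriptor, then concatenate.

    Normalizing first keeps one descriptor from dominating the other
    when their scales differ."""
    def norm(v):
        v = np.asarray(v, float)
        n = np.linalg.norm(v)
        return v / n if n > 0 else v
    return np.concatenate([norm(d1), norm(d2)])
```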
EN
Empathy is an important social ability in early childhood development. One of the significant characteristics of children with autism spectrum disorder (ASD) is their lack of empathy, which makes it difficult for them to understand others' emotions and to judge others' behavioral intentions, leading to social disorders. This research designed and implemented a facial expression analysis system that obtains and analyzes children's real-time expressions while they view a stimulus, and evaluates the differences in empathy between children with ASD and typically developing children. The results provide new ideas for the evaluation of children with ASD and help in developing empathy intervention plans.
10. CNN application in face recognition
EN
The paper presents an application of the convolutional neural network (CNN) to face recognition. The CNN is regarded nowadays as the most efficient tool in image analysis. The technique was applied to the recognition of two face databases: our own base containing 68 classes of very different variants of face composition (greyscale images), and the MUCT database of 244 classes of color face images represented as RGB images. The paper compares different classifiers applied within the CNN, an autoencoder, and the traditional approach relying on classical feature generation methods combined with a support vector machine classifier. The numerical results of experiments performed on the face image databases are presented and discussed.
EN
The paper presents the results of original research on the application of a neural network using deep learning techniques to the task of identity recognition based on facial images acquired in both the visible and thermal infrared ranges. In the research, a database containing images acquired in varying but controlled conditions was used. On the basis of the obtained results, it can be concluded that both investigated spectral ranges provide distinctive and complementary information about the identity of the examined person.
EN
Biometric databases are important components that help improve the performance of state-of-the-art recognition applications. The availability of more and more challenging data is attracting the attention of researchers, who are systematically developing novel recognition algorithms and increasing the accuracy of identification. Surprisingly, most of the popular face datasets (like LFW or IJBA) are not fully unconstrained. The majority of the available images were not acquired on-the-move, which reduces the amount of blurring caused by motion or incorrect focusing. Therefore, the COMPACT database for studying less-cooperative face recognition is introduced in this paper. The dataset consists of high-resolution images of 108 subjects acquired in a fully automated manner as people go through the recognition gate. This ensures that the collected data contains real-world degradation factors: different distances, expressions, occlusions, pose variations, and motion blur. Additionally, the authors conducted a series of experiments that verified face-recognition performance on the collected data.
EN
Numerous algorithms have struggled to recognize faces invariantly to plastic surgery, owing to the texture variations it introduces in the skin. Since plastic surgery remains a challenging issue in the face recognition domain, the topic deserves to be revisited from both theoretical and experimental perspectives. In this paper, Adaptive Gradient Location and Orientation Histogram (AGLOH)-based feature extraction is proposed to accomplish effective plastic-surgery face recognition. The proposed features are extracted from the granular space of the faces, and variants of the local binary pattern are extracted to accompany the AGLOH features. Subsequently, the feature dimensionality is reduced using principal component analysis (PCA) to train an artificial neural network, and the network is trained using particle swarm optimization rather than traditional learning algorithms. The experiments involved 452 plastic-surgery faces covering blepharoplasty, brow lift, liposhaving, malar augmentation, mentoplasty, otoplasty, rhinoplasty, rhytidectomy, and skin peeling. The results demonstrate the performance advantage of the proposed AGLOH features.
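The PCA dimensionality-reduction step named above is standard; a compact NumPy version via SVD of the centered data is sketched below (this is the generic reduction step only, not the paper's AGLOH extraction, and the names are illustrative):

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project row-vector samples X onto the top principal components.

    Returns the projected data, the data mean, and the components,
    so a sample can be reconstructed as Z @ components + mean."""
    X = np.asarray(X, float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # rows of Vt are the principal directions, ordered by singular value
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T, mean, Vt[:n_components]
```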
EN
Although unimodal biometric recognition (such as face or palmprint) is more convenient, its security is relatively weak, and its accuracy is easily affected by many factors, such as ambient light and recognition distance. To address this issue, we present a weighted multimodal biometric recognition algorithm for face and palmprint based on a histogram of contourlet-oriented gradient (HCOG) feature description. We employ the nonsubsampled contourlet transform (NSCT) to decompose the face and palmprint images, and adopt the HOG method to extract the feature, which is named the HCOG feature. Dimensionality reduction is then applied to the HCOG feature, and a novel weight computation method is proposed to accomplish the multimodal biometric fusion. Extensive experiments show that the proposed weighted fusion achieves excellent recognition accuracy and outmatches unimodal biometric recognition methods.
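At decision level, weighted multimodal fusion reduces to a weighted sum of the per-modality match scores. The sketch below uses a fixed illustrative weight and threshold, whereas the paper derives its weights from the data:

```python
def fuse_and_decide(face_score, palm_score, w_face=0.6, threshold=0.5):
    """Weighted-sum fusion of two modality scores plus accept/reject.

    Scores are assumed normalized to [0, 1]; the weight and threshold
    are illustrative parameters, not values from the paper."""
    fused = w_face * face_score + (1.0 - w_face) * palm_score
    return fused, fused >= threshold
```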
EN
Since plastic surgery should take into account that facial impression always depends on the current facial emotion, we verified how precisely facial images can be classified into sets of defined facial emotions.
EN
The paper presents an application of the convolutional neural network (CNN) to face recognition. The face databases are represented by visible-light and thermal infrared images. The CNN is regarded nowadays as the most efficient tool in image analysis. The technique was applied to the recognition of 50 classes of face images represented in visible and infrared imagery, and the approach is compared to the traditional one relying on classical feature generation methods and a support vector machine classifier. The numerical results of experiments performed on the face image database are presented and discussed.
EN
In the present paper, we apply a probabilistic approach that makes it possible to optimize the face image classification task. The mathematical expectations and variances of the investigated random parameters are used as basic statistics. The proposed method allows us to carry out fast and reliable preliminary classification and to exclude obviously dissimilar face images from further analysis.
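The preliminary screening can be sketched with the two statistics named above, the mean and the variance; the relative-tolerance rule used here is an illustrative assumption, not the paper's criterion:

```python
from statistics import mean, pvariance

def prescreen(query_pixels, candidates, tol=0.5):
    """Keep only candidates whose mean and variance are close to the query's.

    candidates: list of (name, pixel_list) pairs. Obviously dissimilar
    images are excluded before any detailed matching."""
    qm, qv = mean(query_pixels), pvariance(query_pixels)
    kept = []
    for name, pixels in candidates:
        m, v = mean(pixels), pvariance(pixels)
        if (abs(m - qm) <= tol * max(abs(qm), 1)
                and abs(v - qv) <= tol * max(qv, 1)):
            kept.append(name)
    return kept
```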
EN
This study describes the linear discriminant analysis (LDA) algorithm. The most significant aspects of the algorithm are discussed and tested with an application written in C++ specifically for this study. The quantitative study revealed several advantages of the LDA algorithm.
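The core of LDA in the two-class case is the Fisher direction. The study's C++ application is not available, so the NumPy sketch below is illustrative, including the small regularization term:

```python
import numpy as np

def fisher_lda_direction(X0, X1):
    """Two-class Fisher LDA: the direction maximizing class separation.

    Returns unit vector w such that projecting samples onto w best
    separates the two classes (between-class over within-class scatter)."""
    X0, X1 = np.asarray(X0, float), np.asarray(X1, float)
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # within-class scatter, regularized so the solve is always well-posed
    Sw = np.cov(X0.T, bias=True) * len(X0) + np.cov(X1.T, bias=True) * len(X1)
    Sw += 1e-6 * np.eye(Sw.shape[0])
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)
```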
EN
The paper presents an analysis of the applicability of thermal imaging to pedestrian detection and biometric verification based on face images. The infrared image offers a relatively high thermal contrast and therefore allows easier extraction of pedestrians from the background than typical visible-light imaging. The proposed segmentation method uses Otsu's global threshold and a region-growing technique. It achieves high efficiency in extracting regions of interest (up to 98%) with a short computation time (31 ms). Moreover, it generates a relatively small number of samples for the classification step (on average 8.6 samples per image). Additionally, the registered thermal facial images are individual for every human and insensitive to changes in lighting conditions, which allows reliable identification of people even at night. These observations were confirmed in experiments performed with three different identification techniques on two databases of faces registered with a color camera and a thermal camera. The proposed solution can be used in monitoring systems for searching for and recognizing persons, e.g. in terrorist threats.
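Otsu's global threshold, the segmentation step named above, picks the gray level that maximizes the between-class variance of the resulting foreground and background; a NumPy sketch of the standard algorithm (not the paper's implementation):

```python
import numpy as np

def otsu_threshold(gray, levels=256):
    """Otsu's global threshold: maximize between-class variance.

    gray: iterable of integer gray values in [0, levels).
    Returns the threshold level separating background from foreground."""
    hist, _ = np.histogram(np.asarray(gray).ravel(),
                           bins=levels, range=(0, levels))
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * np.arange(levels))   # cumulative mean
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)        # empty classes score zero
    return int(np.argmax(sigma_b))
```

In a thermal frame, pixels above the threshold would then be grouped by the region-growing step into candidate pedestrian regions.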
EN
Over the past few years, a huge increase in the number of computer vision applications can be observed. These are widely used in areas such as video surveillance, medical diagnostics, biometric recognition, and the automotive and military industries. Most of these solutions take advantage of high-resolution cameras in order to obtain high-quality images. Surprisingly, little attention is paid in the literature to the practical implementation of off-the-shelf image acquisition systems. Most of the available solutions are composed of custom-developed electronic devices that use specialized multi-core DSPs and/or FPGA technology. Therefore, a novel realization of a scalable and comprehensive image acquisition system based on synchronized high-resolution Gigabit Ethernet cameras is presented in this paper. The proposed solution allows for the connection of multiple cameras along with any number of external illumination modules. The selected devices can be synchronized with each other in user-defined configurations; hence, the designed solution can be easily integrated in both simple and complex applications. The authors describe the design and implementation of the proposed platform in detail, present and discuss the performance issues that can occur in such systems, and report results that are encouraging and useful for the development of similar solutions.