Results found: 21

Search results
Searched in keywords: linear discriminant analysis
EN
The aim of this paper is to demonstrate the effectiveness of newly developed fault detection methods based on a simple statistical approach combining linear discriminant analysis and signal processing. Fault prediction covers the detection of the operating mode of the medium-voltage network, of leakage (a damaged insulator in the line string), and of a measure of the distance to a ground fault in an unbranched line, in a branched line, and on its branches. The conducted research confirms the high efficiency of fault detection in all the areas considered.
PL
The aim of this work is to demonstrate the effectiveness of newly developed fault detection methods based on a simple statistical approach combining linear discriminant analysis and signal processing. The conducted research confirms the high effectiveness of fault detection in all the areas considered.
EN
Content-based image retrieval (CBIR) retrieves visually similar images from a dataset based on a specified query. A CBIR system measures the similarities between a query and the image contents in a dataset and ranks the dataset images. This work presents a novel framework for retrieving similar images based on color and texture features. We computed color features with an improved color coherence vector (ICCV) and texture features with a gray-level co-occurrence matrix (GLCM) along with DWT-MSLBP (derived by applying a modified multi-scale local binary pattern [MS-LBP] over a discrete wavelet transform [DWT], yielding powerful textural features). The optimal features are computed with the help of principal component analysis (PCA) and linear discriminant analysis (LDA). The proposed work uses a variance-based approach for choosing the number of principal components/eigenvectors in PCA. PCA with a 99.99% variance threshold preserves informative features, and LDA selects robust ones from this set. The proposed method was tested on four benchmark datasets with Euclidean and city-block distances, and it outperforms all of the identified state-of-the-art methods from the literature.
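The variance-based choice of the number of principal components mentioned above can be sketched as follows. This is a minimal NumPy illustration under assumed toy data, not the authors' code; the function name is hypothetical:

```python
import numpy as np

def n_components_for_variance(X, threshold=0.9999):
    """Number of principal components needed to preserve the given
    fraction of total variance (e.g. 0.9999 for a 99.99% threshold)."""
    Xc = X - X.mean(axis=0)                    # center the data
    s = np.linalg.svd(Xc, compute_uv=False)    # singular values, descending
    var_ratio = (s ** 2) / np.sum(s ** 2)      # explained-variance ratios
    cum = np.cumsum(var_ratio)
    return int(np.searchsorted(cum, threshold) + 1)

# Toy data: 3 informative directions in 10-D plus tiny noise
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 10))
X += 1e-6 * rng.normal(size=X.shape)

k = n_components_for_variance(X, 0.9999)
print(k)   # a handful of components capture almost all variance
```

With a 99.99% threshold, near-duplicate directions are kept while numerically negligible ones are dropped, which matches the intent described in the abstract.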
EN
This research discusses a facial biometrics system in which different classifiers were used within a face recognition framework. Different similarity measures exist for evaluating the performance of facial recognition methods. Here, four machine learning approaches were considered: K-nearest neighbor (KNN), Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), and Principal Component Analysis (PCA). The usefulness of multiple classification systems was also evaluated in terms of their ability to correctly classify a face. Combinations of algorithms such as PCA+1NN, LDA+1NN, PCA+LDA+1NN, SVM, and SVM+PCA were used. All of them achieved accuracies above 90%, but PCA+LDA+1NN scored the highest average accuracy, i.e. 98%.
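The LDA+1NN combination above can be illustrated with a minimal two-class sketch: a simplified Fisher discriminant projection followed by 1-nearest-neighbour. This is a NumPy toy on hypothetical Gaussian data, not the authors' face-recognition pipeline:

```python
import numpy as np

def fisher_direction(X, y):
    """Two-class Fisher discriminant direction w ~ Sw^-1 (m1 - m0)."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)   # within-class scatter
    return np.linalg.solve(Sw, m1 - m0)

def predict_1nn(ztr, ytr, zte):
    """1-nearest neighbour on 1-D projected features."""
    d = np.abs(zte[:, None] - ztr[None, :])          # pairwise |z_i - z_j|
    return ytr[np.argmin(d, axis=1)]

# Hypothetical toy data: two well-separated Gaussian classes in 4-D
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])
y = np.repeat([0, 1], 50)

z = X @ fisher_direction(X, y)                       # project to 1-D
tr = np.r_[0:40, 50:90]                              # train indices
te = np.r_[40:50, 90:100]                            # held-out indices
acc = (predict_1nn(z[tr], y[tr], z[te]) == y[te]).mean()
print(acc)
```

In a real face pipeline a PCA step would precede LDA (as in the PCA+LDA+1NN variant) to make the within-class scatter well conditioned.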
EN
Fifty-four domestically produced cannabis samples obtained from different US states were quantitatively assayed by GC–FID to detect 22 active components: 15 terpenoids and 7 cannabinoids. The profiles of the selected compounds were used as inputs for grouping samples by their geographical origins and for building a geographical prediction model using Linear Discriminant Analysis. The proposed sample extraction and chromatographic separation were satisfactory for selecting 22 active ingredients over a wide analytical range between 5.0 and 1,000 µg/mL. Analysis of GC profiles by Principal Component Analysis retained three significant variables for grouping (Δ9-THC, CBN, and CBC), and modest discrimination of samples based on geographical origin was observed. PCA was able to separate many samples from Oregon and Vermont, while a mixed classification was observed for the remaining samples. By using LDA as a supervised classification method, excellent separation of cannabis samples was attained, enabling the classification of new samples not included in the model. Using two principal components and LDA with GC–FID profiles correctly predicted the geographical origin of 100% of Washington samples, 86% of both Oregon and Vermont samples, and 71% of Ohio samples.
5
Content available remote Desert seismic random noise reduction based on LDA effective signal detection
EN
At present, seismic exploration of mineral resources such as unknown oil and natural gas fields has become both a focus and a challenge. The Tarim Oilfield, located in the desert area of northwest China, involves many uncertainties due to its complicated geological structure and resource burial conditions. The collected seismic records carry various kinds of noise, especially random noise with complex features: non-stationary, non-Gaussian, nonlinear, and low-frequency. The seismic events are contaminated by this random noise, and the effective signal of a desert seismic record lies in the same frequency band as the noise. These conditions make denoising with conventional methods very difficult. In this paper, a noise reduction framework based on linear discriminant analysis for effective signal detection in desert seismic records is proposed to solve this problem. First, the method exploits the difference between the effective signals and the noise in a low-dimensional space: the seismic data are divided into an effective-signal cluster and a noise cluster. Then, the effective signal is extracted to localize the seismic events. Finally, a conventional filter is applied to obtain better denoising results. The framework is applied to synthetic and real desert seismic records. The experimental results show that the denoising capability after detecting effective signals is clearly better than that of conventional denoising methods: the accuracy of effective-signal detection is higher, and the continuity of seismic events is better preserved.
EN
This study describes the linear discriminant analysis (LDA) algorithm. The most significant aspects of the algorithm are discussed and tested with an application written in C++ specifically for this study. The quantitative study revealed several advantages of the LDA algorithm.
PL
The article presents a concept for increasing the security level of a facility through the use of biometric identification. The developed method is an original implementation of linear discriminant analysis (LDA) for identity verification based on facial images. The experiments on the method presented in the paper were carried out under laboratory conditions close to real-world conditions.
EN
This paper presents a method for increasing the security level of a technical object through biometric identification. The developed method is an original implementation of Linear Discriminant Analysis (LDA) for identity verification based on facial images. The method presented in the article was tested under laboratory conditions similar to real-world ones.
8
Content available Uogólniony liniowy klasyfikator Fishera
PL
Classifiers for normally distributed patterns have been discussed many times in the literature. In general, when two classes are far apart, they can be separated with a single hyperplane. This article considers difficult cases in which the distributions overlap significantly. To reduce the classification error in such cases, it is better to use two planes rather than one to separate the classes. First, an algorithm is described that investigates and determines the number of intersections of two univariate Gaussian functions for various cases. This algorithm is then incorporated into the learning and classification algorithm for a two-class task, and subsequently generalized to multi-class tasks. Experiments on the plane for difficult tasks with L = 2, 3, 4 classes showed that the proposed algorithm produced better results than the classical algorithm with a single separating plane.
EN
Bayesian classifiers for normally distributed patterns have often been discussed in the literature. In general, when two classes are considerably far apart, they can be separated with a single plane. In this paper we examine some difficult cases, i.e. when the distributions significantly overlap. In such cases, to minimize the classification error, it is better to use two planes instead of one to separate the classes. The paper first describes an algorithm used to investigate and determine the number of intersections of two Gaussian functions for different cases. This algorithm is then incorporated into the learning and classification algorithm for a two-class task, and subsequently generalized to multi-class tasks. Experiments carried out on the plane for difficult tasks, with the number of classes L = 2, 3, 4, show that the proposed algorithm produces better results than the conventional algorithm with one separating plane.
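The first step above, counting the intersections of two univariate Gaussian densities, reduces to solving a quadratic equation obtained by equating the two log-densities. A minimal sketch of that idea (not the paper's exact algorithm):

```python
import numpy as np

def gaussian_intersections(m1, s1, m2, s2):
    """Points where two univariate Gaussian densities N(m1,s1), N(m2,s2)
    are equal. Equating log-densities gives a quadratic in x, so there
    are 0, 1, or 2 intersections."""
    a = 1.0 / s2**2 - 1.0 / s1**2
    b = 2.0 * (m1 / s1**2 - m2 / s2**2)
    c = m2**2 / s2**2 - m1**2 / s1**2 + 2.0 * np.log(s2 / s1)
    if abs(a) < 1e-12:                     # equal variances: at most one crossing
        return [] if abs(b) < 1e-12 else [-c / b]
    disc = b**2 - 4.0 * a * c
    if disc < 0:
        return []
    r = np.sqrt(disc)
    return sorted([(-b - r) / (2.0 * a), (-b + r) / (2.0 * a)])

# Equal variances: the single crossing is the midpoint of the means
print(gaussian_intersections(0.0, 1.0, 2.0, 1.0))   # [1.0]
```

With unequal variances the wider Gaussian crosses the narrower one twice, which is exactly the situation where two separating planes beat one.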
9
Content available remote Ocena parametrów analizy akustycznej w detekcji patologii mowy
PL
Diagnosing the condition of the vocal tract requires creating a vector composed of various acoustic parameters, which can aid rapid and automatic detection of voice pathologies. This article presents a feature vector composed of 31 parameters. The speech parameters were extracted in the time, frequency, and cepstral domains. The selection of the parameters essential for assessing voice pathology was confirmed using principal component analysis, kernel principal component analysis (kernel PCA), and linear discriminant analysis (LDA).
EN
The diagnosis of the current state of the vocal tract requires the creation of a feature vector consisting of various acoustic parameters, which can help in rapid and automatic detection of voice pathologies. In this work, a vector consisting of 31 parameters was constructed. Speech parameters were extracted in the time, frequency, and cepstral domains. The parameters essential for assessing voice pathology were selected and analysed using principal component analysis, kernel principal component analysis, and linear discriminant analysis.
PL
The main aim of this article is to compare the feature-classification effectiveness of two classification algorithms used in brain-computer interfaces: SVM (Support Vector Machine) and LDA (Linear Discriminant Analysis). The article presents an interface in which the user is shown two stimuli flashing at different frequencies (10 and 15 Hz), and the brain's electrical response is then measured with EEG electrodes. In such interfaces the signal is usually collected in the occipital region (over the visual cortex); in the presented solution it is measured from the frontal region. Statistical machine learning algorithms were applied in signal processing and analysis: the Fast Fourier Transform for feature extraction, Welch's t-test for feature selection, and SVM and LDA for feature classification. Based on the classifier's response it is possible, for example, to steer a mobile robot or to switch lighting on and off.
EN
The main aim of this article is to compare the classification effectiveness of two classifiers used in brain-computer interfaces: SVM (Support Vector Machine) and LDA (Linear Discriminant Analysis). The article presents an interface in which the subject is shown two stimuli flashing at different frequencies (10 and 15 Hz), and the electrical response of the brain is then measured using EEG electrodes. In such interfaces, the signal is typically collected in the occipital area (over the visual cortex); in the presented solution, the signal is measured from the prefrontal cortex. Statistical machine learning algorithms were used for signal processing and analysis: the Fast Fourier Transform for feature extraction, Welch's t-test for feature selection, and SVM and LDA for feature classification. Based on the responses obtained from the classifier, it is possible to control the direction of a mobile robot's movement or to turn lights on and off.
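The feature extraction and selection chain described above (FFT band power per stimulus frequency, then Welch's t statistic to rank features) can be sketched on simulated epochs. All data here are synthetic and the function names are hypothetical; this is not the authors' code:

```python
import numpy as np

def band_power(sig, fs, f0, width=0.5):
    """Spectral power near frequency f0, from the FFT magnitude spectrum."""
    spec = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(len(sig), 1.0 / fs)
    return spec[(freqs >= f0 - width) & (freqs <= f0 + width)].sum()

def welch_t(a, b):
    """Welch's t statistic for two samples with unequal variances."""
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

fs = 256
t = np.arange(fs) / fs                     # 1-second epochs at 256 Hz
rng = np.random.default_rng(2)
# Simulated epochs: class A attends the 10 Hz stimulus, class B the 15 Hz one
A = np.array([np.sin(2*np.pi*10*t) + rng.normal(0, 1, fs) for _ in range(20)])
B = np.array([np.sin(2*np.pi*15*t) + rng.normal(0, 1, fs) for _ in range(20)])

f10_A = np.array([band_power(e, fs, 10) for e in A])
f10_B = np.array([band_power(e, fs, 10) for e in B])
tval = welch_t(f10_A, f10_B)
print(tval)   # large |t|: 10 Hz band power separates the two classes
```

Features with large |t| would then be fed to the SVM or LDA classifier.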
EN
The Linear Discriminant Analysis (LDA) technique is an important and well-developed area of classification, and to date many linear (and also nonlinear) discrimination methods have been put forward. A complication in applying LDA to real data occurs when the number of features exceeds that of observations. In this case, the covariance estimates do not have full rank, and thus cannot be inverted. There are a number of ways to deal with this problem. In this paper, we propose improving LDA in this area, and we present a new approach which uses a generalization of the Moore–Penrose pseudoinverse to remove this weakness. Our new approach, in addition to managing the problem of inverting the covariance matrix, significantly improves the quality of classification, also on data sets where we can invert the covariance matrix. Experimental results on various data sets demonstrate that our improvements to LDA are efficient and our approach outperforms LDA.
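The singular-covariance situation described above can be illustrated with the standard Moore–Penrose pseudoinverse (the paper proposes a generalization of it; this NumPy sketch on hypothetical data shows only the basic idea):

```python
import numpy as np

def lda_pinv_direction(X, y):
    """Fisher direction computed with the Moore-Penrose pseudoinverse,
    so it stays defined when features outnumber samples and the
    within-class scatter Sw is singular."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    X0c, X1c = X[y == 0] - m0, X[y == 1] - m1
    Sw = X0c.T @ X0c + X1c.T @ X1c     # singular: rank < number of features
    return np.linalg.pinv(Sw) @ (m1 - m0)

# 20 samples, 50 features: Sw (50x50) cannot be inverted, but pinv works
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (10, 50)), rng.normal(2, 1, (10, 50))])
y = np.repeat([0, 1], 10)

z = X @ lda_pinv_direction(X, y)       # 1-D projection
acc = ((z > z.mean()).astype(int) == y).mean()
print(acc)   # the projection separates the classes despite singular Sw
```

Calling `np.linalg.inv(Sw)` here would fail (or be numerically meaningless), which is exactly the weakness the pseudoinverse approach removes.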
EN
Traffic sign recognition based on digital image analysis is becoming more and more popular. The main difficulty in visual recognition of traffic signs lies in the challenging conditions of image acquisition. In this paper we present a solution to the problem of sign occlusion. The presented method belongs to the group of appearance-based approaches, employing template matching in the reduced feature space obtained by Linear Discriminant Analysis. The method handles all types of signs, regarding both shape and color, in contrast to commercial systems installed in higher-class cars, which only detect round speed limit signs and overtaking restrictions. Finally, we present experiments performed on benchmark databases with different kinds of occlusion.
EN
BCI systems analyze the EEG signal and translate patient intentions into simple commands. Signal processing methods are very important in such systems; signal processing covers preprocessing, feature extraction, feature selection, and classification. In the article the authors present the results of implementing linear discriminant analysis as a feature reduction technique for BCI systems.
PL
BCI systems analyze the EEG signal and translate the user's intentions into simple commands. Signal processing is an important element of BCI systems; it covers preprocessing, feature extraction, feature selection, and classification. In the article the authors present the results of research using linear discriminant analysis as a feature-reduction tool.
14
Content available remote Analysis of correlation based dimension reduction methods
EN
Dimension reduction is an important topic in data mining and machine learning. In particular, dimension reduction combined with feature fusion is an effective preprocessing step when the data are described by multiple feature sets. Canonical Correlation Analysis (CCA) and Discriminative Canonical Correlation Analysis (DCCA) are feature fusion methods based on correlation; they differ in that DCCA is a supervised method utilizing class label information, while CCA is unsupervised. It has been shown that the classification performance of DCCA is superior to that of CCA due to the discriminative power gained from class label information. On the other hand, Linear Discriminant Analysis (LDA) is a supervised dimension reduction method known to be a special case of CCA. In this paper, we analyze the relationship between DCCA and LDA, showing that the projective directions obtained by DCCA are equal to those obtained from LDA up to an orthogonal transformation. Using this relation with LDA, we propose a new method that can enhance the performance of DCCA. The experimental results show that the proposed method exhibits better classification performance than the original DCCA.
15
Content available remote Kernel Based Subspace Methods : Infrared vs Visible Face Recognition
EN
This paper investigates the use of kernel theory in two well-known linear subspace representations: Principal Component Analysis (PCA) and Fisher's Linear Discriminant Analysis (FLD). Kernel-based methods provide subspaces of high-dimensional feature spaces induced by nonlinear mappings. The focus of this work is to evaluate the performance of Kernel Principal Component Analysis (KPCA) and Kernel Fisher's Linear Discriminant Analysis (KFLD) for infrared (IR) and visible face recognition. The performance of the kernel-based subspace methods is compared with that of the conventional linear algorithms, PCA and FLD. The main contribution of this paper is the evaluation of the sensitivity of both IR and visible face images to illumination conditions, facial expressions, and facial occlusions caused by eyeglasses using the kernel-based subspace methods.
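The kernel idea behind KPCA can be sketched in a few lines: instead of eigendecomposing a covariance matrix, one eigendecomposes the doubly centered kernel matrix. A minimal NumPy illustration with an RBF kernel and assumed parameters, not the paper's implementation:

```python
import numpy as np

def kernel_pca(X, k, gamma=1.0):
    """Project training points onto the top-k kernel principal components
    of an RBF kernel (minimal sketch, training set only)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                       # RBF kernel matrix
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                                # double centering
    vals, vecs = np.linalg.eigh(Kc)               # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:k]              # pick the top-k pairs
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                            # projected training points

rng = np.random.default_rng(5)
X = rng.normal(size=(30, 4))
Z = kernel_pca(X, 2)
print(Z.shape)   # (30, 2)
```

The same substitution of a kernel matrix for a covariance or scatter matrix underlies KFLD.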
EN
"The curse of dimensionality" is pertinent to many learning algorithms, and it denotes the drastic increase of computational complexity and classification error in high dimensions. In this paper, principal component analysis (PCA), parametric feature extraction (FE) based on Fisher's linear discriminant analysis (LDA), and their combination as means of dimensionality reduction are analysed with respect to the performance of different classifiers. Three commonly used classifiers are taken for analysis: kNN, Naive Bayes, and the C4.5 decision tree. Recently, it has been argued that it is extremely important to use class information in FE for supervised learning (SL). However, LDA-based FE, although it uses class information, has a serious shortcoming due to its parametric nature: the number of extracted components cannot be more than the number of classes minus one. Besides, as its name suggests, LDA works mostly for linearly separable classes only. In this paper we study whether it is possible to overcome these shortcomings by adding the most significant principal components to the set of features extracted with LDA. In experiments on 21 benchmark datasets from the UCI repository, the two approaches (PCA and LDA) are compared with each other, and with their combination, for each classifier. Our results demonstrate that such a combination approach has certain potential, especially when applied to C4.5 decision tree learning. However, from a practical point of view the combination approach cannot be recommended for Naive Bayes, since its behavior is very unstable across datasets.
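The combination strategy above (augmenting the at most classes-minus-one LDA components with leading principal components) can be sketched for a two-class problem. A NumPy toy with hypothetical data, not the paper's experimental code:

```python
import numpy as np

def pca_components(X, k):
    """Top-k principal directions of the centered data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k].T                                   # shape (features, k)

def lda_direction(X, y):
    """Two-class Fisher direction: at most classes-1 = 1 component."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)
    w = np.linalg.solve(Sw, m1 - m0)
    return (w / np.linalg.norm(w))[:, None]           # shape (features, 1)

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (40, 6)), rng.normal(1, 1, (40, 6))])
y = np.repeat([0, 1], 40)

# Combined representation: 1 LDA component plus 2 principal components
W = np.hstack([lda_direction(X, y), pca_components(X, 2)])
Z = X @ W
print(Z.shape)   # (80, 3): three extracted features instead of six
```

Any downstream classifier (kNN, Naive Bayes, C4.5) would then be trained on `Z` rather than on the raw features.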
EN
The paper presents a novel method for reducing the dimensionality of large datasets (e.g. databases of human faces). It does not incorporate any of the usual pre-processing stages (such as down-scaling or filtering). Its main advantage is an efficient representation of images that leads to accurate recognition. The reduction is realized by a modified Linear Discriminant Analysis. In the paper, the authors present its mathematical principles together with the results of practical recognition experiments on popular facial databases (ORL, BioID).
18
Content available remote Verification of the Credit Granting Decision by Selected Methods
EN
The aim of the paper is to present the results of applying artificial neural networks to firm classification and to verifying the credit granting decisions made by bank experts. The experiments are based on data regarding 115 small enterprises that applied for credit at two regional banks in Poland. The accuracy of classification is evaluated in terms of classification errors. To evaluate the efficiency of artificial neural networks, we compare the ANN results with those obtained by applying linear discriminant analysis and the k-means method.
19
Content available remote Flow Control in a Single Connection ATM Network with a Limited Source Capability
EN
In this paper the problem of flow control in a single-connection, fast communication network is considered. A new discrete-time algorithm governing the source behaviour is proposed. The algorithm ensures full link utilisation and no cell loss in the controlled virtual circuit; consequently, the need for cell retransmission is eliminated. These favourable properties are obtained not only for an ideal source, but also when, during particular periods of time, the source cannot send data at the rate determined by the controller.