Search results
Query: keyword "kernel methods"
Results found: 9
EN
In recent years, kernel methods have provided an important alternative, as they offer a simple way of extending linear algorithms to the non-linear case. In this paper, we propose a novel recursive kernel approach for identifying the finite impulse response (FIR) of non-linear systems with binary-valued output observations. This approach employs a kernel function to perform an implicit data mapping: the data are transformed into a high-dimensional feature space in which the relations between the variables become linear. To assess the performance of the proposed approach, we compared it with two other algorithms, the proportionate normalized least-mean-square (PNLMS) algorithm and the improved PNLMS (IPNLMS) algorithm. For this purpose, we used three measurable frequency-selective fading radio channels, known as the Broadband Radio Access Network channels (BRAN C, BRAN D, and BRAN E), standardized by the European Telecommunications Standards Institute (ETSI), and one theoretical frequency-selective channel, known as Macchi's channel. Simulation results show that the proposed algorithm performs better, even in highly noisy environments, and yields a lower mean square error (MSE) than PNLMS and IPNLMS.
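As an illustration of the implicit kernel mapping idea described above, the following minimal Python sketch identifies a toy FIR channel observed through a binary non-linearity, using a Gaussian (RBF) kernel and kernel ridge regression. It is a generic kernel-regression sketch on assumed toy data, not the authors' recursive algorithm and not the BRAN channel measurements.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel: k(x, y) = exp(-gamma * ||x - y||^2)
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * d2)

# Toy FIR channel observed through a hard non-linearity (sign -> binary output).
# Purely illustrative data, not the measured channels used in the paper.
rng = np.random.default_rng(0)
h = np.array([0.8, -0.4, 0.2])                           # unknown FIR impulse response
u = rng.standard_normal(300)                             # input sequence
X = np.column_stack([np.roll(u, k) for k in range(3)])   # regressor vectors
y = np.sign(X @ h + 0.05 * rng.standard_normal(300))     # binary observations

# Kernel ridge regression: alpha = (K + lambda*I)^-1 y, f(x) = sum_i alpha_i k(x_i, x)
K = rbf_kernel(X, X, gamma=0.5)
alpha = np.linalg.solve(K + 1e-2 * np.eye(len(y)), y)

x_new = X[:5]                                            # predict on a few regressors
y_hat = rbf_kernel(x_new, X, gamma=0.5) @ alpha
print(np.sign(y_hat), y[:5])
```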
EN
Estimating the connectivity of a network from events observed at each node has many applications. One prominent example is found in neuroscience, where spike trains (sequences of action potentials) are observed at each neuron, but the way in which these neurons are connected is unknown. This paper introduces a novel method for estimating connections between nodes using a similarity measure between sequences of event times. Specifically, a normalized positive definite kernel defined on spike trains was used. The proposed method was evaluated using synthetic and real data, by comparing with methods using transfer entropy and the Victor-Purpura distance. Synthetic data was generated using CERM (Coupled Escape-Rate Model), a model that generates various spike trains. Real data recorded from the visual cortex of an anaesthetized cat was analyzed as well. The results showed that the proposed method provides an effective way of estimating the connectivity of a network when the time sequences of events are the only available information.
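A minimal sketch of one common normalized positive definite spike-train kernel (a sum of Gaussian bumps over all pairs of spike times) is given below; the specific kernel used in the paper is not detailed in the abstract, so this particular choice, the time constant, and the toy spike trains are assumptions.

```python
import numpy as np

def spike_kernel(s, t, tau=0.01):
    # Unnormalized kernel: sum over all spike-time pairs of a Gaussian bump of width tau.
    d = s[:, None] - t[None, :]
    return np.exp(-d**2 / (2 * tau**2)).sum()

def normalized_spike_kernel(s, t, tau=0.01):
    # Normalization keeps the similarity in [0, 1] and makes it independent of spike count.
    return spike_kernel(s, t, tau) / np.sqrt(spike_kernel(s, s, tau) * spike_kernel(t, t, tau))

# Two toy spike trains (spike times in seconds); real data would be recorded trains.
a = np.array([0.010, 0.052, 0.130, 0.200])
b = np.array([0.012, 0.055, 0.190])
print(normalized_spike_kernel(a, b, tau=0.01))
```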
EN
Supervised kernel Principal Component Analysis (S-kPCA) is a method for producing discriminative feature spaces that provide nonlinear decision regions, well suited to handling real-world problems. This paper proposes a modification of the original S-kPCA concept aimed at improving class separation in the resulting feature spaces. This is accomplished by identifying outliers (understood here as misclassified samples) and by an appropriate reformulation of the original S-kPCA problem. The proposed idea is to replace the binary class labels used in the original method with real-valued ones, derived using a sample-relabeling scheme designed to prevent potential classification problems. The postulated concept has been tested on three standard pattern recognition datasets. It is shown that classification performance in feature spaces derived using the introduced methodology improves by 4–16% with respect to the original S-kPCA method, depending on the dataset.
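The following sketch shows the plain kernel PCA building block on which S-kPCA is based: eigendecomposition of the centered Gaussian kernel matrix. The supervised relabeling step of S-kPCA is not reproduced; the data and parameters are illustrative assumptions.

```python
import numpy as np

def kpca(X, gamma=1.0, n_components=2):
    # Gram matrix with a Gaussian kernel.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
    K = np.exp(-gamma * d2)
    # Center the kernel matrix in feature space.
    n = len(X)
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # Eigendecomposition; the leading eigenvectors define the nonlinear components.
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas            # projections of the training samples

X = np.random.default_rng(1).standard_normal((100, 5))
Z = kpca(X, gamma=0.3, n_components=2)
print(Z.shape)   # (100, 2)
```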
EN
In this paper, the analyzed grained material was hard coal collected from one of the mines in Upper Silesia. The material was taken from a fine coal jig, where it had been separated under industrial conditions into concentrate and waste. It was then screened into size fractions and separated in dense media into density fractions. Both the particle size and the particle density distributions of the feed and the concentrate were approximated by several classical distribution functions, with the best results obtained for the Weibull (RRB) distribution. However, because the quality of these approximations was unsatisfactory, it was decided to apply non-parametric statistical methods, which are becoming an increasingly popular alternative in statistical investigations. In this paper, kernel methods were used for this purpose, with the Gaussian kernel as the kernel function. The kernel method, although relatively new, gave much better approximations than the classical distribution functions fitted by the least-squares method. Both the classical and the non-parametric approximations were evaluated by means of the mean square error, whose values show that they approximate the empirical data sufficiently well. The resulting functions were then used to determine the theoretical distribution function of the vector (D, Ρ), where D is the random variable describing particle size and Ρ its density. This approximation also reproduced the data satisfactorily and was therefore used to determine the equation of the partition surface, dependent on both particle size and particle density, describing the investigated material. The obtained surface shows that it is possible to evaluate the separation that occurs during mineral processing operations, such as jigging, using more than one feature of the material. Moreover, its quality confirms that the choice of non-parametric statistical methods as an alternative to commonly used approximation methods was justified.
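As an illustration of the kernel density estimation used above, the following sketch fits a bivariate Gaussian KDE to a synthetic (D, Ρ) sample with scipy.stats.gaussian_kde; the placeholder data and units are assumptions, not the coal measurements.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic placeholder sample: particle size D (mm) and particle density P (Mg/m^3).
rng = np.random.default_rng(2)
D = rng.lognormal(mean=1.0, sigma=0.5, size=500)
P = 1.3 + 0.6 * rng.beta(2.0, 5.0, size=500)

# Bivariate Gaussian KDE of the joint distribution of (D, P).
kde = gaussian_kde(np.vstack([D, P]))

# Evaluate the estimated joint density on a small grid.
d_grid = np.linspace(D.min(), D.max(), 50)
p_grid = np.linspace(P.min(), P.max(), 50)
DD, PP = np.meshgrid(d_grid, p_grid)
density = kde(np.vstack([DD.ravel(), PP.ravel()])).reshape(DD.shape)
print(density.shape)   # (50, 50)
```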
EN
The presented paper is concerned with feature space derivation through feature selection. The selection is performed on the results of kernel Principal Component Analysis (kPCA) of the input data samples. Several criteria that drive the feature selection process are introduced, and their performance is assessed and compared against the reference approach, which combines kPCA with most-expressive-feature reordering based on the Fisher linear discriminant criterion. It is shown that some of the proposed modifications yield feature spaces with noticeably better (by approximately 4%) class discrimination properties.
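One simple stand-in for the selection criteria discussed above is a per-feature Fisher discriminant ratio computed on kPCA features; the sketch below assumes a hypothetical feature matrix Z and binary labels y, and is not the paper's exact criterion.

```python
import numpy as np

def fisher_ratio(Z, y):
    # Per-feature Fisher criterion for two classes:
    # (m0 - m1)^2 / (s0^2 + s1^2), computed independently for each column of Z.
    Z0, Z1 = Z[y == 0], Z[y == 1]
    num = (Z0.mean(0) - Z1.mean(0)) ** 2
    den = Z0.var(0) + Z1.var(0) + 1e-12
    return num / den

# Hypothetical kPCA feature matrix Z (samples x components) and binary labels y.
rng = np.random.default_rng(3)
Z = rng.standard_normal((200, 10))
y = rng.integers(0, 2, size=200)
Z[y == 1, 0] += 2.0                 # make the first feature discriminative

ranking = np.argsort(fisher_ratio(Z, y))[::-1]
print(ranking[:3])                  # most discriminative kPCA components first
```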
EN
The paper presents a non-parametric assessment of the reliability indices of the Polish distribution system operators. On the basis of the available data, the probability density functions of the SAIDI, SAIFI and MAIFI indices of the national distribution system were determined using kernel estimators.
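A minimal sketch of the univariate Gaussian kernel density estimator applied to a reliability index is shown below; the SAIDI values are synthetic placeholders and the bandwidth is chosen by hand.

```python
import numpy as np

def gaussian_kde_1d(x, samples, h):
    # Kernel density estimator: f_hat(x) = (1 / (n*h)) * sum_i K((x - x_i) / h),
    # with the Gaussian kernel K(u) = exp(-u^2 / 2) / sqrt(2*pi).
    u = (x[:, None] - samples[None, :]) / h
    K = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    return K.sum(axis=1) / (len(samples) * h)

# Synthetic placeholder SAIDI observations (minutes per customer per year).
saidi = np.array([310.0, 275.0, 412.0, 188.0, 356.0, 240.0, 295.0, 330.0])

grid = np.linspace(100.0, 500.0, 200)
f_hat = gaussian_kde_1d(grid, saidi, h=40.0)   # bandwidth h chosen by hand here
print(grid[np.argmax(f_hat)])                  # mode of the estimated density
```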
EN
In this paper, a new nonlinear version of the LMS algorithm, based on kernel functions, is applied to the identification of nonlinear systems. To keep the number of support vectors fixed, which is a prerequisite for a practical kernel-based algorithm, a pruning criterion is used: after a new input vector is admitted to the dictionary, the entry with the least influence on the resulting nonlinear model is found and discarded. A nonlinear system identification example is presented, showing that the algorithm achieves performance comparable to algorithms that use a larger number of support vectors.
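The sketch below illustrates the general kernel LMS idea with a fixed-size dictionary; the pruning rule (drop the entry with the smallest coefficient magnitude) is a simple stand-in for the influence-based criterion described above, not necessarily the authors' rule, and the toy system is an assumption.

```python
import numpy as np

def rbf(x, c, gamma=1.0):
    # Gaussian kernel between one input vector x and each dictionary centre in c.
    return np.exp(-gamma * np.sum((c - x) ** 2, axis=1))

def kernel_lms(X, d, eta=0.3, gamma=1.0, max_dict=30):
    centres, coeffs, errors = [], [], []
    for x, target in zip(X, d):
        # Predict with the current kernel expansion.
        y = np.dot(coeffs, rbf(x, np.array(centres), gamma)) if centres else 0.0
        e = target - y
        errors.append(e)
        # Admit the new input to the dictionary with an LMS-style coefficient.
        centres.append(x)
        coeffs.append(eta * e)
        # Prune: discard the entry with the smallest influence (here, |coefficient|).
        if len(centres) > max_dict:
            k = int(np.argmin(np.abs(coeffs)))
            centres.pop(k)
            coeffs.pop(k)
    return np.array(errors)

# Toy nonlinear system: d(n) depends nonlinearly on the last two inputs.
rng = np.random.default_rng(4)
u = rng.uniform(-1, 1, 500)
X = np.column_stack([u, np.roll(u, 1)])
d = np.tanh(2 * X[:, 0]) - 0.5 * X[:, 1] ** 2 + 0.01 * rng.standard_normal(500)

err = kernel_lms(X, d)
print(np.mean(err[:50] ** 2), np.mean(err[-50:] ** 2))   # error should decrease
```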
8
Kernel Based Subspace Methods: Infrared vs Visible Face Recognition
EN
This paper investigates the use of kernel theory in two well-known linear subspace representations: Principal Component Analysis (PCA) and Fisher's Linear Discriminant Analysis (FLD). Kernel-based methods provide subspaces of high-dimensional feature spaces induced by nonlinear mappings. The focus of this work is to evaluate the performance of Kernel Principal Component Analysis (KPCA) and Kernel Fisher's Linear Discriminant Analysis (KFLD) for infrared (IR) and visible face recognition. The performance of the kernel-based subspace methods is compared with that of the conventional linear algorithms, PCA and FLD. The main contribution of this paper is the evaluation, using the kernel-based subspace methods, of the sensitivity of both IR and visible face images to illumination conditions, facial expressions and facial occlusions caused by eyeglasses.
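A minimal KPCA-based recognition pipeline is sketched below using scikit-learn's KernelPCA followed by a 1-nearest-neighbour classifier; the flattened "face" vectors are random placeholders, not the IR or visible face data evaluated in the paper.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Placeholder "face" vectors: two subjects, 32x32 images flattened to 1024 features.
rng = np.random.default_rng(5)
subject_means = rng.standard_normal((2, 1024))
X = np.vstack([m + 0.5 * rng.standard_normal((20, 1024)) for m in subject_means])
y = np.repeat([0, 1], 20)

# KPCA projects images into a nonlinear subspace; a 1-NN classifier matches identities.
model = make_pipeline(
    KernelPCA(n_components=10, kernel="rbf", gamma=1e-3),
    KNeighborsClassifier(n_neighbors=1),
)
model.fit(X, y)
print(model.score(X, y))   # training accuracy on the toy data
```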
9
Kernel Ho-Kashyap classifier with generalization control
EN
This paper introduces a new classifier design method based on a kernel extension of the classical Ho-Kashyap procedure. The proposed method uses an approximation of the absolute error rather than the squared error to design a classifier, which leads to robustness against outliers and a better approximation of the misclassification error. Additionally, easy control of the generalization ability is obtained using the structural risk minimization induction principle from statistical learning theory. Finally, examples are given to demonstrate the validity of the introduced method.
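For reference, the following sketch implements the classical (linear) Ho-Kashyap procedure that the paper extends; the kernelized, absolute-error, SRM-controlled variant is not reproduced, and the toy data are assumptions.

```python
import numpy as np

def ho_kashyap(Y, rho=0.5, b0=1.0, n_iter=200):
    # Y: class-normalized, augmented pattern matrix (rows of class 2 multiplied by -1).
    # Seek weights a and margins b > 0 with Y a ~= b; misclassified samples give e_i < 0.
    n, _ = Y.shape
    b = np.full(n, b0)
    Y_pinv = np.linalg.pinv(Y)
    a = Y_pinv @ b
    for _ in range(n_iter):
        e = Y @ a - b
        b = b + rho * (e + np.abs(e))   # only positive errors enlarge the margins
        a = Y_pinv @ b
    return a, b

# Toy linearly separable data.
rng = np.random.default_rng(6)
X1 = rng.standard_normal((30, 2)) + [2, 2]
X2 = rng.standard_normal((30, 2)) - [2, 2]
X = np.vstack([X1, X2])
labels = np.hstack([np.ones(30), -np.ones(30)])
Y = np.column_stack([X, np.ones(60)]) * labels[:, None]   # augment and class-normalize

a, _ = ho_kashyap(Y)
pred = np.sign(np.column_stack([X, np.ones(60)]) @ a)
print((pred == labels).mean())   # should be close to 1.0 on separable data
```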