Search results
Searched in keywords: metoda jądrowa ("kernel method")
Results found: 2
1.
EN
Radial basis function networks (RBFNs) and extreme learning machines (ELMs) can be seen as linear combinations of kernel functions (hidden neurons). Kernels can be constructed by a random process, as in ELMs; their positions can be initialized with a random subset of training vectors; or they can be constructed in a (sub-)learning process, for example by k-means. We found that kernels constructed with prototype selection algorithms provide very accurate and stable solutions. Moreover, prototype selection algorithms automatically choose not only the placement of the prototypes but also their number, so it is no longer necessary to estimate the number of kernels with time-consuming repeated train-test procedures. The best learning results are obtained by pseudo-inverse learning based on the singular value decomposition (SVD). The article compares several prototype selection algorithms combined with SVD-based learning. The comparison clearly shows that combining prototype selection with SVD learning of the network is significantly better than random kernel selection for the RBFN or the ELM, and also outperforms the support vector machine and kNN. Moreover, the presented learning scheme requires no parameters except the width of the Gaussian kernel.
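
The training scheme described in this abstract, Gaussian kernels placed on selected prototypes with output weights solved by an SVD-based pseudo-inverse, can be illustrated in a few lines of NumPy. This is a minimal sketch, not the paper's implementation: a random subset of training vectors stands in for a dedicated prototype selection algorithm, and all names (train_rbfn, sigma, ...) are illustrative assumptions.

    import numpy as np

    def gaussian_design(X, prototypes, sigma):
        # Pairwise squared distances between samples and prototype centers
        d2 = ((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * sigma ** 2))   # Gaussian kernel activations

    def train_rbfn(X, y, prototypes, sigma=1.0):
        H = gaussian_design(X, prototypes, sigma)
        # Output weights via the Moore-Penrose pseudo-inverse;
        # np.linalg.pinv computes it through the SVD of H.
        return np.linalg.pinv(H) @ y

    def predict_rbfn(X, prototypes, w, sigma=1.0):
        return gaussian_design(X, prototypes, sigma) @ w

    # Usage: random prototypes as a stand-in for a prototype selection algorithm
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.sign(X[:, 0] * X[:, 1])                # XOR-like labels in {-1, +1}
    protos = X[rng.choice(len(X), size=20, replace=False)]
    w = train_rbfn(X, y, protos, sigma=1.0)
    accuracy = np.mean(np.sign(predict_rbfn(X, protos, w)) == y)

Once a prototype selection algorithm fixes the number and placement of the kernels, the Gaussian width sigma is the only free parameter left, which matches the claim made in the abstract.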
2
Kernel Ho-Kashyap classifier with generalization control
EN
This paper introduces a new classifier design method based on a kernel extension of the classical Ho-Kashyap procedure. The proposed method uses an approximation of the absolute error rather than the squared error to design a classifier, which leads to robustness against outliers and a better approximation of the misclassification error. Additionally, easy control of the generalization ability is obtained using the structural risk minimization induction principle from statistical learning theory. Finally, examples are given to demonstrate the validity of the introduced method.
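
The method extends the classical Ho-Kashyap procedure, so a minimal NumPy sketch of that classical linear, squared-error version is given below for reference. The kernel extension, the absolute-error criterion, and the generalization control described in the paper are not reproduced here; the function and parameter names are illustrative assumptions.

    import numpy as np

    def ho_kashyap(X, labels, rho=0.9, n_iter=500, tol=1e-6):
        # labels: array of +1 / -1 class labels.
        # Augment with a bias column and "normalize" the patterns:
        # rows of class -1 are negated, so separation means Y @ a > 0.
        Y = np.hstack([X, np.ones((len(X), 1))]) * labels[:, None]
        b = np.ones(len(Y))                # positive margin vector
        Yp = np.linalg.pinv(Y)             # pseudo-inverse, computed once
        a = Yp @ b                         # least-squares solution of Y a = b
        for _ in range(n_iter):
            e = Y @ a - b                  # residual of Y a = b
            b = b + rho * (e + np.abs(e))  # grow margins only where e > 0
            a = Yp @ b
            if np.all(np.abs(e) < tol):    # converged: Y a ~= b > 0
                break
        return a                           # augmented weight vector [w; bias]

Because b may only grow where the residual is positive, the iteration either converges to a separating weight vector or stalls with a non-positive residual, which signals that the classes are not linearly separable.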