Search results
Searched in keywords: prototype selection
Results found: 3
1
EN
Radial basis function networks (RBFNs) and extreme learning machines (ELMs) can be seen as linear combinations of kernel functions (hidden neurons). Kernels can be constructed by a random process, as in ELMs; their positions can be initialized with a random subset of the training vectors; or they can be constructed in a (sub-)learning process, for example by k-means. We found that kernels constructed with prototype selection algorithms provide very accurate and stable solutions. Moreover, prototype selection algorithms automatically choose not only the placement of the prototypes but also their number, so it is no longer necessary to estimate the number of kernels with time-consuming multiple train-test procedures. The best learning results are obtained by pseudo-inverse learning with a singular value decomposition (SVD) algorithm. The article presents a comparison of several prototype selection algorithms working in tandem with SVD-based learning. The comparison clearly shows that the combination of prototype selection and SVD learning of a neural network is significantly better than an RBFN or ELM with randomly selected kernels, a support vector machine, or kNN. Moreover, the presented learning scheme requires no parameters except the width of the Gaussian kernel.
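To make the learning scheme above concrete, here is a minimal Python/NumPy sketch of fitting the linear output layer of a Gaussian-kernel network with an SVD-based pseudo-inverse. The prototype selection step itself is not reproduced; the `centers` subset is a hypothetical stand-in for prototypes returned by any of the compared algorithms, and the function names (`gaussian_kernel_matrix`, `fit_rbfn_svd`, `predict_rbfn`) are illustrative, not from the paper. The Gaussian width `sigma` is the single remaining free parameter mentioned in the abstract.

```python
import numpy as np

def gaussian_kernel_matrix(X, centers, sigma):
    """Gaussian activations between samples X and kernel centers."""
    sq_dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def fit_rbfn_svd(X, Y, centers, sigma):
    """Solve the linear output layer H @ W = Y with the SVD pseudo-inverse."""
    H = gaussian_kernel_matrix(X, centers, sigma)
    return np.linalg.pinv(H) @ Y  # np.linalg.pinv is computed via SVD

def predict_rbfn(X, centers, sigma, W):
    return gaussian_kernel_matrix(X, centers, sigma) @ W

# Toy usage: in the paper the centers come from a prototype selection
# algorithm; here an arbitrary training subset is a stand-in.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)   # XOR-like labels
Y = np.stack([1.0 - y, y], axis=1)          # one-hot targets
centers = X[:20]                            # stand-in for selected prototypes
W = fit_rbfn_svd(X, Y, centers, sigma=1.0)
pred = predict_rbfn(X, centers, sigma=1.0, W=W).argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```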
2
EN
The Modified Condensed Nearest Neighbour (MCNN) algorithm for prototype selection is order-independent, unlike the Condensed Nearest Neighbour (CNN) algorithm. Although MCNN gives better performance, its time requirement is much higher than CNN's. To mitigate this, we propose a distributed approach called Parallel MCNN (pMCNN), which drastically reduces the runtime while maintaining good performance. We also propose two incremental algorithms based on MCNN that carry out prototype selection on large and streaming data. The results of these algorithms using MCNN and pMCNN are compared with an existing algorithm for streaming data.
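For readers unfamiliar with MCNN, the following rough Python sketch shows the core idea behind its order-independence: instead of scanning samples one by one (as CNN does), each round classifies the whole training set against the current prototypes and then adds one class-representative prototype per class of misclassified samples. The function names and the choice of "sample nearest the centroid" as the representative are assumptions for illustration; the paper's pMCNN additionally distributes this work (presumably over data partitions), which is omitted here.

```python
import numpy as np

def nearest_prototype_labels(X, protos, proto_labels):
    """1-NN classification of X against the current prototype set."""
    d = ((X[:, None, :] - protos[None, :, :]) ** 2).sum(axis=2)
    return proto_labels[d.argmin(axis=1)]

def mcnn_select(X, y, max_rounds=100):
    """Sketch of the MCNN idea: repeatedly add one representative prototype
    per class for the currently misclassified samples, until 1-NN on the
    prototypes classifies the whole training set correctly."""
    # Seed with the sample closest to each class centroid (order-independent).
    protos, proto_labels = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        protos.append(Xc[((Xc - Xc.mean(axis=0)) ** 2).sum(axis=1).argmin()])
        proto_labels.append(c)
    for _ in range(max_rounds):
        P, L = np.asarray(protos), np.asarray(proto_labels)
        wrong = nearest_prototype_labels(X, P, L) != y
        if not wrong.any():
            break
        # Add, per class, the misclassified sample nearest to the centroid
        # of that class's misclassified samples.
        for c in np.unique(y[wrong]):
            Xc = X[wrong & (y == c)]
            protos.append(Xc[((Xc - Xc.mean(axis=0)) ** 2).sum(axis=1).argmin()])
            proto_labels.append(c)
    return np.asarray(protos), np.asarray(proto_labels)
```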
3
Content available: The PM-M prototype selection system
EN
In this paper, the algorithm realizing the author's prototype selection method, called PM-M (Partial Memory - Minimization), is described in detail. Computational experiments carried out with the raw PM-M model and with its majority ensembles indicate that even though the selected prototype sets amount, on average, to only about five percent of the original training data, the classification results remain in good statistical agreement with the 1-Nearest Neighbor (IB1) model trained on the original (i.e., unpruned) data. It is also shown that the system under study is competitive in generalization ability with other well-established prototype selection systems, such as CHC, SSMA, and GGA. Moreover, the proposed algorithm runs approximately one to three orders of magnitude faster than the reference prototype classifiers used for comparison. It is also demonstrated that, unlike most other prototype methods, which have to rely on stratification, the PM-M system can be applied directly to the analysis of very large data.
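The abstract does not spell out how the majority ensembles are built, so the sketch below only illustrates the general pattern it refers to: several 1-NN classifiers, each trained on its own selected prototype subset, combined by majority vote. The `select_prototypes` call in the usage comment is a hypothetical placeholder for an independent PM-M selection run, and integer class labels are assumed.

```python
import numpy as np

def one_nn_predict(X, protos, proto_labels):
    """1-NN prediction against a single member's prototype set."""
    d = ((X[:, None, :] - protos[None, :, :]) ** 2).sum(axis=2)
    return proto_labels[d.argmin(axis=1)]

def majority_ensemble_predict(X, members):
    """Majority vote over (prototypes, labels) pairs, one per ensemble member.
    Assumes nonnegative integer class labels (required by np.bincount)."""
    votes = np.stack([one_nn_predict(X, P, L) for P, L in members])
    # Per-sample majority: most frequent label down the member axis.
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

# Hypothetical usage: each member gets prototypes from an independent
# PM-M run (select_prototypes is a placeholder, not the paper's API).
# members = [select_prototypes(X_train, y_train, seed=s) for s in range(5)]
# y_pred = majority_ensemble_predict(X_test, members)
```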