Search results
Searched for keyword: function approximation
Results found: 18
EN
Artificial neural networks are essential intelligent tools for various learning tasks. Training them is challenging because of the nature of the data set, the large number of training weights, and their interdependence, which give rise to a complicated, high-dimensional error function to minimize. Global optimization methods have therefore become an alternative approach, and many variants of differential evolution (DE) have been applied as training methods to approximate the weights of a neural network. Empirical studies show, however, that they suffer from the use of fixed weight bounds. In this research, we propose an enhanced differential evolution algorithm with adaptive weight bound adjustment (DEAW) for the efficient training of neural networks. The DEAW algorithm uses small initial weight bounds and adjusts them adaptively during the mutation process, gradually extending a bound when a component of a mutant vector reaches its limit. We also experiment with several scales of the activation function in combination with the DEAW algorithm. We then apply the proposed method, with a suitable setting, to function approximation problems, where DEAW achieves satisfactory results compared with exact solutions.
PL
Artificial neural networks are essential intelligent tools for various learning tasks. Training them is challenging because of the nature of the data set, the many training weights, and their dependencies, which give rise to a complicated, high-dimensional error function to be minimized. Global optimization methods have therefore become an alternative approach. Many variants of differential evolution (DE) have been applied as training methods to approximate the weights of a neural network. However, empirical studies show that they suffer from generally fixed weight bounds. In this study, we propose an enhanced differential evolution algorithm with adaptive weight bound adjustment (DEAW) for the efficient training of neural networks. The DEAW algorithm uses small initial weight bounds and adapts them during the mutation process, gradually extending the bounds when a component of a mutant vector reaches its limits. We also experiment with several scales of the activation function in combination with the DEAW algorithm. We then apply the proposed method, with a suitable setting, to function approximation problems. DEAW achieves satisfactory results compared with exact solutions.
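The abstract only sketches the mechanism, but the core idea (extend a weight bound instead of clipping hard when a mutant component hits it) can be illustrated directly. Below is a minimal Python sketch, assuming a DE/rand/1 mutation, symmetric initial bounds such as [-1, 1], and a hypothetical extend_factor; none of these details come from the paper itself.

```python
import numpy as np

def de_mutation_adaptive_bounds(pop, lo, hi, F=0.5, extend_factor=1.5):
    """DE/rand/1 mutation in which a weight bound is extended, rather than the
    mutant being clipped hard, whenever a mutant component hits the bound.
    A sketch of the adaptive-bound idea described in the abstract."""
    n, d = pop.shape
    mutants = np.empty_like(pop)
    for i in range(n):
        r1, r2, r3 = np.random.choice([j for j in range(n) if j != i], 3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])
        # adaptive adjustment: grow the bound where the mutant exceeds it
        lo[v < lo] *= extend_factor   # assumes bounds symmetric around 0, e.g. [-1, 1]
        hi[v > hi] *= extend_factor
        mutants[i] = np.clip(v, lo, hi)
    return mutants, lo, hi

# toy usage: 20 candidate weight vectors of dimension 10, initial bounds [-1, 1]
pop = np.random.uniform(-1, 1, (20, 10))
mutants, lo, hi = de_mutation_adaptive_bounds(pop, np.full(10, -1.0), np.full(10, 1.0))
```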
EN
Two ways of approximating the BEM kernel singularity are presented in this paper. Based on these approximations, an extensive error analysis was carried out. Precision and simplicity of the approximation were chosen as the criteria: simplicity because the approach is intended for tomography problems, where execution time plays a particularly significant role. The approximation that can be applied over a wide range of kernel arguments was selected.
PL
Two methods of approximating the singularity of the Green's function are proposed in this paper. Based on these approximations, a thorough error analysis was carried out. Accuracy and simplicity of the proposed approximations were chosen as the criteria: simplicity because this approach is intended for tomography problems, so computation time plays an essential role. An approximation that can be used over a wide range of arguments was selected.
3
EN
We review recent work characterizing the classes of functions for which deep learning can be exponentially better than shallow learning. Deep convolutional networks are a special case of these conditions, though weight sharing is not the main reason for their exponential advantage.
4
Mini-model method based on k-means clustering
EN
The mini-model method (MM-method) is an instance-based learning algorithm, similar to the k-nearest neighbor method, the GRNN network, or the RBF network, but its idea is different: an MM operates only on data from the local neighborhood of a query. The paper presents a new version of the MM-method based on the k-means clustering algorithm. The domain of the model is determined using the k-means algorithm, and this clustering step makes the learning procedure simpler.
PL
The mini-model method (MM-method) is an instance-based algorithm, like the k-nearest neighbors method, the RBF network, or the GRNN network, but its principle of operation is different: an MM operates only on data from the nearest neighborhood of the query point. The article presents a new version of the MM-method based on the k-means algorithm. The domain of the MM is computed with the k-means algorithm; using a clustering algorithm has simplified the learning procedure.
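As a rough illustration of the idea (the paper's exact procedure is not given in the abstract), the sketch below clusters the data with k-means and fits a local linear model only on the cluster containing the query; the function name and the choice of a linear local model are our assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

def mini_model_predict(X, y, query, n_clusters=5):
    """Fit a local linear model only on the k-means cluster that contains the
    query point, in the spirit of the k-means based mini-model method."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(X)
    label = km.predict(query.reshape(1, -1))[0]
    mask = km.labels_ == label                      # the "domain" of the mini-model
    local = LinearRegression().fit(X[mask], y[mask])
    return local.predict(query.reshape(1, -1))[0]

# toy usage: approximate y = sin(x1) + x2 near a query point
X = np.random.uniform(-3, 3, (300, 2)); y = np.sin(X[:, 0]) + X[:, 1]
print(mini_model_predict(X, y, np.array([0.5, -1.0])))
```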
EN
In the present investigation, artificial neural networks are applied to model the scattering and absorption properties arising in particle-radiation interaction for the numerical simulation of pulverized coal combustion. To determine averaged scattering and absorption properties, an averaging procedure over the spectral incident radiation profile and the particle size distribution is applied. These averaged properties are then approximated by means of an artificial neural network. A study to determine a suitable network architecture is performed.
EN
Different groups of free radicals exist in biological material such as animal tissues or plant parts. Processes such as heating or cooling create additional groups of free radicals in this organic matter due to changes in chemical bonds. The paper proposes a method to determine the types and concentrations of different groups of free radicals in matter processed at various temperatures. The method extracts the spectrum of free radicals using electron paramagnetic resonance with a microwave power of 2.2 mW. An automatic method is then proposed to find the best possible fit using a limited number of theoretical mathematical functions. The match is found using spectrum filtration and a genetic algorithm supported by a gradient method. The obtained results were compared against samples prepared by an expert. Finally, some remarks are given and possibilities for future research are proposed.
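The fitting step described above can be sketched generically: approximate a measured spectrum by a small number of theoretical line shapes, using a global evolutionary search refined by a gradient method. The sketch below uses SciPy's differential evolution as a stand-in for the paper's genetic algorithm and plain Gaussian lines instead of true EPR derivative line shapes; both are simplifying assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def gaussian_mix(params, x, n_lines=2):
    """Sum of Gaussian lines; params = (amplitude, center, width) per line."""
    y = np.zeros_like(x)
    for a, c, w in np.reshape(params, (n_lines, 3)):
        y += a * np.exp(-((x - c) ** 2) / (2 * w ** 2))
    return y

def fit_spectrum(x, spectrum, n_lines=2):
    """Global evolutionary search followed by gradient-based refinement."""
    loss = lambda p: np.sum((gaussian_mix(p, x, n_lines) - spectrum) ** 2)
    bounds = [(0, 10), (x.min(), x.max()), (0.01, 5)] * n_lines
    rough = differential_evolution(loss, bounds, seed=0)                   # evolutionary stage
    refined = minimize(loss, rough.x, method='L-BFGS-B', bounds=bounds)    # gradient stage
    return refined.x

# toy usage: recover two synthetic lines from a noisy spectrum
x = np.linspace(-10, 10, 400)
true = gaussian_mix([3, -2, 0.8, 1.5, 4, 1.2], x)
print(fit_spectrum(x, true + np.random.normal(0, 0.05, x.size)))
```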
7
EN
Mini-models are local regression models which can be used for function approximation learning. The paper presents mini-models based on hyper-spheres and hyper-ellipsoids; experiments were carried out for linear and nonlinear models with no limitation on the dimensionality of the problem input space. Learning of the approximation function based on mini-models is very fast and proved to have good accuracy. Mini-models also have very advantageous extrapolation properties.
PL
Mini-models are local regression models that can be used for function approximation. The article describes mini-models with hyper-spherical and hyper-ellipsoidal bases, together with experiments for linear and nonlinear mini-models with no limitation on the dimensionality of the input space. Learning of the approximating function based on mini-models is fast, and the function itself has good accuracy and favorable extrapolation properties.
EN
The paper describes a new method, based on information-gap theory, which enables evaluation of worst-case error predictions of the kNN method in the presence of a specified level of uncertainty in the data. The concepts of robustness and opportunity of the kNN model are presented, and calculations of these quantities were performed first for a simple 1-D data set and then for a more complicated 6-D data set. In both cases the method worked correctly and enabled evaluation of the robustness and the opportunity for a given lowest acceptable quality rc or windfall quality rw. The method also enabled choosing the most robust kNN model for a given level of uncertainty α.
PL
The article describes the application of information-gap theory to determining the largest error of a kNN model when uncertainty of a specified level occurs in the data. The concepts of robustness and opportunity of the kNN model are presented, together with examples of their evaluation for simple one-input data and for more complex six-input data. In both cases the method worked correctly and, in addition, made it possible to determine the most robust kNN model for a given uncertainty level α.
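Info-gap robustness here means the largest uncertainty level α for which the worst-case model error still meets a required quality rc. The sketch below estimates that quantity for a fitted kNN model by Monte-Carlo sampling of bounded input perturbations; this is a numerical stand-in rather than the analytical procedure of the paper, and the function names are hypothetical.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error

def worst_case_error(model, X, y, alpha, n_trials=200, seed=0):
    """Monte-Carlo estimate of the worst-case MSE of a fitted model when every
    input component may be perturbed by at most alpha (interval uncertainty)."""
    rng = np.random.default_rng(seed)
    return max(mean_squared_error(y, model.predict(X + rng.uniform(-alpha, alpha, X.shape)))
               for _ in range(n_trials))

def robustness(model, X, y, rc, alphas):
    """Largest alpha whose worst-case error still satisfies the required quality rc."""
    ok = [a for a in alphas if worst_case_error(model, X, y, a) <= rc]
    return max(ok) if ok else 0.0

# toy usage on 1-D data, mirroring the simple first experiment in the abstract
X = np.linspace(0, 6, 80).reshape(-1, 1); y = np.sin(X).ravel()
knn = KNeighborsRegressor(n_neighbors=5).fit(X, y)
print(robustness(knn, X, y, rc=0.05, alphas=np.linspace(0, 0.5, 11)))
```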
9
On some stability properties of polynomial functions
EN
In this paper we present conditions under which a function F with a control function f, in the following sense [formula], can be uniformly approximated by a polynomial function of degree at most n.
PL
The article points out certain aspects of implementing a feed-forward neural network in a parallel architecture using the MPI message-passing standard. The presented application example concerns the classical problem of function approximation. The influence of the number of launched processes on the efficiency of learning and of network operation was examined, and the negative impact of delays arising when data are transmitted over a LAN was demonstrated.
EN
In the paper, some characteristic features of feed-forward neural network implementation in a parallel computer architecture using the MPI communication protocol are investigated. Two fundamental methods of neural network parallelization are described: neural parallelization (Fig. 1) and synaptic parallelization (Fig. 2). Based on the presented methods, an original application implementing a feed-forward multilayer neural network was built. The application includes a Java runtime interface (Fig. 3) and a computational module based on the MPI communication protocol. The simulation tests consisted in applying the neural network to the classical problem of nonlinear function approximation. The effect of the number of processes on the network learning efficiency was examined (Fig. 4, Tab. 1). The negative effect of transmission time delays in the LAN is also demonstrated. The authors conclude that the computational advantages of neural network parallelization on a heterogeneous cluster consisting of several personal computers become apparent only for very complex neural networks composed of many thousands of neurons.
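A minimal mpi4py sketch of the "neural parallelization" scheme: each process owns a slice of the hidden layer, computes its partial contribution to the output, and the contributions are summed with Allreduce. The layer sizes and the exact partitioning are our assumptions, not the structure of the original application. Run with, for example, mpiexec -n 4 python neural_parallel.py (the file name is arbitrary).

```python
# Sketch of neural parallelization: each MPI process owns a slice of the
# hidden layer and computes its partial contribution to the network output.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_in, n_hidden, n_out = 2, 64, 1
chunk = n_hidden // size                       # assume n_hidden divisible by size

rng = np.random.default_rng(rank)              # each process initializes its own slice
W1 = rng.normal(size=(chunk, n_in))            # hidden weights owned by this process
W2 = rng.normal(size=(n_out, chunk))           # output weights for this slice

x = np.array([0.3, -0.7])
h = np.tanh(W1 @ x)                            # local hidden activations
partial = W2 @ h                               # local contribution to the output

y = np.zeros(n_out)
comm.Allreduce(partial, y, op=MPI.SUM)         # sum partial outputs over all processes
if rank == 0:
    print("network output:", y)
```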
PL
The article points out some characteristic aspects of using feed-forward neural networks as universal approximators of complex nonlinear relationships. The presented example concerns a classical problem in robotics, the so-called inverse kinematics task. The influence of a proper choice of the network structure, its learning algorithm, and the training patterns on the quality of the neural approximation is demonstrated.
EN
Characteristic features of feed-forward artificial neural networks acting as universal function approximators are presented. The problem under consideration concerns the inverse kinematics of a two-link planar manipulator (Fig. 1). As shown in the paper, a two-layer feed-forward neural network is able to learn the nonlinear mapping between the end-effector position domain and the joint angle domain of the manipulator (Fig. 2). However, a necessary condition for achieving the required approximation quality is proper selection of the network structure, especially the number of nonlinear sigmoidal units in its hidden layer. Using too few neurons in this layer results in underfitting (Fig. 3), while too many neurons bring the problem of overfitting (Figs 6 and 7). The effect of learning algorithm efficiency and of a proper choice of the learning data set on the network performance is also demonstrated (Fig. 8). Apart from general conclusions concerning neural approximation, the presented results also show the possibility of neural control of a robotic manipulator trajectory.
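A compact sketch of the kind of experiment described above, assuming arbitrary link lengths and scikit-learn's MLPRegressor as the two-layer feed-forward network; restricting the joint angles to [0, π/2] keeps the inverse mapping essentially single-valued, which is one simple way to sidestep the multiple-solution issue of inverse kinematics.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

l1, l2 = 1.0, 0.7                      # assumed link lengths
rng = np.random.default_rng(0)
q = rng.uniform([0, 0], [np.pi / 2, np.pi / 2], size=(5000, 2))   # joint angles

# forward kinematics of a two-link planar manipulator
x = l1 * np.cos(q[:, 0]) + l2 * np.cos(q[:, 0] + q[:, 1])
y = l1 * np.sin(q[:, 0]) + l2 * np.sin(q[:, 0] + q[:, 1])
P = np.column_stack([x, y])

# two-layer feed-forward network: end-effector position -> joint angles
net = MLPRegressor(hidden_layer_sizes=(40,), activation='tanh',
                   max_iter=2000, random_state=0).fit(P, q)
print("predicted joint angles:", net.predict(P[:1]))
```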
12
Approximation Spaces in Rough–Granular Computing
EN
We discuss some generalizations of the approximation space definition introduced in 1994 [24, 25]. These generalizations are motivated by real-life applications. Rough set based strategies for extension of such generalized approximation spaces from samples of objects onto their extensions are discussed. This enables us to present the uniform foundations for inducing approximations of different kinds of granules such as concepts, classifications, or functions. In particular, we emphasize the fundamental role of approximation spaces for inducing diverse kinds of classifiers used in machine learning or data mining.
13
Optimization in Discovery of Compound Granules
EN
The problem considered in this paper is the evaluation of perception as a means of optimizing various tasks. The solution to this problem hearkens back to early research on rough set theory and approximation. For example, in 1982, Ewa Orłowska observed that approximation spaces serve as a formal counterpart of perception. In this paper, the evaluation of perception is at the level of approximation spaces. The quality of an approximation space relative to a given approximated set of objects is a function of the description length of an approximation of the set of objects and the approximation quality of this set. In granular computing (GC), the focus is on discovering granules satisfying selected criteria. These criteria take inspiration from the minimal description length (MDL) principle proposed by Jorma Rissanen in 1983. In this paper, the role of approximation spaces in modeling compound granules satisfying such criteria is discussed. For example, in terms of approximation itself, this paper introduces an approach to function approximation in the context of a reinterpretation of the rough integral originally proposed by Zdzisław Pawlak in 1993. We also discuss some other examples of compound granule discovery problems that are related to compound granules representing process models and models of interaction between processes or approximation of trajectories of processes. All such granules should be discovered from data and domain knowledge. The contribution of this article is a proposed solution approach to evaluating perception that provides a basis for optimizing various tasks related to discovery of compound granules representing rough integrals, process models, their interaction, or approximation of trajectories of discovered models of processes.
EN
Learning classifier systems (LCS) are evolutionary learning mechanisms which combine a genetic algorithm with the reinforcement learning paradigm. Learning classifier systems try to evolve state-action-reward mappings in order to propose the best action for each environmental state and so maximize the achieved reward. In the first versions of learning classifier systems, state-action pairs could only be mapped to a constant real-valued reward, so to model a fairly complex environment, LCSs had to develop redundant state-action pairs mapped to different reward values. An extension of a well-known LCS, the Accuracy Based Learning Classifier System (XCS), was recently developed which is able to map state-action pairs to a linear reward function. This extension, called XCSF, can develop a more compact population than the original XCS. However, further research has shown that XCSF is not able to develop proper mappings when the input parameters come from certain intervals. As a solution to this issue, in our previous work we proposed an approach inspired by the idea of using an evolutionary method to approximate the reward landscape. The first results seemed promising, but our approach, called XCSFG, converged to the goal very slowly. In this paper, we propose a new extension of XCSFG which employs a micro-GA, whose required population is much smaller than that of a simple GA, so we expect the micro-GA to help XCSFG converge faster. The reported results show that this new extension can be considered an alternative approach in the XCSF family with respect to convergence speed, approximation accuracy, and population compactness.
15
Image denoising based on wavelet support vector regression
EN
Denoising is an important application of image processing. We have constructed a denoising system which learns an optimal mapping from the input data to denoised data. The Morlet wavelet was used as the kernel function to construct a wavelet support vector machine. The noisy image data are mapped to denoised values by wavelet support vector regression. The results show that denoising via wavelet support vector regression performs better than Gaussian smoothing, median filtering, and average filtering on the experimental image, and also better than support vector regression with a Gaussian radial basis function kernel.
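A sketch of support vector regression with a Morlet wavelet kernel, using the commonly cited form K(x, z) = prod_i cos(1.75*d_i/a) * exp(-d_i^2 / (2*a^2)) with d = x - z and a precomputed Gram matrix in scikit-learn. The dilation a, the toy 1-D signal, and the SVR hyperparameters are assumptions rather than the paper's settings; a 1-D smoothing task stands in here for image denoising.

```python
import numpy as np
from sklearn.svm import SVR

def morlet_kernel(X, Z, a=1.0):
    """Morlet wavelet kernel: product over dimensions of
    cos(1.75*d/a) * exp(-d^2 / (2*a^2)), with d = x_i - z_i."""
    D = X[:, None, :] - Z[None, :, :]
    return np.prod(np.cos(1.75 * D / a) * np.exp(-D ** 2 / (2 * a ** 2)), axis=-1)

# toy 1-D stand-in for denoising: smooth a noisy signal with wavelet-kernel SVR
rng = np.random.default_rng(1)
x = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
clean = np.sin(x).ravel()
noisy = clean + rng.normal(scale=0.2, size=clean.shape)

svr = SVR(kernel='precomputed', C=10.0, epsilon=0.05)
svr.fit(morlet_kernel(x, x), noisy)          # Gram matrix between training points
denoised = svr.predict(morlet_kernel(x, x))  # kernel between query and training points
```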
PL
This work characterizes incremental function approximation realized by neural networks with one hidden layer. The number of neurons in the hidden layer is selected dynamically during the approximation process; in each iteration the parameters of only one neuron are determined. The output layer is linear, and the best approximator is determined in it by orthogonal projection. The approximated functions come from a Hilbert space. During the approximation process two bases are determined and stored: a basis for the implementation and an auxiliary orthonormal basis spanning the same space. The following issues are considered: adaptive selection of the basis, objective functionals in incremental approximation, smoothing functions, the relation between the rate of approximation and the weights of the output layer, ordering of the basis in incremental approximation, vector field approximation, and regularization in incremental networks. The work formulates conditions for decreasing the error in every iteration and the condition for the fastest error decrease per iteration; it defines an approximation accuracy function and shows that its norm for successive basis functions fundamentally influences the output-layer weights, the rate of approximation, and the smoothness (non-oscillatory behavior) of the obtained solutions. A way of controlling this quantity is proposed. The considerations are constructive and lead to concrete, efficient approximation algorithms. The work includes application examples.
EN
In this thesis, incremental function approximation by one-hidden-layer neural networks is characterized. During the approximation process, the number of neurons in the hidden layer is selected dynamically; in each iteration only one neuron's parameters are tuned. The output layer is linear and determines the best approximation via orthogonal projection. Approximation is carried out in a Hilbert space. During the approximation process two bases are determined: a basis for implementation and an auxiliary orthonormal basis which spans the same space. The following topics are considered: adaptive selection of basis functions, target functionals, smoothing functions, the relation between the rate of approximation and the weight values in the output layer, ordering of the basis, vector field approximation, and regularization in incremental approximation. We formulate conditions for decreasing the error in every iteration and conditions for the fastest error decrease. An approximation accuracy function is defined, and it is proved that its norm for subsequent basis functions largely influences the output weights, the rate of approximation, and the smoothness (non-oscillatory behavior) of the obtained solutions. A way of controlling its value is suggested. Our considerations are constructive and end with efficient approximation algorithms. Examples of applications are included.
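The greedy structure described above (one new hidden neuron per iteration, output weights recomputed by orthogonal projection onto the span of the hidden-unit outputs) can be sketched as follows; fitting each neuron to the current residual with Nelder-Mead is our simplification, not the thesis's exact procedure.

```python
import numpy as np
from scipy.optimize import minimize

def incremental_fit(X, y, n_neurons=10):
    """Greedy incremental approximation: add one tanh neuron per iteration,
    fit its parameters to the current residual, then recompute the linear
    output weights by least squares (orthogonal projection)."""
    basis = []                                  # (weights, bias) of each hidden neuron
    H = np.empty((len(X), 0))                   # hidden-unit outputs collected so far
    residual = y.copy()
    for _ in range(n_neurons):
        def neg_corr(p):                        # maximize correlation with the residual
            h = np.tanh(X @ p[:-1] + p[-1])
            return -abs(h @ residual) / (np.linalg.norm(h) + 1e-12)
        p = minimize(neg_corr, np.random.randn(X.shape[1] + 1), method='Nelder-Mead').x
        basis.append(p)
        H = np.column_stack([H, np.tanh(X @ p[:-1] + p[-1])])
        out_w, *_ = np.linalg.lstsq(H, y, rcond=None)   # orthogonal projection onto span(H)
        residual = y - H @ out_w
    return basis, out_w

# toy usage
X = np.random.uniform(-2, 2, (200, 1)); y = np.sin(2 * X[:, 0])
basis, w = incremental_fit(X, y, n_neurons=8)
```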
17
Backpropagation versus dynamic programming approach
EN
Feedforward neural networks are usually used for function approximation [1]. This feature of this class of networks is explained in the paper by Cybenko [2]. In the literature we can find many different applications of neural networks as universal approximators [3], [4]. One of the most interesting applications (a far from trivial example) of neural networks is the reconstruction of attractors of chaotic time series. The analysis of time series is mainly based on the embedding theorem by Takens [5]. The commonly used algorithm for neural network learning, the backpropagation algorithm [6], is a gradient descent method for searching for a minimum of a function. For nonconvex functions (like the learning error function of feedforward neural networks) this kind of algorithm often stops in a local minimum. Even various modifications of this algorithm [9] still cannot avoid local minima. Up to now, in practice, the only way to find a near globally optimal solution has been to perform the computations several times with different initial weight values and to choose the best solution. In this work we propose a new global optimization algorithm for neural network learning, one that, at least theoretically, allows finding a global minimum of the learning error. The algorithm is based on dynamic programming [10], [11]: the learning of multilayer neural networks is considered as a special case of a multistage optimal control problem [12], in which layers are treated as stages and weights as controls. The problem of optimal weight adjustment is thus converted into a problem of optimal control. The multistage optimal control problem can be solved in various ways, e.g. by the application of dynamic programming [13]. Within the backpropagation framework, weights are tuned layer-by-layer and step-by-step to minimize the learning error. In the new algorithm, by contrast, for each layer, starting from the output layer, a return function is constructed first, and then this function is minimized with respect to the weights. This procedure is performed stage-by-stage, that is layer-by-layer.
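For reference, the baseline the paper argues against is plain gradient-descent backpropagation; a self-contained sketch for a one-hidden-layer network and squared error follows. The layer size, learning rate, and epoch count are arbitrary choices, and y is expected as a column vector.

```python
import numpy as np

def train_backprop(X, y, n_hidden=20, lr=0.05, epochs=3000, seed=0):
    """Plain gradient-descent backpropagation for a one-hidden-layer tanh network
    minimizing the mean squared error (the local-search baseline discussed above)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.5, size=(n_hidden, 1));          b2 = np.zeros(1)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)                  # forward pass, hidden layer
        out = H @ W2 + b2                         # forward pass, linear output
        err = out - y                             # gradient of squared error w.r.t. output
        dW2 = H.T @ err / len(X); db2 = err.mean(0)
        dH = err @ W2.T * (1 - H ** 2)            # backpropagate through tanh
        dW1 = X.T @ dH / len(X);  db1 = dH.mean(0)
        W2 -= lr * dW2; b2 -= lr * db2; W1 -= lr * dW1; b1 -= lr * db1
    return W1, b1, W2, b2

# toy usage: approximate y = sin(x) on [-3, 3]; note y has shape (n, 1)
X = np.linspace(-3, 3, 200).reshape(-1, 1); y = np.sin(X)
params = train_backprop(X, y)
```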
18
Reformulating Learning Vector Quantization and Radial Basis neural networks
EN
This paper proposes a framework for developing a broad variety of soft clustering and learning vector quantization (LVQ) algorithms based on gradient descent minimization of a reformulation function. According to the proposed axiomatic approach to learning vector quantization, the development of a specific algorithm reduces to the selection of a generator function. A linear generator function leads to the fuzzy c-means (FCM) and fuzzy LVQ (FLVQ) algorithms, while an exponential generator function leads to entropy-constrained fuzzy clustering (ECFC) and entropy-constrained LVQ (ECLVQ) algorithms. The reformulation of clustering and LVQ algorithms is also extended to supervised learning models through an axiomatic approach proposed for reformulating radial basis function (RBF) neural networks. This approach results in a broad variety of admissible RBF models, while the form of the radial basis functions is determined by a generator function. The paper shows that gradient descent learning makes reformulated RBF neural networks an attractive alternative to conventional feed-forward neural networks.
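For concreteness, the algorithm that the linear generator function reduces to is standard fuzzy c-means; the sketch below implements its usual alternating membership/prototype updates rather than the paper's reformulation-function derivation, and the fuzzifier m and iteration count are arbitrary.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy c-means: alternate membership and prototype updates
    (the FCM algorithm obtained from a linear generator function)."""
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), c, replace=False)]           # initial prototypes
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) + 1e-12   # squared distances
        U = 1.0 / (d2 ** (1 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)                  # memberships sum to 1 per point
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]           # membership-weighted prototypes
    return U, V

# toy usage: three well-separated blobs
X = np.vstack([np.random.default_rng(i).normal(loc=3 * i, size=(50, 2)) for i in range(3)])
U, V = fuzzy_c_means(X)
```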