Regularization techniques are used to compute stable solutions to ill-posed problems. The best-known form of regularization is Tikhonov's, in which the regularized solution is sought as the minimiser of a weighted combination of the residual norm and a side constraint, controlled by the regularization parameter. For the practical choice of the regularization parameter one can use the L-curve approach, the U-curve criterion introduced by us [1], and the empirical risk method [2]. We present a comparative study of different strategies for choosing the regularization parameter on examples of function approximation by radial basis function neural networks. Such networks are universal approximators and can learn any nonlinear mapping, e.g. one representing a magnetic inverse problem. Some integral equations of the first kind are considered as well.
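As a minimal illustration of the setting described above (not the authors' RBF-network experiments), the following sketch computes the Tikhonov-regularized solution for a linear problem with the identity as side constraint, and tabulates the residual-norm / solution-norm pairs that form the L-curve; the function names and the choice of candidate parameters are illustrative assumptions.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    # Minimiser of ||A x - b||^2 + lam^2 * ||x||^2 with identity side
    # constraint, obtained from the regularized normal equations
    # (A^T A + lam^2 I) x = A^T b.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

def l_curve_points(A, b, lambdas):
    # For each candidate regularization parameter, record the pair
    # (residual norm, solution norm); plotted on a log-log scale these
    # points trace the L-curve, whose "corner" balances the two terms.
    points = []
    for lam in lambdas:
        x = tikhonov_solve(A, b, lam)
        points.append((np.linalg.norm(A @ x - b), np.linalg.norm(x)))
    return points

if __name__ == "__main__":
    # A nearly rank-deficient matrix stands in for an ill-posed problem.
    A = np.array([[1.0, 1.0], [1.0, 1.0001], [1.0, 0.9999]])
    b = np.array([2.0, 2.0001, 1.9999])
    for lam, (res, sol) in zip(
        [1e-6, 1e-3, 1e-1, 1.0], l_curve_points(A, b, [1e-6, 1e-3, 1e-1, 1.0])
    ):
        print(f"lambda={lam:8.1e}  residual={res:.3e}  ||x||={sol:.3e}")
```

As the parameter grows, the residual norm increases while the solution norm shrinks; parameter-choice rules such as the L-curve and U-curve criteria exploit this trade-off.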