Indirect measurements often amount to the estimation of the parameters of a mathematical model describing the object under investigation, and this estimation process may be numerically ill-conditioned. Various regularization techniques are used to improve the numerical conditioning. One recently proposed approach to regularizing nonlinear estimation is the iterative minimization (IM) method, which balances the systematic and random errors of an indirect measurement. The purpose of this study was to compare the selection of the regularization parameter by IM with the commonly used Marquardt method. Synthetically generated measurement data and the Monte Carlo method were used to this end. The performed simulations show that the IM algorithm has better metrological properties than the Marquardt algorithm.
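The Marquardt method referred to above adapts a damping parameter during a least-squares fit. A minimal sketch of this idea follows; it is an illustration only, not the implementation compared in the study, and the function names, the multiplicative update factors, and the stopping rule are all assumptions:

```python
import numpy as np

def levenberg_marquardt(f, jac, x0, y, lam=1e-2, n_iter=50):
    """Fit parameters x of model f to data y by a Marquardt-style iteration.

    The damping (regularization) parameter lam is adapted multiplicatively:
    decreased after an accepted step, increased after a rejected one.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = y - f(x)                      # residual vector
        J = jac(x)                        # Jacobian of f at x
        A = J.T @ J
        g = J.T @ r
        # Damped normal equations: (J^T J + lam * diag(J^T J)) step = J^T r
        step = np.linalg.solve(A + lam * np.diag(np.diag(A)), g)
        if np.sum((y - f(x + step)) ** 2) < np.sum(r ** 2):
            x = x + step                  # accepted: move toward Gauss-Newton
            lam *= 0.5
        else:
            lam *= 2.0                    # rejected: increase damping
    return x
```

As a usage example, the sketch recovers the parameters of an exponential decay `a * exp(-b * t)` from noiseless samples.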
2
Regularization parameter selection (RPS) is one of the most important tasks in solving inverse problems. The most common approaches seek the optimal regularization parameter (ORP) among a sequence of candidate values. However, these methods are often time-consuming, because the estimation process must be run for every candidate value, and they are usually restricted to certain problem types. In this paper, we propose a novel machine-learning-based prediction framework (MLBP) for the RPS problem. MLBP first generates a large number of synthetic examples by varying the inputs under different noise conditions. It then extracts some pre-defined features to represent the input data and computes the ORP of each synthetic example using the true models. The pairs of ORPs and extracted features form a training set, which is used to train a regression model describing the relationship between the ORP and the input data. For new practical inverse problems, MLBP can therefore predict the ORP directly with the pre-trained regression model, avoiding wasting computational resources on improper regularization parameters. The numerical results also show that MLBP requires significantly less computing time and provides more accurate solutions for different tasks than traditional methods. In particular, even though MLBP trains its regression model on synthetic data, it achieves satisfactory performance when applied directly to field data.
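The MLBP pipeline described above (synthetic examples, feature extraction, oracle ORP from the true model, regression) can be sketched in a toy form. Everything below is an illustrative assumption, not the paper's method: the inverse solver is plain ridge regression, the single hand-crafted feature is a crude noise-level proxy, and the "regression model" is a degree-2 polynomial fit.

```python
import numpy as np

rng = np.random.default_rng(0)

def ridge(A, b, lam):
    """Tikhonov/ridge estimate used as the underlying inverse solver."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

def oracle_lam(A, b, x_true, lams):
    """ORP: the candidate whose solution is closest to the known true model."""
    errs = [np.linalg.norm(ridge(A, b, l) - x_true) for l in lams]
    return lams[int(np.argmin(errs))]

# --- Training stage: synthetic data under varying noise conditions ---
n, lams = 10, np.logspace(-6, 2, 40)
A = np.vander(np.linspace(0, 1, n), n)          # a mildly ill-conditioned operator
x_true = np.ones(n)
features, targets = [], []
for sigma in np.logspace(-4, -1, 30):
    b = A @ x_true + sigma * rng.standard_normal(n)
    # Crude feature: a rough noise-level estimate from first differences of b.
    # (A real MLBP system would extract a richer pre-defined feature vector.)
    feat = np.log10(np.std(np.diff(b)) + 1e-12)
    features.append(feat)
    targets.append(np.log10(oracle_lam(A, b, x_true, lams)))

# "Regression model": here just a low-degree polynomial in the single feature.
coeffs = np.polyfit(features, targets, 2)

def predict_lam(b):
    """Predict an ORP for new data directly, with no grid search."""
    feat = np.log10(np.std(np.diff(b)) + 1e-12)
    return 10.0 ** np.polyval(coeffs, feat)
```

At prediction time the grid search over `lams` is skipped entirely, which is the source of the speed-up the abstract reports.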
3
Regularization techniques are used for computing stable solutions to ill-posed problems. The best-known form of regularization is that of Tikhonov, in which the regularized solution is sought as the minimizer of a weighted combination of the residual norm and a side constraint, with the weight controlled by the regularization parameter. For the practical choice of the regularization parameter one can use the L-curve approach, the U-curve criterion introduced by us [1], or the empirical risk method [2]. We present a comparative study of different strategies for choosing the regularization parameter on examples of function approximation by radial basis function neural networks. Such networks are universal approximators and can learn any nonlinear mapping, e.g. one representing a magnetic inverse problem. Some integral equations of the first kind are considered as well.
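The Tikhonov minimizer and an L-curve-style parameter choice can be sketched as follows. This is a minimal illustration under stated assumptions: the "corner" is approximated by minimizing the product of the residual and solution norms, a rough heuristic rather than the curvature-based criterion used in the literature, and the candidate grid is arbitrary.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov-regularized solution: argmin ||Ax - b||^2 + lam^2 ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

def l_curve_corner(A, b, lams):
    """Pick the candidate lam nearest the L-curve corner, approximated here
    as the point minimizing (residual norm) * (solution norm)."""
    rho, eta = [], []
    for lam in lams:
        x = tikhonov(A, b, lam)
        rho.append(np.linalg.norm(A @ x - b))   # residual norm
        eta.append(np.linalg.norm(x))           # side-constraint (solution) norm
    rho, eta = np.array(rho), np.array(eta)
    return lams[int(np.argmin(rho * eta))]
```

Increasing `lam` trades residual fit for a smaller solution norm; the corner heuristic picks a compromise between the two branches of the curve.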