An adaptive, precise peak-wavelength detection algorithm for fibre Bragg gratings based on a generative adversarial network is proposed. The algorithm consists of a generative model and a discriminative model. The generative model synthesises a signal, which is sampled for training with a deep neural network. The discriminative model predicts the real fibre Bragg grating signal through the calculation of loss functions. The maximum of the discriminative loss function is matched with the minimum of the generative loss function, and the desired peak wavelength of the fibre Bragg grating is thereby determined. The proposed algorithm is verified theoretically and experimentally for a single fibre Bragg grating peak, with an accuracy of ±0.2 pm. The algorithm is adaptive in the sense that any random fibre Bragg grating peak can be identified within a short wavelength range.
In this paper we discuss the introduction of a continuous-time assumption into the automobile insurance premium calculation system for quadratic and LINEX loss functions. This assumption corresponds to the situation in which information and premiums flow to the insurer continually. A short discussion of the "hunger for bonus" effect for a model with the quadratic loss function is also included. We then incorporate the continuous-time assumption into the classical bonus-malus system.
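As a hedged sketch of the two loss functions mentioned above (not the paper's model): under quadratic loss the Bayes premium is the posterior mean, while under one standard form of LINEX loss, L(d, t) = exp(a(d − t)) − a(d − t) − 1, the minimiser of expected loss is −(1/a)·log E[exp(−a·t)]. The gamma posterior below is purely illustrative.

```python
import numpy as np

def quadratic_premium(samples):
    """Bayes premium under quadratic loss: the posterior mean."""
    return np.mean(samples)

def linex_premium(samples, a):
    """Bayes premium under LINEX loss L(d, t) = exp(a(d - t)) - a(d - t) - 1.
    The minimiser of expected loss is -(1/a) * log E[exp(-a * t)]."""
    return -np.log(np.mean(np.exp(-a * samples))) / a

# Hypothetical posterior draws for the risk parameter (Gamma with mean 100).
rng = np.random.default_rng(0)
theta = rng.gamma(shape=2.0, scale=50.0, size=100_000)

print(quadratic_premium(theta))        # close to the posterior mean, ~100
print(linex_premium(theta, a=0.001))   # below the mean for a > 0 (by Jensen)
```

For a > 0 the LINEX premium lies below the posterior mean, reflecting an asymmetric penalty on overcharging; for a < 0 the ordering reverses.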
The paper discusses a method of determining the sample-division indicator for the switching regression model in the case of two states generating the values of the explained variable, a method which ensures the least risk of making a mistake, understood as the expected value of the relevant loss function. The paper is an attempt to apply elements of discriminant analysis to switching regression analysis.
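The decision rule implied by "least risk understood as expected loss" can be sketched as follows; the posterior probabilities and loss matrix here are hypothetical, not the paper's specification.

```python
import numpy as np

def assign_state(posterior, loss):
    """Assign an observation to the state with the least expected loss.

    posterior : (n_states,) posterior probabilities of each state
    loss      : (n_decisions, n_states); loss[d, s] = cost of deciding d in state s
    """
    expected = loss @ posterior     # expected loss of each decision
    return int(np.argmin(expected))

# Hypothetical asymmetric losses: missing state 1 is twice as costly.
loss = np.array([[0.0, 2.0],
                 [1.0, 0.0]])
print(assign_state(np.array([0.6, 0.4]), loss))  # -> 1, despite P(state 0) = 0.6
```

Note that with asymmetric losses the rule need not pick the most probable state, which is exactly where it departs from plain discriminant analysis.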
Deep neural networks are currently among the most commonly used classifiers. Beyond their strong performance, one of the best selling points of these models is their modular design: one can conveniently adapt the architecture to specific needs, change connectivity patterns, attach specialised layers, and experiment with a large number of activation functions, normalisation schemes and many other components. Yet while one finds an impressively wide spread of configurations for almost every aspect of deep nets, one element is, in the authors' opinion, underrepresented: when solving classification problems, the vast majority of papers and applications simply use log loss. In this paper we investigate how particular choices of loss function affect deep models and their learning dynamics, as well as the robustness of the resulting classifiers to various effects. We perform experiments on classical datasets and provide additional theoretical insights into the problem. In particular, we show that the L1 and L2 losses are, quite surprisingly, justified classification objectives for deep nets, by providing a probabilistic interpretation in terms of expected misclassification. We also introduce two losses that are not typically used as deep-net objectives and show that they are viable alternatives to the existing ones.
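The three losses named above can be compared on a single prediction; this is an illustrative sketch on one softmax output and one-hot target, not the paper's experimental setup.

```python
import numpy as np

def log_loss(p, y):
    """Cross-entropy between one-hot target y and predicted probabilities p."""
    return -np.sum(y * np.log(p + 1e-12))

def l1_loss(p, y):
    """Sum of absolute differences between prediction and target."""
    return np.sum(np.abs(p - y))

def l2_loss(p, y):
    """Sum of squared differences between prediction and target."""
    return np.sum((p - y) ** 2)

y = np.array([1.0, 0.0, 0.0])   # one-hot target
p = np.array([0.7, 0.2, 0.1])   # softmax output of a classifier

for f in (log_loss, l1_loss, l2_loss):
    print(f.__name__, f(p, y))
```

Unlike log loss, which depends only on the probability assigned to the true class, L1 and L2 also penalise how the remaining mass is spread over the wrong classes.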
We present a novel modification of the context encoder loss function, which results in more accurate and plausible inpainting. For this purpose, we introduce a gradient attention component into the loss function to suppress the common problem of inconsistency in shapes and edges between the inpainted region and its context. To this end, the mean absolute error is computed not only for the input and output images but also for their derivatives. The model therefore concentrates on areas with larger gradients, which are crucial for accurate reconstruction. Positive effects on inpainting results are observed for both fully-connected and fully-convolutional models tested on the MNIST and CelebA datasets.
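The idea of "MAE on the images and on their derivatives" can be sketched with finite differences; the weighting and the toy images below are assumptions, not the paper's exact formulation.

```python
import numpy as np

def gradient_attention_loss(out, target, w=1.0):
    """MAE on pixels plus MAE on horizontal/vertical finite differences,
    so errors along edges (areas of large gradient) incur an extra penalty."""
    pixel = np.mean(np.abs(out - target))
    dx = np.mean(np.abs(np.diff(out, axis=1) - np.diff(target, axis=1)))
    dy = np.mean(np.abs(np.diff(out, axis=0) - np.diff(target, axis=0)))
    return pixel + w * (dx + dy)

target = np.zeros((4, 4)); target[:, 2:] = 1.0  # image with a sharp vertical edge
flat = np.full((4, 4), 0.5)                     # blurred guess that misses the edge
print(gradient_attention_loss(flat, target))    # -> ~0.833 (0.5 pixel + 1/3 gradient)
```

A plain MAE would charge the flat guess only 0.5; the derivative term adds a cost specifically for failing to reproduce the edge.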
One of the requirements of the process approach is to identify methods and evaluation criteria for process measurement. Effectiveness, described as the ability to execute scheduled tasks and meet objectives, may serve as a measure for evaluating processes. The article presents several concepts of effectiveness indicators that can be used to assess activities carried out within the framework of the implementation of new projects, according to the APQP & PPAP guidelines. The paper proposes four concepts of indicators to assess the effectiveness of the above-described process, including an index based on the Taguchi loss function.
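The Taguchi loss function referred to above is the quadratic loss L(y) = k(y − T)². As a hedged sketch (the numbers are invented, and the paper's actual index may combine this with other terms), k is typically calibrated so that a deviation reaching the tolerance limit costs the known repair or replacement price:

```python
def taguchi_loss(y, target, k):
    """Taguchi quadratic loss L(y) = k * (y - target)^2."""
    return k * (y - target) ** 2

# Calibrate k so that a deviation of delta (the tolerance limit) costs A.
A, delta = 20.0, 0.5           # hypothetical: repair cost 20 at a +/-0.5 mm limit
k = A / delta ** 2             # -> 80
print(taguchi_loss(10.2, 10.0, k))  # -> 3.2, the cost of a 0.2 mm deviation
```

The point of the quadratic form is that any deviation from the target carries a cost, not only deviations outside the tolerance band.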
The exposure of Toyota management’s cover-up of its faulty car component problems raises a fundamental question: did Toyota management make an appropriate decision taking all uncertainties into account? Statistical decision theory is a framework with a probabilistic foundation, which admits random uncertainty about the real world and human thinking. In general, the uncertainty of the real world is diversified and therefore the effort of trying to deal with different forms of uncertainty with one special form of uncertainty, namely random uncertainty, may be oversimplified. In this paper, we introduce an axiomatic uncertain measure theoretical framework and explore the essential mechanism in formulating a general uncertainty decision theory. We expect that a new understanding of uncertainty and development of a corresponding new uncertainty decision-making approach may assist intelligence communities to survive and deal with the extremely tough and diverse aspects of an uncertain reality.
The paper presents the theoretical fundamentals of process cost analysis based on the loss function proposed by Taguchi. The components of the loss function are taken into consideration, as well as the cost of 100% quality inspection. The predicted cost of faulty products, the cost of products not accepted by customers, and the costs of selection and statistical monitoring are also estimated. An example illustrating the problem is provided.
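For a process whose output Y has mean μ and standard deviation σ, the expected Taguchi loss per item has the closed form E[k(Y − T)²] = k(σ² + (μ − T)²), which is the usual starting point for the kind of cost estimates described above. The process parameters below are hypothetical, chosen only to check the formula against simulation:

```python
import numpy as np

def expected_taguchi_loss(k, mu, sigma, target):
    """Expected quadratic loss per item: E[k(Y - T)^2] = k * (sigma^2 + (mu - T)^2)."""
    return k * (sigma ** 2 + (mu - target) ** 2)

# Hypothetical process: target 10.0, mean 10.1, st. dev. 0.2, calibrated k = 80.
analytic = expected_taguchi_loss(80.0, 10.1, 0.2, 10.0)   # 80 * (0.04 + 0.01) = 4.0
rng = np.random.default_rng(1)
y = rng.normal(10.1, 0.2, 1_000_000)
empirical = np.mean(80.0 * (y - 10.0) ** 2)
print(analytic, empirical)   # the Monte Carlo estimate agrees with the formula
```

The decomposition separates the cost of process spread (σ²) from the cost of being off-target ((μ − T)²), which is what makes the loss useful for deciding between tightening variance and re-centring the process.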
In this work, we propose the P3 Learning to Rank (P3LTR) model, a generalization of the RP3Beta graph-based recommendation method. In our approach, we learn the importance of user-item relations based on features that are usually available in online recommendations (such as types of user-item past interactions and timestamps). We keep the simplicity and explainability of RP3Beta predictions. We report the improvements of P3LTR over RP3Beta on the OLX Jobs Interactions dataset, which we published.
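The RP3Beta baseline that P3LTR generalises scores items by a three-step random walk user → item → user → item on the bipartite interaction graph, damped by item popularity raised to the power β. A minimal dense-matrix sketch (real implementations use sparse matrices, and the toy interaction matrix is invented):

```python
import numpy as np

def rp3beta_scores(interactions, beta=0.5):
    """RP3Beta: three-step random-walk probabilities user->item->user->item,
    divided by item popularity**beta. `interactions` is a binary user-item matrix."""
    ui = interactions / interactions.sum(axis=1, keepdims=True)  # user -> item step
    iu = interactions.T / interactions.sum(axis=0)[:, None]      # item -> user step
    walk = ui @ iu @ ui                                          # three-step walk
    scores = walk / interactions.sum(axis=0) ** beta             # popularity penalty
    scores[interactions > 0] = 0.0                               # drop already-seen items
    return scores

R = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1]], dtype=float)
print(rp3beta_scores(R))
```

The popularity exponent β is the model's only knob: β = 0 recovers the plain P3 walk, while larger β pushes recommendations toward the long tail. P3LTR replaces these fixed transition weights with weights learned from interaction features.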
The purpose of this paper is to determine one factor which represents the behaviour of the whole market on the basis of the rates of return of all equities traded on this market. In the seminal Sharpe model the factor is an exogenous variable which is not determined by the model itself. This paper extends Sharpe's idea, as it assumes that the factor is a linear combination of the rates of return of all traded equities. To determine the coefficients of this linear combination, we minimise the loss function which expresses the weighted mean square deviation of all rates of return from their predictions, given the linear-combination form of the market index. It is found that the vector of linear coefficients has to be a nonzero eigenvector associated with the maximal eigenvalue of the appropriately transformed and estimated covariance matrix. The optimal market index for the Warsaw Stock Exchange was compared with the standard index. It turns out that there is only a very small difference between the standard index of this market and the optimal index.
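The eigenvector construction above can be sketched directly: take the leading eigenvector of the estimated covariance matrix of returns as the index weights. This is an unweighted illustration on simulated data (the paper works with a transformed, weighted covariance matrix and real Warsaw Stock Exchange returns):

```python
import numpy as np

def optimal_index_weights(returns):
    """Index weights: the eigenvector of the sample covariance matrix of
    returns associated with its largest eigenvalue, normalised to sum to 1."""
    cov = np.cov(returns, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    w = eigvecs[:, -1]                       # leading eigenvector
    w = w if w.sum() >= 0 else -w            # fix the arbitrary sign
    return w / w.sum()

# Simulated market: one common factor plus idiosyncratic noise for 4 equities.
rng = np.random.default_rng(42)
market = rng.normal(0, 0.02, size=(500, 1))
returns = market + rng.normal(0, 0.005, size=(500, 4))
w = optimal_index_weights(returns)
print(w)   # roughly equal weights, since all equities load equally on the factor
```

When every equity loads equally on the common factor, the optimal index is close to an equal-weighted one, which is consistent with the paper's finding that the optimal and standard indices differ very little.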