Search results
Searched in keywords: loss function
Results found: 9
EN
An adaptive and precise peak wavelength detection algorithm for fibre Bragg gratings based on a generative adversarial network is proposed. The algorithm consists of a generative model and a discriminative model. The generative model produces a synthetic signal, which is sampled to train a deep neural network. The discriminative model predicts the real fibre Bragg grating signal by evaluating the loss functions. The maxima of the loss function of the discriminative signal are matched with the minima of the loss function of the generative signal, and the desired peak wavelength of the fibre Bragg grating is determined. The proposed algorithm is verified theoretically and experimentally for a single fibre Bragg grating peak, with an accuracy of ±0.2 pm. The algorithm is adaptive in the sense that any random fibre Bragg grating peak can be identified within a short wavelength range.
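As an illustration only (not the paper's GAN architecture), a minimal sketch of loss-based peak detection: a synthetic Gaussian peak is generated for each candidate Bragg wavelength and the candidate minimising the mean-squared loss against the measured spectrum is returned. The Gaussian peak model, function names, and scan grid are assumptions.

import numpy as np

def gaussian_peak(wavelengths, centre, width=0.2, amplitude=1.0):
    # Synthetic FBG-like reflection peak (assumed Gaussian model).
    return amplitude * np.exp(-0.5 * ((wavelengths - centre) / width) ** 2)

def detect_peak_wavelength(wavelengths, measured, candidates):
    # Return the candidate centre wavelength minimising the MSE loss
    # between the synthetic and the measured spectrum.
    losses = [np.mean((gaussian_peak(wavelengths, c) - measured) ** 2)
              for c in candidates]
    return candidates[int(np.argmin(losses))]

# Usage: a noisy peak near 1550.0 nm, scanned on a 1 pm candidate grid.
wl = np.linspace(1548.0, 1552.0, 2001)
signal = gaussian_peak(wl, 1550.0) + np.random.normal(0, 0.01, wl.size)
print(detect_peak_wavelength(wl, signal, np.arange(1549.0, 1551.0, 0.001)))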
EN
This article describes the application of Convolutional Neural Networks in image processing and explains how they work. It presents the network layers, the types of activation functions, the AlexNet architecture as an example, the use of the loss function and the cross-entropy method to compute the loss during testing, the L2 and Dropout methods used for weight regularization, and the optimization of the loss function with Stochastic Gradient Descent.
PL
This article describes the application of Convolutional Neural Networks in image processing. To make the topic easier to follow, the way such networks operate is explained. Multi-layer networks, types of activation functions, and the AlexNet architecture are presented as an example. The article focuses on the use of the loss function and the cross-entropy method to compute the loss during testing. It also describes the L2 and Dropout weight-regularization techniques and the optimization of the loss function with Stochastic Gradient Descent.
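A minimal numpy sketch of the building blocks named above (softmax cross-entropy loss, L2 weight penalty, dropout mask, and one stochastic gradient descent step); the single linear layer and all variable names are illustrative assumptions, not the AlexNet configuration discussed in the article.

import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    # Mean cross-entropy loss for integer class labels.
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def dropout(x, rate=0.5, training=True):
    # Randomly zero activations during training (inverted dropout).
    if not training:
        return x
    mask = (np.random.rand(*x.shape) >= rate) / (1.0 - rate)
    return x * mask

# One SGD step with L2 regularization on a toy linear classifier.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(32, 64)), rng.integers(0, 10, size=32)
W, lr, l2 = rng.normal(scale=0.01, size=(64, 10)), 0.1, 1e-4

Xd = dropout(X, 0.2)
probs = softmax(Xd @ W)
loss = cross_entropy(probs, y) + l2 * np.sum(W ** 2)
grad = Xd.T @ (probs - np.eye(10)[y]) / len(y) + 2 * l2 * W
W -= lr * grad   # stochastic gradient descent update
print(loss)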
3
Content available Image Inpainting with Gradient Attention
EN
We present a novel modification of the context encoder loss function, which results in more accurate and plausible inpainting. For this purpose, we introduce a gradient attention component of the loss function to suppress the common problem of inconsistency in shapes and edges between the inpainted region and its context. To this end, the mean absolute error is computed not only for the input and output images, but also for their derivatives. The model therefore concentrates on areas with larger gradients, which are crucial for accurate reconstruction. The positive effects on inpainting results are observed for both fully-connected and fully-convolutional models tested on the MNIST and CelebA datasets.
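A hedged sketch of the core idea: mean absolute error computed on both the images and their spatial derivatives (here simple finite differences). The weighting factor and function names are assumptions, not the exact context-encoder loss from the paper.

import numpy as np

def spatial_gradients(img):
    # Finite-difference derivatives along height and width.
    return np.diff(img, axis=0), np.diff(img, axis=1)

def gradient_attention_l1(output, target, weight=1.0):
    # MAE on pixels plus MAE on image derivatives.
    pixel_loss = np.mean(np.abs(output - target))
    (oy, ox), (ty, tx) = spatial_gradients(output), spatial_gradients(target)
    grad_loss = np.mean(np.abs(oy - ty)) + np.mean(np.abs(ox - tx))
    return pixel_loss + weight * grad_loss

# Usage on two random 28x28 "images" (e.g. MNIST-sized).
a, b = np.random.rand(28, 28), np.random.rand(28, 28)
print(gradient_attention_l1(a, b))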
4
Content available remote On Loss Functions for Deep Neural Networks in Classification
EN
Deep neural networks are currently among the most commonly used classifiers. Besides easily achieving very good performance, one of their best selling points is their modular design: one can conveniently adapt the architecture to specific needs, change connectivity patterns, attach specialised layers, and experiment with a large number of activation functions, normalisation schemes, and many other options. While one can find an impressively wide spread of configurations of almost every aspect of deep nets, one element is, in the authors' opinion, underrepresented: when solving classification problems, the vast majority of papers and applications simply use the log loss. In this paper we investigate how particular choices of loss functions affect deep models and their learning dynamics, as well as the robustness of the resulting classifiers to various effects. We perform experiments on classical datasets and provide some additional theoretical insights into the problem. In particular, we show that L1 and L2 losses are, quite surprisingly, justified classification objectives for deep nets, by providing a probabilistic interpretation in terms of expected misclassification. We also introduce two losses which are not typically used as deep net objectives and show that they are viable alternatives to the existing ones.
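For illustration only, a small numpy comparison of the three objectives discussed (log loss, and L1 / L2 losses applied to the softmax output against the one-hot target); the toy probabilities and labels are assumptions.

import numpy as np

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.3, 0.6]])          # softmax outputs
onehot = np.eye(3)[[0, 2]]                   # true classes 0 and 2

log_loss = -np.mean(np.sum(onehot * np.log(probs), axis=1))
l1_loss = np.mean(np.sum(np.abs(probs - onehot), axis=1))
l2_loss = np.mean(np.sum((probs - onehot) ** 2, axis=1))
print(log_loss, l1_loss, l2_loss)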
EN
One of the requirements of the process approach is to identify methods and evaluation criteria for process measurement. Effectiveness, understood as the ability to accomplish planned tasks and objectives, can serve as a measure for evaluating processes. The article presents several concepts of effectiveness indicators that can be used to assess the activities carried out when implementing new projects according to the APQP & PPAP guidelines. The paper proposes four such indicators for assessing the effectiveness of the above-described process, including an index based on the Taguchi loss function.
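For reference, the quadratic Taguchi loss function underlying such an indicator, with y the measured characteristic, m its target value, and k a cost coefficient (a standard formulation, not a detail quoted from the article):

L(y) = k (y - m)^2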
6
Content available Decision theory under general uncertainty
EN
The exposure of Toyota management’s cover-up of its faulty car component problems raises a fundamental question: did Toyota management make an appropriate decision taking all uncertainties into account? Statistical decision theory is a framework with a probabilistic foundation, which admits random uncertainty about the real world and human thinking. In general, the uncertainty of the real world is diversified and therefore the effort of trying to deal with different forms of uncertainty with one special form of uncertainty, namely random uncertainty, may be oversimplified. In this paper, we introduce an axiomatic uncertain measure theoretical framework and explore the essential mechanism in formulating a general uncertainty decision theory. We expect that a new understanding of uncertainty and development of a corresponding new uncertainty decision-making approach may assist intelligence communities to survive and deal with the extremely tough and diverse aspects of an uncertain reality.
7
Content available remote Continuous Time Assumption in Insurance Premium Calculation and Bonus-Malus System
EN
In this paper we discuss the introduction of a continuous-time assumption into the automobile insurance premium calculation system for the quadratic and LINEX loss functions. This assumption corresponds to the situation in which information and premiums flow to the insurer continuously. A short discussion of the "hunger for bonus" effect for a model with the quadratic loss function is also included. We then incorporate the continuous-time assumption into the classical bonus-malus system.
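For reference, the two loss functions mentioned, written for the estimation error Δ = d − θ (standard forms, not quoted from the paper): the quadratic loss L(Δ) = Δ^2 and the asymmetric LINEX loss

L(Δ) = b (e^{aΔ} - aΔ - 1),  a ≠ 0, b > 0,

which penalises over- and under-estimation differently depending on the sign of a.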
EN
The paper presents the theoretical fundamentals of process cost analysis based on the loss function proposed by Taguchi. The components of the loss function are taken into consideration, as well as the cost of 100% quality inspection. The (predicted) cost of faulty products, of products not accepted by customers, and the costs of sorting and statistical monitoring are also estimated. An example illustrating the problem is provided.
PL
The paper presents the theoretical foundations of quality-assurance cost analysis based on the loss function proposed by Taguchi. The cost components of statistical process control and of 100% product inspection are considered. The costs reflecting the expected number of defective items, the costs of products not accepted by customers, the costs of sorting, and the costs of statistical monitoring are taken into account. The significance of the method is illustrated with a computational example.
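A minimal sketch (an assumption, not the article's full cost model) of the expected quality-loss cost per unit under the quadratic Taguchi function, which reduces to k(σ² + (μ − m)²) for a process with mean μ and standard deviation σ:

def expected_taguchi_loss(k, mean, std, target):
    # Expected quadratic quality loss per unit: k * (sigma^2 + (mean - target)^2).
    return k * (std ** 2 + (mean - target) ** 2)

# Usage: cost coefficient 2.5 per squared unit, process mean 10.1, sigma 0.2, target 10.0.
print(expected_taguchi_loss(2.5, 10.1, 0.2, 10.0))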
EN
The purpose of this paper is to determine a single factor that represents the behaviour of the whole market on the basis of the rates of return of all equities traded on this market. In the seminal Sharpe model the factor is an exogenous variable, which is not determined by the model itself. This paper extends Sharpe's idea by assuming that the factor is a linear combination of the rates of return of all traded equities. To determine the coefficients of this linear combination, we minimise a loss function that expresses the weighted mean square deviation of all rates of return from their predictions, given the linear-combination form of the market index. It is found that the vector of linear coefficients has to be a nonzero eigenvector associated with the maximal eigenvalue of the appropriately transformed and estimated covariance matrix. The optimal market index for the Warsaw Stock Exchange was compared with the standard index; it turns out that the difference between the standard index of this market and the optimal index is very small.
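A hedged numpy sketch of the construction described: the index weights are taken as the eigenvector of the estimated covariance matrix of returns associated with its largest eigenvalue. The plain (unweighted, untransformed) sample covariance, the simulated return matrix, and the sign convention are assumptions, not the paper's exact transformation.

import numpy as np

def optimal_index_weights(returns):
    # Eigenvector of the return covariance matrix for the largest eigenvalue.
    cov = np.cov(returns, rowvar=False)           # assets in columns
    eigenvalues, eigenvectors = np.linalg.eigh(cov)
    w = eigenvectors[:, np.argmax(eigenvalues)]
    return w if w.sum() >= 0 else -w              # unit-norm weights, sign fixed

# Usage on simulated daily returns for 5 equities over 250 sessions.
r = np.random.default_rng(1).normal(0.0, 0.01, size=(250, 5))
print(optimal_index_weights(r))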