Search results
Searched in keywords: maximum likelihood estimation
Results found: 17
EN
Time series models are a popular tool commonly used to describe time-varying phenomena. One of the most popular is the Gaussian AR model. However, when the data contain outlying observations with "large" values, Gaussian models are not a good choice. We therefore abandon the assumption of normality of the data distribution and propose an AR model based on the double Pareto distribution. We introduce estimators of the model's parameters obtained by the maximum likelihood method. For this purpose, we use the Maclaurin series expansion and the Chebyshev polynomial expansion of the likelihood function. We compare the results with the Yule-Walker estimator in the finite-variance case and with the modified Yule-Walker estimator in the infinite-variance case. The accuracy of the results obtained was checked by Monte Carlo simulations.
PL
Time series models are a popular tool commonly used to model time-varying phenomena. The most popular is the Gaussian AR model, which is stationary. However, when the data contain outlying observations with "large" values, Gaussian models are not a suitable modelling tool. We therefore abandon the assumption of normality of the data distribution and propose an AR model based on the double Pareto distribution. We present estimators of the model's parameters obtained by the maximum likelihood method. For this purpose, we use the Maclaurin series expansion and the Chebyshev polynomial expansion of the likelihood function, and we compare the results with the Yule-Walker estimator in the finite-variance case and with the modified Yule-Walker estimator in the infinite-variance case. The correctness of the obtained results was verified by Monte Carlo simulations.
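A minimal numerical sketch of the comparison described above, with Student-t innovations as a generic heavy-tailed stand-in for the double Pareto distribution (the paper's actual density, series expansions and estimators are not reproduced here): it contrasts a Yule-Walker-type estimate of the AR(1) coefficient with a conditional maximum likelihood estimate obtained by direct numerical optimization.

# Hedged sketch: AR(1) with heavy-tailed innovations; Yule-Walker-type estimate
# vs. numerically maximized conditional likelihood under the assumed noise law.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
phi_true, n = 0.6, 2000
eps = stats.t.rvs(df=2.5, size=n, random_state=rng)   # heavy-tailed stand-in noise
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + eps[t]

# Yule-Walker-type estimate (lag-1 autocovariance ratio)
phi_yw = np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)

# Conditional ML estimate under the assumed innovation density
def neg_loglik(phi):
    return -np.sum(stats.t.logpdf(x[1:] - phi * x[:-1], df=2.5))

phi_ml = optimize.minimize_scalar(neg_loglik, bounds=(-0.99, 0.99),
                                  method="bounded").x
print(phi_yw, phi_ml)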
EN
A new method of estimating the scale and shape parameters of the Weibull distribution is presented. According to this method, the Weibull-distributed time-to-failure (TTF) of a test item is measured m times: the item undergoes a minimal repair after each of the first m-1 failures and is put out of use after the m-th failure. This procedure is repeated n times. Based on the m TTFs of one test item, which are neither independent nor identically distributed (IID), the maximum likelihood estimators (MLE) of the scale and shape parameters, called m-sample estimators, are obtained. The accuracy of the m-sample estimators is low; however, it can be improved by using the mean values of their n IID realizations as more precise estimators. The latter, called n·m-sample estimators, have the same biases as the respective m-sample ones, but their variances are n times smaller. Interestingly enough, the n·m-sample estimators of the scale and shape parameters, as well as their biases, are given by relatively simple explicit formulas. This is somewhat unexpected in view of the fact that the standard MLE of the shape parameter, based on IID TTFs of non-repairable test items, is obtained from an equation that cannot be solved analytically.
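The construction above can be illustrated with a simulation sketch. Under minimal repair, the successive failure times of a Weibull item form a power-law (Crow-AMSAA) point process, whose failure-truncated maximum likelihood estimates of shape and scale have simple closed forms; whether these coincide exactly with the paper's m-sample estimators is an assumption here. The code simulates n independent test items, computes the per-item estimates, and averages them in the spirit of the n·m-sample estimators.

# Hedged sketch: minimal repair simulated as a power-law NHPP; per-item
# closed-form MLEs (Crow-AMSAA form, assumed) averaged over n test items.
import numpy as np

rng = np.random.default_rng(1)
beta_true, eta_true, m, n = 1.8, 100.0, 5, 200

def failure_times_one_item():
    # Time change: unit-rate Poisson arrivals mapped through the inverse
    # cumulative intensity Lambda^{-1}(u) = eta * u**(1/beta).
    e = np.cumsum(rng.exponential(1.0, size=m))
    return eta_true * e ** (1.0 / beta_true)

beta_hats, eta_hats = [], []
for _ in range(n):
    t = failure_times_one_item()
    beta_hat = m / np.sum(np.log(t[-1] / t[:-1]))      # shape MLE
    eta_hat = t[-1] / m ** (1.0 / beta_hat)            # scale MLE
    beta_hats.append(beta_hat)
    eta_hats.append(eta_hat)

print(np.mean(beta_hats), np.mean(eta_hats))           # averaged estimators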
EN
The north-east sector of the Himalaya is one of the most active tectonic belts, with complex geological and tectonic features. The b-value and spatial correlation dimension (Dc) of the earthquake distribution in the north-east Himalaya and its adjacent regions (20–32°N and 88–98°E) are estimated in the present study. Based on seismicity and faulting pattern, the region is divided into five active regions, namely the (i) South-Tibet, (ii) Eastern-Syntaxis, (iii) Himalayan-Frontal Arc, (iv) Arakan-Yoma belt and (v) Shillong-Plateau. A homogeneous catalogue of 1,416 earthquakes (mb ≥ 4.5) has been prepared from a revised catalogue of the ISC (International Seismological Centre). The b-value has been estimated by the maximum likelihood estimation method, while Dc values have been calculated by the correlation integral method; b-values of 1.08 ± 0.09, 1.13 ± 0.05, 0.92 ± 0.05, 1.00 ± 0.03 and 0.98 ± 0.08 have been computed for the South-Tibet, Eastern-Syntaxis, Himalayan-Frontal Arc, Arakan-Yoma belt and Shillong-Plateau regions, respectively. The Dc values computed for the respective regions are 1.36 ± 0.02, 1.74 ± 0.04, 1.57 ± 0.01, 1.8 ± 0.01, and 1.83 ± 0.02. These values are > 1.5, except for South-Tibet (1.36 ± 0.02). The b-values around the global average value (1.0) reflect the stress level and seismic activity of the regions, while high Dc values reflect the heterogeneity of the seismogenic sources.
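For reference, the maximum likelihood b-value estimator commonly used in such studies is the Aki/Utsu formula; a minimal sketch with a synthetic Gutenberg-Richter catalogue (the 0.1 magnitude binning and the catalogue itself are assumptions, not the study's data) follows.

# Aki/Utsu maximum likelihood b-value with Aki's standard error.
import numpy as np

def b_value_mle(mags, mc, dm=0.1):
    m = np.asarray(mags)
    m = m[m >= mc]                                  # completeness cut-off
    b = np.log10(np.e) / (m.mean() - (mc - dm / 2.0))
    return b, b / np.sqrt(m.size)                   # Aki (1965) uncertainty

# Synthetic catalogue: exponentially distributed magnitudes above mc (b near 1).
rng = np.random.default_rng(2)
mags = 4.5 + rng.exponential(1.0 / np.log(10.0), size=1416)
print(b_value_mle(mags, mc=4.5))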
EN
In order to effectively monitor the wear and predict the life of a cylinder liner, a nonlinear degradation model with multi-source uncertainty based on the Wiener process is established to evaluate the remaining useful life (RUL) of the cylinder liner under wear. Because of the complex service conditions of the cylinder liner, the uncertainty of the operational environment and working conditions is incorporated into the model through a random function. The probability density function (PDF) of the RUL is derived, and the maximum likelihood estimation method is adopted to estimate the unknown parameters of the PDF. Taking the estimated parameters as initial values, the model parameters are updated adaptively, and an adaptive PDF is obtained. Furthermore, the proposed model is compared with two classical degradation models. The results show that the proposed model performs well in predicting the life, with an error within 5%. The method can provide a reference for condition monitoring of cylinder liner wear.
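As a simplified illustration of the estimation step only (not the paper's nonlinear model with multi-source uncertainty or its adaptive updating), the drift and diffusion of a plain linear Wiener degradation path have closed-form maximum likelihood estimates based on the observed increments; for such a linear model the RUL to a fixed wear threshold then follows an inverse Gaussian law.

# Hedged sketch: closed-form MLEs of drift and diffusion for a linear Wiener
# degradation model X(t) = mu*t + sigma*B(t), from increments of one path.
import numpy as np

rng = np.random.default_rng(3)
mu_true, sigma_true = 0.05, 0.02              # hypothetical wear rate / volatility
t = np.linspace(0.0, 500.0, 101)              # inspection times
dt = np.diff(t)
dx = mu_true * dt + sigma_true * np.sqrt(dt) * rng.standard_normal(dt.size)

mu_hat = dx.sum() / dt.sum()                              # drift MLE
sigma2_hat = np.mean((dx - mu_hat * dt) ** 2 / dt)        # diffusion MLE
print(mu_hat, np.sqrt(sigma2_hat))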
EN
Estimation of model parameters of sequential order statistics under linear and nonlinear link function assumptions is considered. Utilizing the arising curved exponential family structure, conditions for existence and uniqueness as well as the validity of asymptotic properties of maximum likelihood estimators are stated. Minimal sufficiency and completeness of the associated canonical statistics are discussed.
EN
Magnetic resonance imaging (MRI) based T1 mapping allows spatially resolved quantification of the tissue-dependent spin-lattice relaxation time constant T1, which is a potential biomarker of various neurodegenerative diseases, including multiple sclerosis, Alzheimer's disease, and Parkinson's disease. In conventional T1 MR relaxometry, a quantitative T1 map is obtained from a series of T1-weighted MR images. Acquiring such a series, however, is time consuming. This has sparked the development of more efficient T1 mapping methods, one of which is a super-resolution reconstruction (SRR) framework in which a set of low resolution (LR) T1-weighted images is acquired and from which a high resolution (HR) T1 map is directly estimated. In this paper, the SRR T1 mapping framework is augmented with motion estimation. That is, motion between the acquisitions of the LR T1-weighted images is modeled and the motion parameters are estimated simultaneously with the T1 parameters. Based on Monte Carlo simulation experiments, we show that such an integrated motion/relaxometry estimation approach yields more accurate T1 maps compared to a previously reported SRR based T1 mapping approach.
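As a toy counterpart of the relaxometry step only (a single-voxel mono-exponential inversion-recovery fit, not the super-resolution reconstruction with motion estimation), assuming polarity-restored signed data and hypothetical inversion times:

# Toy voxel-wise T1 fit from an inversion-recovery series, assuming a
# mono-exponential model and polarity-restored (signed) data.
import numpy as np
from scipy.optimize import curve_fit

def ir_signal(ti, m0, t1):
    return m0 * (1.0 - 2.0 * np.exp(-ti / t1))

rng = np.random.default_rng(4)
ti = np.array([100.0, 300.0, 600.0, 1200.0, 2500.0, 5000.0])  # inversion times [ms]
t1_true, m0_true = 900.0, 1.0                                 # hypothetical tissue values
data = ir_signal(ti, m0_true, t1_true) + 0.01 * rng.standard_normal(ti.size)

(m0_hat, t1_hat), _ = curve_fit(ir_signal, ti, data, p0=[1.0, 1000.0])
print(t1_hat)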
EN
The problem of parameter estimation for a regression function that is polynomial in the input variables is formulated and solved. The input and output variables of the regression function are multidimensional matrices. The parameters of the regression function are assumed to be random, independent multidimensional matrices with a Gaussian distribution and known mean value and variance matrices. The solution to this problem is a multidimensional-matrix system of linear algebraic equations in the unknown multidimensional-matrix regression function parameters. We consider the particular cases of constant, affine and quadratic regression functions, for which we obtain formulas for parameter calculation. Computer simulation of the quadratic regression function is performed for two-dimensional matrix input and output variables.
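In the familiar case of ordinary (two-dimensional) matrices, the construction reduces to linear regression with a Gaussian prior on the parameters, whose estimate solves a linear algebraic system; a sketch of the affine case (variable names, prior and noise level are illustrative assumptions) follows.

# Ordinary-matrix analogue of the affine case: regression coefficients with a
# Gaussian prior N(mu0, Sigma0) and known noise variance s2; the estimate
# solves the linear system (Sigma0^-1 + X'X/s2) theta = Sigma0^-1 mu0 + X'y/s2.
import numpy as np

rng = np.random.default_rng(5)
n, d = 50, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, d))])  # affine design
theta_true = np.array([1.0, 0.5, -2.0, 0.3])
y = X @ theta_true + 0.1 * rng.standard_normal(n)

mu0, Sigma0, s2 = np.zeros(d + 1), np.eye(d + 1), 0.1 ** 2      # known prior / noise
A = np.linalg.inv(Sigma0) + X.T @ X / s2
b = np.linalg.inv(Sigma0) @ mu0 + X.T @ y / s2
print(np.linalg.solve(A, b))                                    # estimated parameters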
8. Real-time measurement of signal to noise ratio for harmonic signals
EN
The paper presents a new real-time method for measuring the ratio of sinusoidal signal power to noise power. The recursive estimation procedure was developed on the basis of the maximum-likelihood (ML) method with the use of the stochastic approximation technique. The performance of the algorithm was checked by means of numerical simulations, which revealed its high efficiency and low computational load, enabling its use in real-time systems.
PL
The article presents a new method for real-time determination of the signal-to-noise ratio for harmonic signals. The recursive estimation procedure was developed using the maximum likelihood estimator together with the stochastic approximation method. Simulation results confirmed the high efficiency of the algorithm and its low computational load.
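A hedged sketch in the same spirit (not the paper's algorithm): for a sinusoid of known frequency, the in-phase/quadrature amplitudes and the residual noise power can be tracked with stochastic-approximation-style recursions, and the SNR is formed as a ratio of the resulting power estimates.

# Sketch: recursive SNR tracking for a sinusoid of known frequency using
# LMS-type (stochastic approximation) updates; illustrative only.
import numpy as np

rng = np.random.default_rng(6)
fs, f0, n = 1000.0, 50.0, 5000
t = np.arange(n) / fs
x = 1.0 * np.sin(2 * np.pi * f0 * t) + 0.3 * rng.standard_normal(n)

a = b = p_noise = 0.0
gamma = 0.01                                    # step size
for k in range(n):
    s, c = np.sin(2 * np.pi * f0 * t[k]), np.cos(2 * np.pi * f0 * t[k])
    e = x[k] - (a * s + b * c)                  # prediction error
    a += 2 * gamma * e * s                      # amplitude updates
    b += 2 * gamma * e * c
    p_noise += gamma * (e ** 2 - p_noise)       # recursive noise power
print(10 * np.log10((a ** 2 + b ** 2) / 2.0 / p_noise))   # SNR estimate [dB]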
9. Asymptotics of Monte Carlo maximum likelihood estimators
EN
We describe a Monte Carlo approximation to the maximum likelihood estimator in models with intractable norming constants and explanatory variables. We consider both sources of randomness (due to the initial sample and to the Monte Carlo simulations) and prove asymptotic normality of the estimator.
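In the Geyer-style setting referred to here, with densities known only up to a norming constant, f_\theta(x) = \tilde f_\theta(x)/c(\theta), the Monte Carlo approximation of the log-likelihood replaces c(\theta) by an importance-sampling average (covariates suppressed for brevity; the exact formulation is the paper's):

$$
\hat\ell_m(\theta)=\sum_{i=1}^{n}\log\tilde f_\theta(x_i)
  \;-\; n\log\!\Big(\frac{1}{m}\sum_{j=1}^{m}\frac{\tilde f_\theta(Y_j)}{\tilde f_\psi(Y_j)}\Big),
  \qquad Y_1,\dots,Y_m\sim f_\psi ,
$$

so the maximizer of \hat\ell_m inherits randomness both from the data sample and from the Monte Carlo sample, which is exactly the joint asymptotics considered in the abstract.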
EN
The generalized Gaussian distribution (GGD) includes special cases when the shape parameter equals p = 1 and p = 2, corresponding to the Laplacian and Gaussian distributions, respectively. For p → ∞, f(x) becomes a uniform distribution, and for p → 0, f(x) approaches an impulse function. Chapeau-Blondeau et al. considered another special case, p = 0.5. This article discusses a more peaked case of the GGD, with p = 1/3.
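For reference, in a common parametrization (the article's may differ by an equivalent rescaling) the GGD density with location \mu, scale a > 0 and shape p > 0 is

$$ f(x)=\frac{p}{2a\,\Gamma(1/p)}\exp\!\left(-\left(\frac{|x-\mu|}{a}\right)^{p}\right), $$

which gives the Laplacian density for p = 1 and the Gaussian density for p = 2; the case p = 1/3 is considerably more peaked at \mu and heavier-tailed than either.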
11. Modelling air cargo market – a gravity method
EN
The gravity method and its extensions in transport economics, analogous to Newton's law of gravity, are not the most modern approach to demand estimation. They are, however, simple and applicable to publicly available data such as EUROSTAT or national statistics sources. The main assumption is that trips (or, in this case, flows of cargo) are produced at an origin and "attracted" to a destination according to some identifiable pattern. It is possible to suggest – but not to ensure, as in Newtonian physics – these patterns, provided a complete list of socioeconomic data (GDP, population, ...) or other quantifiable conditions in the catchment of all locations (nodes) is combined with a series of corresponding flow volumes using a non-linear equation. Each variable on the explanatory side of the equation is equipped with a weight (in econometrics called a parameter). Due to the non-linear form of the equation, the parameters are found by an iterative calculation that is typically performed by maximum likelihood estimation (popularized by Ronald Fisher). The validity of the calculations is checked using statistical measures of the proportionate reduction in uncertainty.
PL
The gravity method and its variants in transport economics, which are analogous to Newton's universal law of gravitation, are not the newest tool for demand estimation. They are, however, simple and can be applied using publicly available data such as those from EUROSTAT or other national sources. The main assumption of the method is that trips (or levels of freight flows) are generated at some locations and are "attracted" to other locations according to an identifiable pattern. It is possible to propose – but not to establish, as in Newtonian physics – these patterns, provided that a complete list of socioeconomic data (GDP, population, ...) or other quantifiable conditions within the catchment area of all locations (nodes) is available and is linked with the corresponding flow volumes by a non-linear function. Each of the variables on the side of the equation that describes the phenomenon (the flows) is equipped with a weight (called a parameter in econometrics). Because of the non-linear form of the function, the parameters are computed iteratively, most often by the maximum likelihood method. The goodness of fit of the model is checked with statistical methods examining the proportionate reduction in uncertainty.
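One standard way to obtain maximum likelihood estimates of such a gravity equation is to write it in log-linear form and fit it as a Poisson regression; the sketch below uses synthetic origin/destination data (variable names, values and the Poisson specification are illustrative assumptions, not EUROSTAT figures or the chapter's exact model).

# Sketch: gravity model T_ij = k * GDP_i^a * GDP_j^b / dist_ij^g, fitted by
# Poisson (log-linear) maximum likelihood on synthetic, illustrative data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n_pairs = 400
gdp_o = rng.uniform(50, 500, n_pairs)        # origin "mass" (hypothetical units)
gdp_d = rng.uniform(50, 500, n_pairs)        # destination "mass"
dist = rng.uniform(200, 5000, n_pairs)       # distance between nodes [km]

log_mu = 2.0 + 0.8 * np.log(gdp_o) + 0.7 * np.log(gdp_d) - 1.1 * np.log(dist)
flows = rng.poisson(np.exp(log_mu))          # observed cargo flows

X = sm.add_constant(np.column_stack([np.log(gdp_o), np.log(gdp_d), np.log(dist)]))
fit = sm.GLM(flows, X, family=sm.families.Poisson()).fit()
print(fit.params)                            # ln k, a, b, -g

Likelihood-based pseudo-R² measures then quantify the proportionate reduction in uncertainty mentioned above.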
12. Estimation based on sequential order statistics with random removals
EN
Suppose that n individuals are put on test in an experiment. Each failure is accompanied by a fixed number of removals. The experiment terminates after r (≤ n) failures. An explicit expression for the likelihood function of the available progressive sequential order statistics (PSOS) data is proposed. Under the conditional proportional hazard rate (CPHR) model, the maximum likelihood (ML) estimates of the parameters are derived. Under the CPHR model and the assumption that the baseline distribution belongs to the Weibull family of distributions, the existence and uniqueness of the ML estimates are investigated. Moreover, two general classes of lifetime distributions, as extensions of the Weibull distribution, are studied in more detail. An algorithm for generating PSOS data under the CPHR model is proposed. Finally, some concluding remarks are given.
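For orientation, the likelihood of ordinary progressively Type-II censored data – a simpler relative of the PSOS scheme with random removals considered here – has the familiar form

$$ L(\theta)\;\propto\;\prod_{i=1}^{r} f(x_i;\theta)\,\bigl[1-F(x_i;\theta)\bigr]^{R_i}, $$

where R_i is the number of items removed at the i-th failure; the explicit PSOS likelihood proposed in the paper generalizes this to sequential order statistics under the CPHR model.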
13. A Resilience Parameter Model Generated by a Compound Distribution
EN
In this paper, we attempt to extend the generalized exponential geometric distribution of Silva et al. [1]. The new four-parameter distribution also generalizes the Weibull-geometric distribution of Barreto-Souza et al. [2], the exponentiated Weibull, and several other lifetime distributions as special cases. A useful characteristic of the new distribution is that its failure rate function can take different shapes. We first study certain basic distributional properties of the new distribution and provide closed-form expressions for its moment generating function and moments. General expressions are also obtained for the order statistics densities and the stress-strength parameter. Our findings encompass several known results as special cases. The model parameters are estimated by the maximum likelihood method and the Fisher information matrix is discussed. Finally, the model is applied to a real data set and its advantage over some rival models is illustrated.
EN
This article investigates the identification of aircraft aerodynamic derivatives. The identification is performed on the basis of the parameters stored by the Flight Data Recorder. The problem is solved in the time domain by the Quad-M method. The aircraft dynamics are described by a parametric model defined in the body-fixed coordinate system. Identification of the aerodynamic derivatives is obtained by maximum likelihood estimation. The Levenberg-Marquardt algorithm is used to find the minimum of the cost function. Additional effects due to process noise are included in the state-space representation. The impact of initial values on the solution is discussed. The presented method was implemented in the Matlab R2009b environment.
PL
The article presents information on the identification of aerodynamic derivatives. The estimation is based on the parameters recorded by the Flight Data Recorder. The problem is considered in the time domain using the Quad-M approach. A parametric model defined in the body-fixed coordinate system is used to describe the aircraft dynamics. The Maximum Likelihood Method is used for the identification. The Levenberg-Marquardt algorithm is used to find the minimum of the cost function. The model takes into account the influence of additional factors represented by process noise. The influence of the initial values on the solution is discussed. The presented results were obtained in the Matlab R2009b environment.
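A hedged, much-simplified sketch of the output-error step: with Gaussian measurement noise of known covariance, maximizing the likelihood amounts to minimizing a weighted residual sum of squares, which a Levenberg-Marquardt solver handles directly. A first-order model x' = a x + b u stands in for the aircraft dynamics; the variable names and data are illustrative, not Flight Data Recorder channels.

# Sketch of output-error ML identification: simulate the candidate model,
# form residuals against the recorded output, minimize with Levenberg-Marquardt.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(8)
dt, n = 0.05, 400
u = np.sign(np.sin(0.5 * np.arange(n) * dt))          # doublet-like input
a_true, b_true = -1.2, 0.8

def simulate(a, b):
    x = np.zeros(n)
    phi = np.exp(a * dt)                              # ZOH-exact discretization
    for k in range(n - 1):
        x[k + 1] = phi * x[k] + (b / a) * (phi - 1.0) * u[k]
    return x

z = simulate(a_true, b_true) + 0.02 * rng.standard_normal(n)   # "recorded" output
res = least_squares(lambda p: simulate(p[0], p[1]) - z,
                    x0=[-0.5, 0.5], method="lm")
print(res.x)                                          # estimated a, b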
15. Lifetime distributions with wave-like bathtub hazard
EN
In this paper, we argue the necessity of dealing with lifetime distributions with wave-like bathtub hazard function. Four classes of wave-like bathtub hazards are investigated. For preparing maximum likelihood estimation of the hazard parameters, the first-order and second-order partial derivatives are derived.
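The estimation prepared for here rests on the standard hazard-based form of the likelihood: for a complete sample t_1, ..., t_n with hazard h(t;\theta) and cumulative hazard H(t;\theta),

$$ \ell(\theta)=\sum_{i=1}^{n}\bigl[\log h(t_i;\theta)-H(t_i;\theta)\bigr], $$

so the first- and second-order partial derivatives of h and H with respect to \theta are precisely what a Newton-type maximization of \ell requires (right-censored observations contribute only the -H(t_i;\theta) term).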
PL
The historical development of methods for the analysis of discrete (categorical) data is presented, divided into models in which a response variable is explicitly distinguished and models in which it is not. The focus is not only on issues related to building the model itself, but also on its estimation and verification. Within these topics (model building, estimation and verification), the drawbacks of the approaches and the historical attempts to overcome them are highlighted. The problem of heterogeneity of observations is then taken up, with ways of dealing with it indicated. The discussion of the practical use of discrete data analysis methods is limited to marketing applications.
EN
The paper presents the historical development of categorical data analysis, covering models in which a response variable is explicitly defined as well as models without such a distinction. Besides difficulties in model building, we focus on methods and procedures for model testing and for the estimation of model parameters. Within these issues we emphasize the drawbacks of the models and historical attempts to overcome them. The problem of data heterogeneity and methods that help to handle it are also considered. The discussion of the practical usefulness of categorical data analysis is limited to marketing problems.
17
EN
A law of the iterated logarithm is established for the maximum likelihood estimator of the unknown parameter of the explosive Gaussian autoregressive process. Outside the Gaussian case, we show that the law of the iterated logarithm does not hold, except under a suitable averaging of the maximum likelihood estimator.
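For the explosive Gaussian AR(1) process X_t = \theta X_{t-1} + \varepsilon_t with |\theta| > 1, the maximum likelihood estimator in question coincides with the least squares estimator,

$$ \hat\theta_n=\frac{\sum_{t=1}^{n}X_{t-1}X_t}{\sum_{t=1}^{n}X_{t-1}^{2}}, $$

and it is for this estimator that the law of the iterated logarithm is established in the Gaussian case.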