Results found: 19

Search results
Searched for:
in keywords: maximum likelihood
EN
Urban land-cover change is increasing dramatically in most emerging countries, including Iraq and its capital city, Baghdad. Active socioeconomic progress and political stability have pushed the urban boundary into the countryside at the cost of natural ecosystems at ever-growing rates. The widely used Maximum Likelihood classifier was applied to 2003 and 2021 Landsat images, achieving overall accuracies of 83.20% and 99.58%, respectively. This study found that the urban area decreased by 16.4% and the agricultural area by 5.4% over the period. On the other hand, barren land expanded by more than 7%, and the water area also increased (by almost 15% compared with 2003), probably due to flooding. To reduce the undesirable effects of land-cover changes on urban ecosystems in Baghdad, and in the municipality in particular, it is suggested that Baghdad develop an urban development policy. The emphasis of this policy must be on maintaining an acceptable balance among urban infrastructure development, ecological sustainability, and agricultural production.
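The per-pixel Maximum Likelihood classifier used for the Landsat scenes above can be sketched compactly: fit a Gaussian (mean and covariance) to the training pixels of each class, then assign each pixel to the class with the highest log-likelihood. The band values, class names, and training samples below are invented for illustration only.

```python
import numpy as np

def fit_gaussian_ml(train_per_class):
    """Estimate mean, inverse covariance and log-det for each class."""
    params = {}
    for label, X in train_per_class.items():
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularized
        params[label] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
    return params

def classify(params, x):
    """Assign pixel x to the class with the highest Gaussian log-likelihood."""
    best, best_ll = None, -np.inf
    for label, (mu, cov_inv, logdet) in params.items():
        d = x - mu
        ll = -0.5 * (logdet + d @ cov_inv @ d)
        if ll > best_ll:
            best, best_ll = label, ll
    return best

rng = np.random.default_rng(0)
train = {  # synthetic two-band training pixels, two classes
    "urban": rng.normal([80, 60], 5, size=(200, 2)),
    "water": rng.normal([20, 10], 5, size=(200, 2)),
}
model = fit_gaussian_ml(train)
print(classify(model, np.array([78.0, 58.0])))
```

A real workflow would fit one Gaussian per land-cover class over all spectral bands and sweep `classify` across the whole image.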
EN
The rapid development of computing techniques and increasing computational power allow high-fidelity models of various complex objects and processes to be built from historical data. One such process is air traffic, and the need for mathematical traffic models is growing as air traffic increases and becomes more complex to manage. This study concerned modelling part of the arrival process. The first part of the research modelled air separation using continuous probability distributions, with the Fisher information matrix used to select the best fit. The second part applied regression models that best match the parameters of the representative distributions. Over a dozen airports were analyzed, which made it possible to build a generalized model of aircraft air separation as a function of traffic intensity. The results showed that building a generalized model comprising traffic from various airports is possible. Moreover, aircraft air separation can be expressed by easy-to-use mathematical functions. Models of this kind can be used in various applications, e.g. air separation management between aircraft, airport arrival capacity management, and higher-level air traffic simulation or optimization tasks.
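The first modelling step above can be sketched as follows: fit candidate distributions to separation data by closed-form maximum likelihood and keep the better fit. The study selected fits using the Fisher information matrix; ranking by log-likelihood here is a simplification made for brevity, and the separation data are synthetic.

```python
import numpy as np

def fit_normal(x):
    """Closed-form ML fit of a normal distribution; returns name, params, log-lik."""
    mu, sigma = x.mean(), x.std()
    ll = -0.5 * len(x) * np.log(2 * np.pi * sigma**2) \
         - ((x - mu)**2).sum() / (2 * sigma**2)
    return ("normal", (mu, sigma), ll)

def fit_exponential(x):
    """Closed-form ML fit of an exponential distribution."""
    lam = 1.0 / x.mean()                    # ML estimate of the rate
    ll = len(x) * np.log(lam) - lam * x.sum()
    return ("exponential", (lam,), ll)

rng = np.random.default_rng(1)
seps = rng.exponential(scale=90.0, size=2000)   # synthetic separations [s]

best = max((fit_normal(seps), fit_exponential(seps)), key=lambda r: r[2])
print("best fit:", best[0])
```

The second step of the study, regressing the fitted parameters on traffic intensity, would then be run across the per-airport, per-intensity fits.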
EN
The Marina area represents an official new gateway of entry to Egypt, and infrastructure development is proceeding rapidly in this region. The objective of this research is to obtain building data by automated extraction from Pléiades satellite images, driven by the need for efficient mapping and updating of geodatabases for urban planning and touristic development. It compares the performance of the random forest algorithm with other classifiers, namely maximum likelihood, support vector machines, and backpropagation neural networks, over the well-organized buildings that appear in the satellite images. Images were classified into two classes: buildings and non-buildings. In addition, basic morphological operations such as opening and closing were used to enhance the smoothness and connectedness of the classified imagery. The overall accuracies for random forest, maximum likelihood, support vector machines, and backpropagation were 97%, 95%, 93%, and 92%, respectively. Random forest was found to be the best option, followed by maximum likelihood, while the least effective was the backpropagation neural network. The completeness and correctness of the detected buildings were evaluated. Experiments confirmed that the four classification methods can effectively and accurately detect 100% of the buildings in very high-resolution images. The use of machine learning algorithms for object detection and extraction from very high-resolution images is encouraged.
EN
Land cover mapping of marshland areas from satellite image data is not a simple process, due to the similarity of the spectral characteristics of the land cover. This leads to challenges with some land cover classes, especially wetland classes. In this study, satellite images from ESA's (European Space Agency) Sentinel 2B were used to classify the land cover of the Al Hawizeh marsh on the Iraq-Iran border. Three classification methods were compared for accuracy, using multispectral satellite images with a spatial resolution of 10 m. The classification was performed using three different algorithms, namely Maximum Likelihood Classification (MLC), Artificial Neural Networks (ANN), and Support Vector Machine (SVM). The classification algorithms were run in ENVI 5.1 software to detect six land cover classes: deep water marsh, shallow water marsh, marsh vegetation (aquatic vegetation), urban area (built-up area), agricultural area, and barren soil. The results showed that the MLC method applied to Sentinel 2B images provides higher overall accuracy and kappa coefficient than the ANN and SVM methods. The overall accuracy values for the MLC, ANN, and SVM methods were 85.32%, 70.64%, and 77.01%, respectively.
EN
In this paper, two sets of multisine signals are designed for system identification purposes. The first is obtained without any information about the system dynamics. In the second case, a priori information is given in terms of dimensional stability and control derivatives. Magnitude Bode plots are obtained to design the multisine power spectrum, which is optimized afterwards. A genetic algorithm with linear ranking, uniform crossover, and a mutation operator was employed for that purpose. Both designed manoeuvres are used to excite the aircraft model, and then system identification is performed. The parameters are estimated by two methods: Equation Error and Output Error. A comparison of both investigated cases in terms of accuracy and manoeuvre time is presented afterwards.
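A multisine excitation of the kind designed above is a sum of harmonically related cosines; one common choice, shown in this sketch, is Schroeder phases, which keep the crest factor low for a flat power spectrum. The frequencies, duration, and amplitudes here are illustrative assumptions; the paper instead optimizes the spectrum with a genetic algorithm.

```python
import numpy as np

def multisine(n_lines, f0, t):
    """Equal-amplitude multisine with Schroeder phases (flat spectrum)."""
    k = np.arange(1, n_lines + 1)
    phases = -np.pi * k * (k - 1) / n_lines        # Schroeder phase schedule
    lines = np.cos(2 * np.pi * np.outer(k * f0, t) + phases[:, None])
    return lines.sum(axis=0) / n_lines             # normalize the amplitude

t = np.arange(0, 10, 0.01)       # 10 s manoeuvre sampled at 100 Hz
u = multisine(12, 0.1, t)        # 12 lines at 0.1 .. 1.2 Hz, periodic over 10 s
crest = np.abs(u).max() / np.sqrt(np.mean(u**2))
print(f"crest factor: {crest:.2f}")
```

A low crest factor means the input excites the aircraft across the band without demanding large control deflections, which is why multisines are popular for flight-test identification.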
6. Output Error Method for Tiltrotor Unstable in Hover
EN
This article investigates system identification of an unstable tiltrotor in hover from flight test data. The aircraft dynamics were described by a linear model defined in a body-fixed coordinate system. The Output Error Method was selected to obtain stability and control derivatives in lateral motion. Both time and frequency domain formulations were applied to estimate the model parameters. To improve the system identification performed in the time domain, a stabilization matrix was included when evaluating the states. Finally, estimates obtained from the various Output Error Method formulations were compared in terms of parameter accuracy and time histories. Evaluations were performed in the MATLAB R2009b environment.
EN
An experimental and numerical study of the steady-state cyclonic vortex from an isolated heat source in a rotating fluid layer is described. The structure of the laboratory cyclonic vortex is similar to the typical structure of tropical cyclones known from observational data and numerical modelling, including secondary flows in the boundary layer. Differential characteristics of the flow were studied by numerical simulation using the CFD software Flow Vision. The helicity distribution in a rotating fluid layer with a localized heat source was analysed. Two mechanisms that play a role in helicity generation were found. The first is the strong correlation between the cyclonic vortex and intensive upward motion in the central part of the vessel. The second is due to large velocity gradients on the periphery. The integral helicity in the considered case is substantial and its relative level is high.
8. A comparison of three approaches to non-stationary flood frequency analysis
EN
Non-stationary flood frequency analysis (FFA) is applied to the statistical analysis of seasonal flow maxima from Polish and Norwegian catchments. Three non-stationary estimation methods, namely maximum likelihood (ML), two stage (WLS/TS) and GAMLSS (generalized additive model for location, scale and shape parameters), are compared in the context of capturing the effect of non-stationarity on the estimation of time-dependent moments and design quantiles. The use of a multimodel approach is recommended to reduce the errors in the magnitude of quantiles due to model misspecification. The results of calculations based on observed seasonal daily flow maxima and computer simulation experiments showed that GAMLSS gave the best results with respect to the relative bias and root mean square error of the estimates of the trend in the standard deviation and of the constant shape parameter, while WLS/TS provided better accuracy in the estimates of the trend in the mean value. Among the three compared methods, the WLS/TS method is recommended for dealing with non-stationarity in short time series. Some practical aspects of the GAMLSS package application are also presented. A detailed discussion of general issues related to the consequences of climate change in FFA is presented in the second part of the article, entitled "Around and about an application of the GAMLSS package in non-stationary flood frequency analysis".
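The simplest non-stationary ingredient shared by all three methods, a time-dependent mean, can be illustrated as follows. This is a sketch, not the paper's code: under a normal model with a linear trend in the mean and constant variance, the ML trend estimates coincide with ordinary least squares, and the ML variance is the mean squared residual. The flow series below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(60.0)                               # years since start of record
flows = 100.0 + 0.8 * t + rng.normal(0, 10, 60)   # maxima with a linear trend

A = np.column_stack([np.ones_like(t), t])
coef, *_ = np.linalg.lstsq(A, flows, rcond=None)  # = ML estimate of (b0, b1)
beta0, beta1 = coef
sigma = np.sqrt(np.mean((flows - A @ coef) ** 2)) # ML estimate of sigma

print(f"trend in the mean: {beta1:.2f} per year, sigma: {sigma:.1f}")
```

GAMLSS and WLS/TS generalize this idea to skewed distributions and to trends in the higher moments, which is where the methods compared in the abstract begin to differ.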
EN
A statistical analysis is presented of the tensile and bending strengths of a porous sintered structural steel which exhibits non-linear, quasi-brittle behaviour. This behaviour results from existing natural flaws (pores and oxide inclusions) and from the formation of fresh flaws when stress is applied. The analysis uses two- and three-parameter Weibull statistics. The Weibull modulus, a measure of reliability, was estimated by the maximum likelihood method for specimen populations < 30. Probability distributions were compared on the basis of goodness of fit using Anderson-Darling tests. The use of the two-parameter Weibull distribution for strength data of quasi-brittle sintered steels is questioned, because there is sufficient evidence that the three-parameter distribution fits the data better.
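Two-parameter Weibull ML fitting for small strength samples, as used above to estimate the Weibull modulus, reduces to a one-dimensional score equation for the shape parameter; this sketch solves it by bisection. The strength values are synthetic stand-ins, not the paper's data.

```python
import numpy as np

def weibull_ml(x):
    """ML fit of a two-parameter Weibull: returns (shape k, scale lambda)."""
    y = x / x.max()                              # rescale for numerical stability
    ly = np.log(y)
    def f(k):                                    # ML score equation for the shape
        yk = y ** k
        return 1.0 / k - (np.dot(yk, ly) / yk.sum() - ly.mean())
    lo, hi = 0.01, 200.0                         # f changes sign on this bracket
    for _ in range(100):                         # bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    k = 0.5 * (lo + hi)
    lam = x.max() * np.mean(y ** k) ** (1.0 / k) # ML scale given the shape
    return k, lam

rng = np.random.default_rng(3)
strengths = 400.0 * rng.weibull(12.0, size=25)   # MPa; true modulus 12, n < 30
k_hat, lam_hat = weibull_ml(strengths)
print(f"Weibull modulus ~ {k_hat:.1f}, characteristic strength ~ {lam_hat:.0f} MPa")
```

For populations under 30, as in the abstract, the ML modulus estimate carries noticeable scatter and upward bias, which is one reason goodness-of-fit checks such as Anderson-Darling matter.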
EN
Changes in river flow regime have resulted in a surge in the number of methods of non-stationary flood frequency analysis. A common assumption is a time-invariant distribution function with time-dependent location and scale parameters, while the shape parameters are time-invariant. Here, instead of the location and scale parameters of the distribution, the mean and standard deviation are used. We analyse the accuracy of two methods with respect to the estimation of the time-dependent first two moments, time-invariant skewness, and time-dependent upper quantiles. The method of maximum likelihood (ML) with a time covariate is confronted with the Two Stage (TS) method (combining Weighted Least Squares and L-moments techniques). The comparison is made by Monte Carlo simulations. Assuming a parent distribution which ensures the asymptotic superiority of the ML method, the Generalized Extreme Value distribution with various values of the linearly time-varying first two moments, constant skewness, and various time-series lengths is considered. Analysis of the results indicates the superiority of the TS method in all analyzed aspects. Moreover, the estimates from the TS method are more resistant to the choice of probability distribution, as demonstrated by Polish rivers' case studies.
EN
This work investigates the spatial correlation of the data collected along orbital tracks of the Mars Orbiter Laser Altimeter (MOLA), with a special focus on the noise variance problem in the covariance matrix. The problem of different correlation parameters in the along-track and cross-track directions of orbital or profile data is still under discussion in relation to Least Squares Collocation (LSC). Different spacing in the along-track and transverse directions and the anisotropy problem are frequently considered in the context of this kind of data. The problem is therefore analyzed in this work using MOLA data samples. The analysis focuses on a priori errors that correspond to the white noise present in the data and is performed by maximum likelihood (ML) estimation in two perpendicular directions. Additionally, the correlation lengths of the assumed planar covariance model are determined by ML and by fitting it to the empirical covariance function (ECF). All estimates considered together confirm the substantial influence of different data resolution in the along-track and transverse directions on the covariance parameters.
EN
A common method used to obtain 3D range data with a 2D laser range finder is to rotate the sensor. To combine the 2D range data obtained at different rotation angles into a common 3D coordinate frame, the axis of rotation relative to the mirror center of the laser range finder should be known. This axis of rotation is a line in 3D space with four degrees of freedom. This paper describes a method for recovering the parameters of this rotational axis, as well as the extrinsic calibration between the rotational axis and a camera. It simply requires scanning several planar checkerboard patterns that are also imaged by a static camera. In particular, we use only correspondences between lines in the laser scans and planes in the camera images, which can be established easily even for non-visible lasers. Furthermore, we show that such line-on-plane correspondences can be modelled as point-plane constraints, a problem studied in the field of robot kinematics. We use a numerical solution developed for such point-plane constraint problems to obtain an initial estimate, which is then refined by a nonlinear minimization that minimizes the "line-of-sight" errors in the laser scans and the reprojection errors in the camera image. To validate our proposed method, we give experimental results using an LMS-100 mounted on a pan-tilt device in a nodding configuration.
13. Shale volume estimation based on the factor analysis of well-logging data
EN
In this paper, factor analysis is applied to well-logging data in order to extract petrophysical information about sedimentary structures. Statistical processing of well logs used in hydrocarbon exploration yields a factor log, which correlates with the shale volume of the formations. The so-called factor index is defined analogously to the natural gamma-ray index, describing a linear relationship between one special factor and shale content. A general formula valid for a longer depth interval is then introduced to express a nonlinear relationship between the above quantities. The method can be considered an independent source of shale volume estimation, exploiting the information inherent in all types of well logs sensitive to the presence of shale. For demonstration, two wellbore data sets originating from different areas of the Pannonian Basin of Central Europe are processed, after which the shale volume is computed and compared to estimates coming from independent inverse modeling.
EN
In this paper, an improved expectation maximization (EM) algorithm, called the statistical histogram based expectation maximization (SHEM) algorithm, is presented. The algorithm was developed to overcome the drawback of the standard EM algorithm, which is extremely computationally expensive when calculating the maximum likelihood (ML) parameters in statistical segmentation. Combining the SHEM algorithm with a connected-threshold region-growing algorithm that provides a priori knowledge, a novel statistical approach for the segmentation of brain magnetic resonance (MR) image data is proposed. The performance of the SHEM-based method is compared with that of the EM-based method and the commonly applied fuzzy C-means (FCM) segmentation. Experimental results show the proposed approach to be effective, robust, and significantly faster than the conventional EM-based method.
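The histogram trick behind SHEM can be sketched for a 1-D, two-component Gaussian mixture: run the usual EM updates over histogram bin centres weighted by their counts instead of over every voxel, so the per-iteration cost depends on the number of bins, not the image size. The two-tissue intensity data below are synthetic, and this is a simplified illustration of the idea, not the paper's algorithm.

```python
import numpy as np

def em_histogram(values, n_iter=50, bins=256):
    """EM for a 2-component Gaussian mixture over a histogram of the data."""
    counts, edges = np.histogram(values, bins=bins)
    c = 0.5 * (edges[:-1] + edges[1:])            # bin centres
    w = counts.astype(float)                      # bin weights (voxel counts)
    mu = np.array([np.percentile(values, 25), np.percentile(values, 75)])
    var = np.array([values.var(), values.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities evaluated only at the bin centres
        dens = pi * np.exp(-(c[:, None] - mu)**2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: the usual updates, with count-weighted sums
        nk = (w[:, None] * r).sum(axis=0)
        mu = (w[:, None] * r * c[:, None]).sum(axis=0) / nk
        var = (w[:, None] * r * (c[:, None] - mu)**2).sum(axis=0) / nk
        pi = nk / w.sum()
    return mu, np.sqrt(var), pi

rng = np.random.default_rng(4)
data = np.concatenate([rng.normal(60, 8, 30000), rng.normal(120, 10, 20000)])
mu, sd, pi = em_histogram(data)
print(np.sort(mu).round(1))
```

For MR intensities, which are typically quantized into a few hundred levels anyway, the binning loses almost nothing while cutting each E-step from millions of voxels to a few hundred bins.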
15. Probabilistic deformable models for weld defect contour estimation in radiography
EN
This paper describes a novel method for the segmentation of weld defects in radiographic images. Contour estimation is formulated as a statistical estimation problem in which both the contour and the observation model parameters are unknown. Our approach can be described as a region-based maximum likelihood formulation of parametric deformable contours. This formulation provides robustness against poor image quality and allows simultaneous estimation of the contour parameters together with the other parameters of the model. Implementation is performed by a deterministic iterative algorithm with minimal user intervention. Results demonstrate the very good performance of this contour estimation approach.
16. Node assignment problem in Bayesian networks
EN
This paper deals with the problem of searching for the best assignments of random variables to nodes in a Bayesian network (BN) with a given topology. Likelihood functions for the studied BNs are formulated, methods for their maximization are described and, finally, the results of a study concerning the reliability of revealing the nodes' roles are reported. The results of BN node assignment can be applied to problems in the analysis of gene expression profiles.
PL
The aim of this article is to present models for dichotomous variables, logit and probit, and to draw attention to their wide application in various fields of science. The article uses a probit regression model to determine the probability of a candidate's admission to the Faculty of Economics, speciality Handel i spółdzielczość, of the University of Rzeszów.
EN
The aim of this article is to present logit and probit models and their wide application in many different sciences. Logit and probit regression are used for analyzing the relationship between one or more independent variables and a categorical dependent variable. Logit (probit) models have a number of advantages over linear multiple regression. These methods imply that the dependent variable is actually the result of a transformation of an underlying variable which is not restricted in range. For example, the probit model assumes that the actual underlying dependent variable is measured in terms of values of the normal curve; if one transforms those values into probabilities, then the predictions for the dependent variable will always fall between 0 and 1. Thus, we are actually predicting probabilities from the independent variables. The probit model was used to calculate the probability of admission to the University of Rzeszów, speciality Handel i spółdzielczość.
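A toy probit fit in the spirit of the article's admission model can be written in a few lines: the probit log-likelihood is concave, so plain gradient ascent reaches the maximum likelihood estimate. The exam-score covariate and the true coefficients below are invented for this sketch; they are not the article's data.

```python
import numpy as np
from statistics import NormalDist

Phi = np.vectorize(NormalDist().cdf)            # standard normal CDF
phi = lambda e: np.exp(-e**2 / 2) / np.sqrt(2 * np.pi)   # standard normal PDF

rng = np.random.default_rng(5)
score = rng.uniform(0, 100, 2000)               # hypothetical admission exam score
z = (score - 50) / 20                           # standardized score
y = (rng.uniform(size=z.size) < Phi(0.5 + 1.0 * z)).astype(float)  # admitted?

X = np.column_stack([np.ones_like(z), z])
beta = np.zeros(2)
for _ in range(300):                            # gradient ascent on log-likelihood
    eta = X @ beta
    P = np.clip(Phi(eta), 1e-9, 1 - 1e-9)
    grad = X.T @ ((y - P) * phi(eta) / (P * (1 - P)))
    beta += grad / len(y)

print(f"fitted probit: Phi({beta[0]:.2f} + {beta[1]:.2f} z)")
```

The fitted `Phi(beta0 + beta1 * z)` maps any score to an admission probability between 0 and 1, which is exactly the range restriction discussed in the abstract.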
EN
It is hypothesized that the impulse response of a linearized convective diffusion wave (CD) model is a probability distribution suitable for flood frequency analysis. This flood frequency model has two parameters, which are derived using the method of moments and the maximum likelihood method. Errors in the quantiles for these methods of parameter estimation are also derived. The distribution shows an equivalence of the two estimation methods with respect to the mean value, an important property when the true distribution function is unknown. As the coefficient of variation tends to zero (with the mean fixed), the distribution tends to a normal one, similarly to the lognormal and gamma distributions.
EN
The asymptotic bias in large quantiles and moments for three parameter estimation methods, namely the maximum likelihood method (MLM), the method of moments (MOM) and the method of linear moments (LMM), is derived for the case when a probability distribution function (PDF) is falsely assumed. It is illustrated using an alternative set of PDFs consisting of five two-parameter PDFs that are lower-bounded at zero, i.e., the Log-Gumbel (LG), Log-logistic (LL), Log-normal (LN), Linear Diffusion (LD) and Gamma (Ga) distribution functions. The stress is put on the applicability of LG and LL in real conditions, where the hypothetical distribution (H) differs from the true one (T). Therefore, the following cases are considered: H = LG with T = LL, LN, LD and Ga; and H = LL, LN, LD and Ga with T = LG. It is shown that for every pair (H, T) and for every method, the relative bias (RB) of the moments and quantiles corresponding to the upper tail is an increasing function of the true value of the coefficient of variation (cv), except that the RB of moments for MOM is zero. The value of RB is smallest for MOM and largest for MLM; the bias of LMM occupies an intermediate position. Since MLM used as the approximation method is irreversible, the asymptotic bias of the MLM estimate of any statistical characteristic is not symmetric with respect to interchanging H and T, as it is for MOM and LMM. MLM turns out to be the worst method if the assumed LG or LL distribution is not the true one. It produces a huge bias in upper quantiles, at least one order of magnitude higher than that of the other two methods. However, the reverse case, i.e., acceptance of LN, LD or Ga as the hypothetical distribution while LG or LL is the true one, gives an MLM bias of reasonable magnitude in the upper quantiles. Therefore, one should be highly reluctant to choose the LG and LL distributions in flood frequency analysis, especially if MLM is to be applied.
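The misspecification experiment can be illustrated by a small Monte Carlo study: draw data from a Gamma ("true") distribution, fit a log-normal ("hypothetical") model by ML, and examine the relative bias of the 99% quantile. The shapes, sample sizes, and the H/T pair below are illustrative choices, not the paper's exact setup (the paper's H/T pairs involve LG and LL), but the mechanism, a systematic upper-quantile bias under a wrong hypothetical PDF, is the same.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(6)
z99 = NormalDist().inv_cdf(0.99)            # standard normal 99% quantile
shape, scale = 2.0, 50.0                    # Gamma(T): the true distribution

# reference value of the true 99% quantile from a very large sample
q_true = np.quantile(rng.gamma(shape, scale, 2_000_000), 0.99)

est = []
for _ in range(300):
    x = rng.gamma(shape, scale, 10_000)     # large samples: asymptotic regime
    mu, s = np.log(x).mean(), np.log(x).std()   # ML fit under H = log-normal
    est.append(np.exp(mu + s * z99))        # 99% quantile of the fitted model
rb = (np.mean(est) - q_true) / q_true
print(f"relative bias of the 99% quantile: {rb:+.1%}")
```

Even with ten thousand observations per sample, the bias does not vanish: it is a property of the wrong model, not of the sample size, which is the abstract's central warning about combining MLM with a misspecified PDF.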