Search results
Searched for keyword: bootstrap
Results found: 24
1
A tale of two stations: a note on rejecting the Gumbel distribution
EN
The existence of an upper limit for extremes of quantities in the earth sciences, e.g. for river discharge or wind speed, is sometimes suggested. Estimated parameters in extreme-value distributions can assist in interpreting the behaviour of the system. Using simulation, this study investigated how sample size influences the results of statistical tests and related interpretations. Commonly used estimation techniques (maximum likelihood and probability-weighted moments) were employed in a case study; the results were applied in judging time series of annual maximum river flow from two stations on the same river, but with different lengths of observation records. The results revealed that sample size is crucial for determining the existence of an upper bound.
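As a rough illustration of the kind of simulation described above (a sketch under assumed parameter values, not the authors' code), the snippet below draws annual maxima from a GEV with a bounded upper tail and checks how often a likelihood-ratio test rejects the Gumbel model (the GEV with zero shape) at different record lengths; only maximum likelihood is used here, while the paper also employs probability-weighted moments.
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true = dict(c=0.25, loc=100.0, scale=20.0)   # SciPy's c > 0 gives a GEV with a bounded upper tail

def gumbel_lr_pvalue(x):
    # Gumbel is the GEV with zero shape, so a likelihood-ratio test compares the two ML fits
    gum = stats.gumbel_r.fit(x)
    gev = stats.genextreme.fit(x)
    ll_gum = np.sum(stats.gumbel_r.logpdf(x, *gum))
    ll_gev = np.sum(stats.genextreme.logpdf(x, *gev))
    lr = 2.0 * (ll_gev - ll_gum)
    return stats.chi2.sf(max(lr, 0.0), df=1)

for n in (30, 60, 120, 500):                 # "short" vs "long" observation records
    pvals = [gumbel_lr_pvalue(stats.genextreme.rvs(**true, size=n, random_state=rng))
             for _ in range(100)]
    print(n, np.mean(np.array(pvals) < 0.05))   # fraction of simulations rejecting Gumbel
```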
EN
Purpose: The development of technology has in recent years made it possible to create and use new, more complex computational tools in statistics and econometrics. Since then, resampling methods have become increasingly popular techniques for estimating statistics from small samples. The aim of the article is to present and compare the bootstrap and jackknife methods for estimating statistics of interest, explaining and illustrating their usefulness and limitations in the context of econometric applications. Design/methodology/approach: To present and compare the methods, data on the length of bicycle paths in 371 Polish counties in 2019 were obtained from the Local Data Bank. Three samples were randomly selected from the data and used as bootstrap and jackknife samples. Using bootstrap and jackknife simulations, confidence intervals of the statistics of interest, together with their standard errors, were calculated. The results obtained for the two methods were compared and described. Research limitations/implications: An analysis of these methods will allow improving the efficiency and reducing the error in estimating confidence intervals for the statistics of interest. Findings: As presented in the article, both methods can be used to estimate the mean; however, slightly better results are provided by the bootstrap. Furthermore, the confidence intervals at the 95% confidence level created by these methods cover the population mean for each sample randomly selected from the population. To estimate the standard deviation, the better option is the bootstrap method. Although both confidence intervals at the 95% confidence level cover the population standard deviation, the bootstrap method produces more accurate results with a smaller standard deviation. Originality/value: It was demonstrated that the bootstrap method is slightly better than the jackknife method for estimating confidence intervals based on skewed data.
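A minimal sketch of the two procedures compared in the paper, applied to synthetic skewed data standing in for the county-level bicycle-path lengths (the Local Data Bank values are not reproduced here): a percentile bootstrap interval and a delete-one jackknife interval for the mean and the standard deviation.
```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic skewed stand-in for the county-level bicycle-path data used in the paper
sample = rng.lognormal(mean=3.0, sigma=0.8, size=40)

def bootstrap_ci(x, stat, B=10_000, alpha=0.05):
    """Percentile bootstrap interval and bootstrap standard error of stat(x)."""
    reps = np.array([stat(rng.choice(x, size=len(x), replace=True)) for _ in range(B)])
    return reps.std(ddof=1), tuple(np.quantile(reps, [alpha / 2, 1 - alpha / 2]))

def jackknife_ci(x, stat, alpha=0.05):
    """Delete-one jackknife standard error with a normal-approximation interval."""
    n = len(x)
    reps = np.array([stat(np.delete(x, i)) for i in range(n)])
    se = np.sqrt((n - 1) / n * np.sum((reps - reps.mean()) ** 2))
    z = 1.96                                     # ~97.5% standard normal quantile
    est = stat(x)
    return se, (est - z * se, est + z * se)

for name, stat in [("mean", np.mean), ("std", lambda v: np.std(v, ddof=1))]:
    print(name, "bootstrap:", bootstrap_ci(sample, stat))
    print(name, "jackknife:", jackknife_ci(sample, stat))
```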
EN
The stereological inverse problem of unfolding the distribution of sphere radii from measured planar section radii, known as Wicksell's corpuscle problem, is considered. The construction of uniform confidence bands based on the smoothed bootstrap in Wicksell's problem is presented. Theoretical results on the consistency of the proposed bootstrap procedure are given, where consistency of the bands means that the coverage probability converges to the nominal level. The finite-sample performance of the proposed method is studied via Monte Carlo simulations and compared with the asymptotic (non-bootstrap) solution described in the literature.
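A rough sketch of the smoothed bootstrap step itself, leaving out the Wicksell unfolding; the band below is therefore only a pointwise band for the empirical CDF of the observed section radii, and the Gaussian kernel with Silverman's bandwidth is an assumption rather than the paper's choice.
```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.gamma(shape=2.0, scale=1.5, size=100)          # stand-in for observed planar-section radii

def smoothed_bootstrap(x, B=2000, h=None):
    """Resample with replacement and add Gaussian kernel noise of bandwidth h (Silverman's rule by default)."""
    n = len(x)
    if h is None:
        h = 1.06 * np.std(x, ddof=1) * n ** (-1 / 5)
    idx = rng.integers(0, n, size=(B, n))
    return x[idx] + h * rng.standard_normal(size=(B, n))

# pointwise band for the empirical CDF of section radii; a uniform band would calibrate
# the supremum of the deviation over the grid instead of taking pointwise quantiles
grid = np.linspace(0.0, x.max(), 50)
cdfs = (smoothed_bootstrap(x)[:, :, None] <= grid).mean(axis=1)
band = np.quantile(cdfs, [0.025, 0.975], axis=0)
print(band[:, :5])
```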
4
Flexible resampling for fuzzy data
EN
In this paper, a new methodology for simulating bootstrap samples of fuzzy numbers is proposed. Unlike the classical bootstrap, it allows enriching a resampling scheme with values from outside the initial sample. Although a secondary sample may contain results beyond members of the primary set, they are generated smartly so that the crucial characteristics of the original observations remain invariant. Two methods for generating bootstrap samples preserving the representation (i.e., the value and the ambiguity or the expected value and the width) of fuzzy numbers belonging to the primary sample are suggested and numerically examined with respect to other approaches and various statistical properties.
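A hypothetical illustration of the "preserve the value and ambiguity" idea for triangular fuzzy numbers: for a triangle (l, m, u) the value is (l + 4m + u)/6 and the ambiguity is (u - l)/6, so a resampled observation can move the core while the endpoints are solved for so that both quantities stay fixed. This is only one possible realisation, not necessarily the scheme proposed in the paper, and the second variant (expected value and width) is not shown.
```python
import numpy as np

rng = np.random.default_rng(3)

def value_ambiguity(tri):
    """Value and ambiguity of a triangular fuzzy number (l, m, u)."""
    l, m, u = tri
    return (l + 4 * m + u) / 6.0, (u - l) / 6.0

def resample_preserving_va(sample_tri, B=1000):
    """Draw from the primary sample, then move the core while keeping value and ambiguity fixed."""
    out = []
    for _ in range(B):
        V, A = value_ambiguity(sample_tri[rng.integers(0, len(sample_tri))])
        m_new = rng.uniform(V - A, V + A)          # cores in [V - A, V + A] keep l <= m <= u
        l_new = 3 * V - 2 * m_new - 3 * A
        u_new = 3 * V - 2 * m_new + 3 * A
        out.append((l_new, m_new, u_new))
    return np.array(out)

fuzzy_sample = np.array([(1.0, 2.0, 4.0), (2.5, 3.0, 3.5), (0.5, 1.5, 2.0)])
print(resample_preserving_va(fuzzy_sample, B=3))
```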
5
EN
Cross-validation is often used to split input data into training and test sets for support vector machines. The two most commonly used cross-validation variants are tenfold and leave-one-out cross-validation. Another commonly used resampling method is the random test/train split. The advantage of these methods is that they avoid overfitting and perform model selection. However, as the size of the dataset grows, they can substantially increase the computational time needed to fit support vector machines. In this research, we propose an alternative procedure for fitting SVMs, which we call the tenfold bootstrap for support vector machines. This resampling procedure can significantly reduce execution time despite a large number of observations, while preserving a model's accuracy. With this finding, we propose a solution to the problem of slow execution time when fitting support vector machines on big datasets.
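One plausible reading of the procedure, sketched below with hedged choices (subsample fraction, number of rounds and scoring are assumptions, not the authors' exact settings): fit the SVM on small bootstrap samples and score it on the out-of-bag rows, then compare accuracy and runtime with ten-fold cross-validation.
```python
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
rng = np.random.default_rng(0)

def bootstrap_oob_score(X, y, n_rounds=10, subsample=0.1):
    """Fit the SVM on small bootstrap samples and score it on the out-of-bag rows."""
    n, scores = len(y), []
    for _ in range(n_rounds):
        idx = rng.choice(n, size=int(subsample * n), replace=True)
        oob = np.setdiff1d(np.arange(n), idx)
        clf = SVC(kernel="rbf", gamma="scale").fit(X[idx], y[idx])
        scores.append(clf.score(X[oob], y[oob]))
    return float(np.mean(scores))

t0 = time.time(); acc_cv = cross_val_score(SVC(), X, y, cv=10).mean(); t_cv = time.time() - t0
t0 = time.time(); acc_bs = bootstrap_oob_score(X, y); t_bs = time.time() - t0
print(f"10-fold CV: acc={acc_cv:.3f} in {t_cv:.1f}s | bootstrap OOB: acc={acc_bs:.3f} in {t_bs:.1f}s")
```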
EN
The minimum sizes of the bootstrap algorithm input parameters have been determined for estimating long-term road traffic noise indicators. Two independent simulation experiments were performed for that purpose: the first determined the impact of the original random sample size, and the second the impact of the number of bootstrap replications on the accuracy and uncertainty of estimation of long-term noise indicators. The inference was carried out on the basis of a non-parametric statistical test at significance level α = 0.05. The simulation experiments showed that estimating long-term noise indicators with uncertainty below ±1 dB(A) requires all-day noise measurements on three randomly selected days during the year in dense urban development. The maximum size of the original random sample should not exceed n = 50 elements, and the minimum number of bootstrap replications necessary for estimation should be B = 5000. The data used in the simulation experiments and the analysis were results of continuous monitoring of road traffic noise recorded in 2009 on one of the main arteries of Krakow, Poland.
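A minimal sketch of the estimation step on synthetic data (the Krakow measurements are not reproduced): the long-term indicator is taken as the energetic average of short-term results, an original sample of n = 50 values is drawn, and B = 5000 bootstrap replications give the estimate and its uncertainty.
```python
import numpy as np

rng = np.random.default_rng(4)
population = rng.normal(68.0, 3.0, size=8760)        # stand-in for a year of short-term L_Aeq results, dB(A)

def energetic_mean(levels_db):
    """Long-term indicator as the energetic (logarithmic) average of the levels."""
    return 10.0 * np.log10(np.mean(10.0 ** (np.asarray(levels_db) / 10.0)))

n, B = 50, 5000                                       # sample size and replication count discussed in the paper
sample = rng.choice(population, size=n, replace=False)
reps = np.array([energetic_mean(rng.choice(sample, size=n, replace=True)) for _ in range(B)])
print(f"estimate {energetic_mean(sample):.2f} dB(A), bootstrap uncertainty {reps.std(ddof=1):.2f} dB(A)")
```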
EN
The paper considers the use of the bootstrap method to improve the determination of confidence intervals identified by the DOE (design of experiments) procedure. Two different approaches have been used: one appropriate for factorial designs and the other relevant to the response surface methodology. Both approaches were tested on real experimental datasets and compared with results obtained from classical statistical expressions based on well-known asymptotic formulas derived from the distribution.
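A generic residual-bootstrap sketch for the factorial case, with a synthetic 2^2 design and percentile intervals for the effect estimates; the response-surface variant and the paper's exact resampling scheme are not reproduced.
```python
import numpy as np

rng = np.random.default_rng(5)
# 2^2 factorial design, coded -1/+1 factors, 3 replicates per run, synthetic response
A = np.repeat([-1.0, 1.0, -1.0, 1.0], 3)
Bfac = np.repeat([-1.0, -1.0, 1.0, 1.0], 3)
y = 10 + 2.0 * A + 1.0 * Bfac + 0.5 * A * Bfac + rng.normal(0, 1, size=12)

X = np.column_stack([np.ones_like(A), A, Bfac, A * Bfac])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

# residual bootstrap of the effect estimates, then percentile confidence intervals
B = 5000
boot = np.empty((B, X.shape[1]))
for b in range(B):
    y_star = X @ beta + rng.choice(resid, size=len(y), replace=True)
    boot[b] = np.linalg.lstsq(X, y_star, rcond=None)[0]
print(np.round(np.quantile(boot, [0.025, 0.975], axis=0), 3))
```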
EN
Shainin's component search procedure detects sources of variability by means of a specific median test. Because this approach uses only two triple subsets, the certainty of its inference can be weak. This paper examines the approach through a series of numerical simulations.
EN
We investigate the variability of one of the most often used complexity measures in the analysis of time series of RR intervals, i.e. Sample Entropy. The analysis is carried out for a dense matrix of possible r thresholds in 79 24-hour recordings, for segments consisting of 5000 consecutive beats randomly selected from the whole recording, and it is repeated for the same recordings in random order. This study is made possible by the novel NCM algorithm, which is many orders of magnitude faster than the alternative approaches. We find that the bootstrapped standard errors for Sample Entropy are large for RR intervals in physiological order compared to the standard errors for shuffled data, which correspond to the maximum available entropy. This result indicates that Sample Entropy varies widely over the circadian period. This paper is purely methodological and no physiological interpretations are attempted.
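For illustration only, a naive O(N^2) Sample Entropy implementation (the paper relies on the far faster NCM algorithm) applied to randomly drawn segments of a synthetic RR series, with the spread of the segment estimates serving as a crude standard error; the segment length is shortened here purely to keep the naive code quick.
```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Naive O(N^2) Sample Entropy; r is a fraction of the series standard deviation."""
    x = np.asarray(x, dtype=float)
    tol, n = r * np.std(x), len(x)

    def matching_pairs(dim):
        emb = np.array([x[i:i + dim] for i in range(n - m)])        # n - m templates for both dims
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        return (np.sum(d <= tol) - len(emb)) / 2.0                   # drop self-matches, count pairs once

    return -np.log(matching_pairs(m + 1) / matching_pairs(m))

rng = np.random.default_rng(6)
rr = rng.normal(0.8, 0.05, size=20_000)            # stand-in for a 24 h RR-interval series

seg_len, n_segments = 1000, 30                     # the paper uses 5000-beat segments
starts = rng.integers(0, len(rr) - seg_len, size=n_segments)
vals = np.array([sample_entropy(rr[s:s + seg_len]) for s in starts])
print(f"SampEn mean {vals.mean():.3f}, spread over segments {vals.std(ddof=1):.3f}")
```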
10
EN
The capacity of recently developed extreme learning machine (ELM) modelling approaches to forecast daily urban water demand from limited data, alone or in concert with wavelet analysis (W) or bootstrap (B) methods (i.e., ELM, ELMW, ELMB), was assessed and compared to that of equivalent traditional artificial neural network-based models (i.e., ANN, ANNW, ANNB). The urban water demand forecasting models were developed using 3-year water demand and climate datasets for the city of Calgary, Alberta, Canada. While the hybrid ELMB and ANNB models provided satisfactory 1-day lead-time forecasts of similar accuracy, the ANNW and ELMW models provided greater accuracy, with the ELMW model outperforming the ANNW model. Significant improvement in peak urban water demand prediction was only achieved with the ELMW model. The superiority of the ELMW model over both the ANNW and ANNB models demonstrates the significant role of wavelet transformation in improving the overall performance of the urban water demand model.
EN
The article presents a comparative analysis of the jQuery Mobile library and the Bootstrap framework in the development of responsive websites. The comparison is based on two websites built with these technologies. The analysis criteria are: compatibility with mobile devices, available conveniences and ready-made components, loading speed on different devices, code size, and compliance with W3C and Google standards. Each technology greatly simplifies the website development process, although jQuery Mobile is the considerably more extensive tool.
12
The fuzzy interpretation of the statistical test for irregular data
EN
The well-known statistical tests have been developed on the basis of many additional assumptions, among which the normality of the data source distribution is one of the most important. The outcome of a test is a p-value, which may be interpreted as an estimate of the risk of a falsely negative decision, i.e. an answer to the question "how much do I risk if I deny?". This risk estimate is the basis for a decision (after comparison with a significance level α): reject or not. This sharp trigger – p-value greater than α or not – ignores the fact that the context is rather smooth and evolves from "maybe" through "rather not" to "certainly not". An alternative for such assessments is offered by fuzzy statistics, particularly by Buckley's approach. The fuzzy approach introduces a better scale for expressing decision uncertainty. This paper compares three approaches: a classic one based on a normality assumption, Buckley's theoretical one and a bootstrap-based one.
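A small sketch of two of the three approaches being compared, the classic t-test and a bootstrap test obtained by resampling data shifted so that the null hypothesis holds; Buckley's fuzzy construction is not reproduced here.
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.gamma(shape=2.0, scale=1.0, size=25)       # irregular (skewed) data
mu0 = 2.5                                          # hypothesised mean

# classic approach: one-sample t-test, which leans on approximate normality
t_stat, p_classic = stats.ttest_1samp(x, mu0)

# bootstrap approach: resample data shifted so that the null hypothesis is true
shifted = x - x.mean() + mu0
B = 10_000
t_boot = np.empty(B)
for b in range(B):
    xs = rng.choice(shifted, size=len(x), replace=True)
    t_boot[b] = (xs.mean() - mu0) / (xs.std(ddof=1) / np.sqrt(len(xs)))
p_boot = np.mean(np.abs(t_boot) >= abs(t_stat))

print(f"classic p = {p_classic:.3f}, bootstrap p = {p_boot:.3f}")
```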
Logistyka | 2015 | nr 4 | 9502-9508, CD3
EN
Forecasting product sales volumes is crucial for every supplier, manufacturer and retailer. Forecasts of future demand determine the quantities that should be purchased, produced or delivered. Point forecasts are typically obtained with a variety of methods, such as the naive method, Holt's and Winters' models, ARIMA and GARCH models, as well as various simulation methods. In practice, however, a point forecast may not be sufficient. Interval forecasts, which indicate the probability with which the predicted value will fall within a specified range, prove very useful in such situations. This paper attempts to use simulation methods to determine interval forecasts of sales volume.
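A minimal sketch of a simulation-based interval forecast: a naive point forecast combined with an interval obtained by bootstrapping the historical one-step errors; the Holt, Winters, ARIMA and GARCH point forecasts mentioned above are not shown.
```python
import numpy as np

rng = np.random.default_rng(8)
sales = 100 + np.cumsum(rng.normal(0.5, 5.0, size=60))   # synthetic monthly sales history

# naive point forecast: the next value equals the last observed one
point_forecast = sales[-1]

# interval forecast: bootstrap the historical one-step errors of the naive method
errors = np.diff(sales)
simulated = point_forecast + rng.choice(errors, size=10_000, replace=True)
lower, upper = np.quantile(simulated, [0.05, 0.95])
print(f"point forecast {point_forecast:.1f}, 90% interval ({lower:.1f}, {upper:.1f})")
```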
EN
In this paper, we consider a nonparametric Shewhart chart for fuzzy data. We utilize the fuzzy data without transforming them into a real-valued scalar (a representative value). Usually fuzzy data (described by fuzzy random variables) do not have a distributional model available, and also the size of the fuzzy sample data is small. Based on the bootstrap methodology, we design a nonparametric Shewhart control chart in the space of fuzzy random variables equipped with some L2 metric, in which a novel approach for generating the control limits is proposed. The control limits are determined by the necessity index of strict dominance combined with the bootstrap quantile of the test statistic. An in-control bootstrap ARL of the proposed chart is also considered.
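A heavily simplified, scalar sketch of bootstrap control limits built from in-control reference data; the fuzzy random variables, the L2 metric and the necessity index of strict dominance used in the paper are all omitted, so only the bootstrap-quantile part of the construction is shown.
```python
import numpy as np

rng = np.random.default_rng(9)
# Phase I: in-control reference data; the paper works with fuzzy observations, an L2 metric
# and a necessity index, all of which are omitted in this scalar simplification
phase1 = rng.normal(10.0, 1.0, size=(25, 5))            # 25 subgroups of size 5

B, alpha = 10_000, 0.0027                               # roughly the usual 3-sigma false-alarm rate
boot_means = np.array([
    rng.choice(phase1.ravel(), size=phase1.shape[1], replace=True).mean()
    for _ in range(B)
])
lcl, ucl = np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])
print(f"bootstrap control limits: ({lcl:.2f}, {ucl:.2f})")

# Phase II monitoring: flag subgroups whose mean falls outside the limits
new_means = rng.normal(10.8, 1.0, size=(10, 5)).mean(axis=1)
print("signals at subgroups:", np.where((new_means < lcl) | (new_means > ucl))[0])
```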
15
EN
The web application Cmentarze-24.pl was created as a result of the authors' collaboration on a diploma thesis defended in February 2015. The aim of the application is to provide Internet users with a solution that allows easy and intuitive localization of a cemetery and of a burial place, i.e. the grave of a deceased person. The application uses currently significant IT technologies: the non-relational database management system MongoDB, the Codeigniter framework for the MVC model of the application, Twitter Bootstrap for a GUI that responsively adapts to the screen of the device on which the user runs the application, and the Google API for locating the exact position of a cemetery and a given grave. The article presents the main assumptions behind the application and the functionality of its selected elements.
EN
The problem of estimation of long-term environmental noise hazard indicators and their uncertainty is presented in this paper. The type A standard uncertainty is defined by the standard deviation of the mean. The rules given in the ISO/IEC Guide 98 are used in the calculations. It is usually determined by means of the classic variance estimators, under the following assumptions: normality of the measurement results, adequate sample size, lack of correlation between elements of the sample, and observation equivalence. However, such assumptions are rather questionable in relation to acoustic measurements. This is why the authors indicate the necessity of implementing non-classical statistical solutions. An estimation approach that seeks the density function of the distribution of long-term noise indicators by means of kernel density estimation, the bootstrap method and Bayesian inference has been formulated. These methods do not impose limitations on the form and properties of the analyzed statistics. The theoretical basis of the proposed methods is presented, together with an example calculation of the expected value and variance of the long-term noise indicators LDEN and LN. The indicated solutions are illustrated and their usefulness analysed using results of continuous monitoring of road traffic noise recorded in Cracow, Poland.
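A sketch of the kernel-density and bootstrap ingredients on synthetic data, using the standard day-evening-night weighting for LDEN (12 h day, 4 h evening with a 5 dB penalty, 8 h night with a 10 dB penalty); how the daily indicators are aggregated before bootstrapping is an assumption here, and the Bayesian variant is not shown.
```python
import numpy as np
from scipy.stats import gaussian_kde

def l_den(l_day, l_evening, l_night):
    """Day-evening-night level: 12 h day, 4 h evening (+5 dB), 8 h night (+10 dB)."""
    return 10 * np.log10((12 * 10 ** (l_day / 10)
                          + 4 * 10 ** ((l_evening + 5) / 10)
                          + 8 * 10 ** ((l_night + 10) / 10)) / 24)

rng = np.random.default_rng(10)
# stand-in daily indicators from a year of monitoring
day, eve, night = (rng.normal(m, 2.5, size=365) for m in (68.0, 64.0, 60.0))

B = 2000
reps = np.empty(B)
for b in range(B):
    i = rng.integers(0, 365, size=365)                           # resample days with replacement
    ld, le, ln = (10 * np.log10(np.mean(10 ** (v[i] / 10))) for v in (day, eve, night))
    reps[b] = l_den(ld, le, ln)

kde = gaussian_kde(reps)                                         # density of the L_DEN estimator
print(f"E[L_DEN] = {reps.mean():.2f} dB, variance = {reps.var(ddof=1):.3f}")
```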
EN
A class of approximately locally most powerful type tests based on ranks of residuals is suggested for testing the hypothesis that the regression coefficient is constant in a standard regression model against the alternative that a random walk process generates the successive regression coefficients. We derive the asymptotic null distribution of such a rank test. This distribution can be described as a generalization of the asymptotic distribution of the Cramer-von Mises test statistic. However, it is quite complex and involves eigenvalues and eigenfunctions of a known positive definite kernel, as well as the unknown density function of the error term. It is then natural to apply bootstrap procedures. Extending a result due to Shorack in [25], we show that the weighted empirical process of residuals can be bootstrapped, which solves the problem of finding the null distribution of the rank test statistic. A simulation study is reported in order to judge the performance of the suggested test statistic and the bootstrap procedure.
18
Smoothed estimator of the periodic hazard function
EN
A smoothed estimator of the periodic hazard function is considered and its asymptotic probability distribution and bootstrap simultaneous confidence intervals are derived. Moreover, consistency of the bootstrap method is proved and some applications of the developed theory are presented. The bootstrap method is based on the phase-consistent resampling scheme developed in Dudek and Leśkow [6].
EN
In non-randomised studies, patients who are most likely to benefit from more expensive and more effective treatments are usually prioritised and/or patients select themselves into treatments. Propensity score methods have been considered as a means to reduce the effect of selection bias. In this study it was shown that use of the receiver operating characteristic (ROC) curve and the area under it (AUC) provides additional insight into the analysis of non-randomised studies. The estimates of the mean effect obtained with five different techniques were compared, and the nonparametric bootstrap was recommended as a superior tool for propensity score analyses.
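A hedged sketch of two ingredients mentioned above: the AUC of a logistic-regression propensity model as an overlap/separation diagnostic, and a nonparametric bootstrap percentile interval for an inverse-probability-weighted mean effect; the five estimation techniques actually compared in the study are not reproduced.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(11)
n = 1000
x = rng.normal(size=(n, 3))                                              # observed covariates
treated = rng.binomial(1, 1 / (1 + np.exp(-(x[:, 0] + 0.5 * x[:, 1]))))  # non-random assignment
outcome = 1.0 * treated + x[:, 0] + rng.normal(size=n)                   # true effect = 1.0

ps = LogisticRegression().fit(x, treated).predict_proba(x)[:, 1]
print("propensity-model AUC:", round(roc_auc_score(treated, ps), 3))     # overlap/separation diagnostic

def ipw_effect(idx):
    t, y, p = treated[idx], outcome[idx], ps[idx]
    return np.average(y, weights=t / p) - np.average(y, weights=(1 - t) / (1 - p))

# nonparametric bootstrap of the weighted mean effect; for brevity the propensity model
# is not refit inside each replicate, although refitting would be more faithful
B = 2000
reps = np.array([ipw_effect(rng.integers(0, n, size=n)) for _ in range(B)])
lo, hi = np.quantile(reps, [0.025, 0.975])
print(f"effect estimate {ipw_effect(np.arange(n)):.2f}, 95% bootstrap CI ({lo:.2f}, {hi:.2f})")
```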
20
The discovery of asymptotic freedom and the birth of QCD