Results found: 11

Search results
Searched in keywords: kernel density estimation
EN
Lithium-ion batteries find extensive application in transportation, energy storage, and other fields. In practice, however, gathering a large volume of degradation data for the same type of lithium-ion battery is difficult, owing to variations in operating conditions and electrochemical properties, among other factors. In this small-sample setting, accurately predicting the remaining useful life (RUL) of the battery is of great significance. This paper presents an RUL prediction method based on data augmentation and similarity measures. First, the single exponential model and Sobol sampling are used to generate realistic degradation trajectories from as little as one complete run-to-failure degradation dataset. The similarity between the generated reference trajectories and the actual degradation trajectory is then evaluated using the Pearson distance, and a point estimate of the RUL is obtained by weighted averaging. The uncertainty of the RUL prediction is quantified using kernel density estimation. Finally, the method is validated on two NASA lithium-ion battery datasets; the results demonstrate its practicality and effectiveness.
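The similarity-weighted point estimate at the core of the method can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the inverse-distance weighting scheme, the toy trajectories, and all names are assumptions, and the paper's trajectory generation (exponential model plus Sobol sampling) and KDE uncertainty step are omitted.

```python
import math

def pearson_distance(x, y):
    """Pearson distance = 1 - correlation coefficient of two trajectories."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return 1.0 - cov / (sx * sy)

def weighted_rul(observed, references):
    """Point-estimate RUL as a similarity-weighted average over reference
    trajectories; each reference is a (capacity_curve, rul) pair, with the
    curve truncated to the length of the observed degradation so far."""
    weights, ruls = [], []
    for curve, rul in references:
        d = pearson_distance(observed, curve[:len(observed)])
        weights.append(1.0 / (d + 1e-9))  # closer trajectory -> larger weight
        ruls.append(rul)
    total = sum(weights)
    return sum(w * r for w, r in zip(weights, ruls)) / total
```

A reference trajectory that matches the observed capacities exactly has near-zero Pearson distance and therefore dominates the weighted average.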
2
Mining Cardinality Restrictions in OWL
EN
We present an approach to mining cardinality restriction axioms from an existing knowledge graph in order to extend an ontology describing the graph. We compare frequency estimation with kernel density estimation as ways of obtaining the cardinalities in the restrictions. We also propose numerous strategies for filtering the obtained axioms to make them more useful to the ontology engineer. We report the results of an experimental evaluation on DBpedia 2016-10 and show that using kernel density estimation to compute the cardinalities yields more robust results than using frequency estimation. We also show that, while filtering is of limited use for minimum cardinality restrictions, it is much more important for maximum cardinality restrictions. These findings can be used to extend existing ontology engineering tools, supporting ontology construction and enabling more efficient creation of knowledge-intensive artificial intelligence systems.
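The contrast between the two estimators can be illustrated on per-subject property counts. This is a toy sketch under stated assumptions (function names, the 95% cut-off, the unit bandwidth, and the grid resolution are all illustrative choices, not the paper's method):

```python
import math

def frequency_max_cardinality(counts, quantile=0.95):
    """Frequency-based estimate: take an empirical quantile of the observed
    per-subject property counts as the maximum cardinality."""
    s = sorted(counts)
    idx = min(len(s) - 1, int(quantile * len(s)))
    return s[idx]

def kde_max_cardinality(counts, bandwidth=1.0, quantile=0.95):
    """KDE-based estimate: smooth the count distribution with a Gaussian
    kernel, then walk the smoothed density up to the requested quantile.
    Smoothing makes the result less sensitive to single outlying subjects."""
    lo, hi = min(counts), max(counts) + 1
    grid = [lo + i * 0.1 for i in range(int((hi - lo) / 0.1) + 1)]
    dens = [sum(math.exp(-0.5 * ((g - c) / bandwidth) ** 2) for c in counts)
            for g in grid]
    total = sum(dens)
    acc = 0.0
    for g, d in zip(grid, dens):
        acc += d
        if acc / total >= quantile:
            return round(g)
    return hi
```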
EN
The paper presents an improvement of a multi-point method for identifying sources of voltage fluctuations based on the analysis of voltage variability. The improvement consists in using kernel density estimation for the statistical analysis of voltage changes. The article first explains the need to locate disturbing loads, which follows from the agreement between the power distributor and the power consumer guaranteeing a supply of appropriate quality. The next part presents the multi-point method, which uses the analysis of voltage changes to aid the location of disturbing loads, together with the problems that can disturb the localization process. Simulation results are presented, showing the benefits of the proposed improvement. Finally, the possibility of automatically localizing voltage fluctuation sources and of implementing the method in measuring and recording instruments is discussed.
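The statistical building block of the improvement, a kernel density estimate of recorded voltage changes, can be sketched generically. This is an assumption-laden sketch (Gaussian kernel, hand-picked bandwidth, illustrative names); the paper's multi-point localization logic itself is not reproduced here.

```python
import math

def gaussian_kde(samples, bandwidth):
    """Return a callable density estimate f(x) built from voltage-change
    samples with a Gaussian kernel; comparing such densities across
    measurement points is the statistical step the method relies on."""
    n = len(samples)
    norm = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.pi))
    def f(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return f
```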
EN
Field-programmable gate array (FPGA) technology can offer significantly higher performance at much lower power consumption than single- and multi-core CPUs and GPUs (graphics processing units) for many computational problems. Unfortunately, programming FPGAs directly in hardware description languages (HDLs) such as VHDL or Verilog is a difficult, non-trivial task and is not intuitive for C/C++/Java programmers. To bridge the gap between programming effectiveness and difficulty, the high-level synthesis (HLS) approach is promoted by the main FPGA vendors. Nowadays, time-intensive calculations are mainly performed on GPU/CPU architectures, but they can also be performed successfully using HLS. In the paper we implement a bandwidth selection algorithm for kernel density estimation (KDE) using HLS and show the techniques used to optimize the final FPGA implementation. We also show that the FPGA speedups over highly optimized CPU and GPU implementations are quite substantial, while the power consumption of FPGA devices is usually much lower than that of present CPUs and GPUs.
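For orientation, the simplest bandwidth selector, Silverman's rule of thumb, fits in a few lines; the data-driven selectors worth accelerating on an FPGA (e.g. cross-validation) instead evaluate a cost over all sample pairs for many candidate bandwidths, which is the expensive part. This is an illustrative sketch, not the algorithm implemented in the paper:

```python
import math

def silverman_bandwidth(samples):
    """Rule-of-thumb bandwidth h = 1.06 * sigma * n^(-1/5) for a Gaussian
    kernel KDE, using the sample standard deviation as the scale estimate."""
    n = len(samples)
    mean = sum(samples) / n
    sigma = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))
    return 1.06 * sigma * n ** (-0.2)
```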
EN
In most organizations, employees are evaluated against various subjective and objective criteria. Employees often feel treated unfairly by a descriptive evaluation, or the evaluation is not adequate to the results of their work. In this publication we propose an objective method of evaluating employees based on probabilistic methods, including density estimation, kernel methods, and arithmetic operations on random variables. We also discuss applying the method to building and evaluating a team, and introduce visualizations of the performance of both the team and the individual employee.
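One of the named ingredients, arithmetic on random variables, can be sketched for the discrete case: the distribution of a sum of independent variables is the convolution of their distributions. A minimal sketch under assumptions (dict-based probability mass functions, illustrative names; the paper works with kernel density estimates rather than discrete PMFs):

```python
def add_random_variables(pmf_a, pmf_b):
    """Distribution of X + Y for independent discrete random variables,
    by convolution of their probability mass functions (dicts mapping
    value -> probability), e.g. combining two employees' per-day output
    distributions into a team distribution."""
    out = {}
    for va, pa in pmf_a.items():
        for vb, pb in pmf_b.items():
            out[va + vb] = out.get(va + vb, 0.0) + pa * pb
    return out
```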
EN
The problem of estimating long-term environmental noise hazard indicators and their uncertainty is presented in this paper. The type A standard uncertainty is defined by the standard deviation of the mean, following the rules given in the ISO/IEC Guide 98. It is usually determined by means of classic variance estimators, under the following assumptions: normality of the measurement results, adequate sample size, lack of correlation between elements of the sample, and equivalence of observations. Such assumptions are, however, rather questionable for acoustic measurements, which is why the authors indicate the necessity of applying non-classical statistical solutions. An approach to estimating the density function of the distribution of long-term noise indicators by kernel density estimation, the bootstrap method, and Bayesian inference is formulated; these methods impose no limitations on the form and properties of the analyzed statistics. The paper presents the theoretical basis of the proposed methods as well as an example calculation of the expected value and variance of the long-term noise indicators LDEN and LN. The proposed solutions are illustrated, and their usefulness analyzed, using results of continuous monitoring of traffic noise recorded in Cracow, Poland.
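The bootstrap ingredient can be sketched as follows: resample the measurements with replacement and take the spread of the resampled means as the type A uncertainty, with no normality assumption. This is an illustrative sketch (names and resample count are assumptions, and levels in dB are treated as plain numbers, whereas a faithful LDEN computation averages them energetically):

```python
import random

def bootstrap_uncertainty(samples, n_resamples=2000, seed=1):
    """Bootstrap estimate of the expected value of a noise indicator and
    its type A uncertainty (standard deviation of the resampled means)."""
    rng = random.Random(seed)
    n = len(samples)
    means = []
    for _ in range(n_resamples):
        resample = [samples[rng.randrange(n)] for _ in range(n)]
        means.append(sum(resample) / n)
    m = sum(means) / n_resamples
    var = sum((x - m) ** 2 for x in means) / (n_resamples - 1)
    return m, var ** 0.5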
7
Two stage EMG onset detection method
EN
Detecting the moment when a muscle begins to activate on the basis of the EMG signal is an important task in a number of biomechanical studies. To provide high accuracy of EMG onset detection, we developed a novel method that gives results similar to those obtained by an expert. In this method, the EMG signal is processed in two stages: the first stage gives a rough estimate of the EMG onset, whereas the second stage performs a local, precise search. The method was applied to support signal processing in a biomechanical study concerning the effect of body position on EMG activity and the peak muscle torque stabilizing the spinal column under static conditions.
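The coarse-then-precise scheme can be sketched with a simple threshold detector. The thresholding rule, window size, and names here are assumptions for illustration, not the authors' detector:

```python
def detect_onset(signal, win=5, k=3.0, baseline_len=20):
    """Two-stage onset sketch. Stage 1 (rough): find the first window whose
    mean rectified amplitude exceeds baseline mean + k * std. Stage 2
    (precise): from inside the activation, step backwards to the first
    sample that rose above the baseline mean."""
    rect = [abs(x) for x in signal]
    base = rect[:baseline_len]
    mu = sum(base) / len(base)
    sd = (sum((x - mu) ** 2 for x in base) / len(base)) ** 0.5
    thresh = mu + k * sd
    # Stage 1: coarse search over sliding-window means.
    rough = None
    for i in range(baseline_len, len(rect) - win + 1):
        if sum(rect[i:i + win]) / win > thresh:
            rough = i
            break
    if rough is None:
        return None  # no activation found
    # Stage 2: local backward search for the precise onset sample.
    j = min(rough + win, len(rect))
    while j > 0 and rect[j - 1] > mu:
        j -= 1
    return j
```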
8
Short-Term Load Forecasting Based on Kernel Conditional Density Estimation
EN
A short-term load forecasting model based on kernel estimation of the conditional probability density distribution is proposed. The pattern vector of a load time series sequence can be treated as a multivariate random variable whose value determines the pattern component values of the next, forecasted sequence. Probability density functions are obtained from historical load time series by means of nonparametric density estimation using product kernel estimators. The smoothing parameters of the kernel functions are determined by a cross-validation procedure. The suitability of the proposed approach is illustrated through applications to real load data.
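A point forecast derived from such a conditional density, its mean, reduces to Nadaraya-Watson regression with a product kernel. This is a minimal sketch under assumptions (Gaussian component kernels, scalar next value, hand-set smoothing parameters; the paper tunes them by cross-validation and works with the full density, not only its mean):

```python
import math

def product_kernel(x, xi, h):
    """Product of 1-D Gaussian kernels over the pattern components."""
    return math.prod(math.exp(-0.5 * ((a - b) / hj) ** 2)
                     for a, b, hj in zip(x, xi, h))

def conditional_forecast(history, query, h):
    """Mean of the kernel conditional density: a weighted average of the
    historical next values, weighted by the product-kernel similarity of
    each historical pattern to the query pattern. `h` holds one smoothing
    parameter per pattern component."""
    num = den = 0.0
    for pattern, next_value in history:
        w = product_kernel(query, pattern, h)
        num += w * next_value
        den += w
    return num / den
```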
9
Probability Density Functions for Calculating Approximate Aggregates
EN
In the paper we show how a probability density function (PDF) can be used to calculate approximate aggregates. The aggregates are obtained very quickly and efficiently, with no need to scan a large amount of data or to create materialized aggregates (usually implemented as materialized views). Although the final results are only approximate, the method is extremely fast and can be used successfully during the initial phase of data exploration. We include simple experimental results which prove the effectiveness of the method, especially when the PDFs are typical, for example close to the Gaussian normal distribution. If a PDF differs from a normal distribution, one can consider a suitable preliminary transformation of the input variables or estimate the PDF by nonparametric methods, for example using so-called kernel estimators; the latter approach is used in the paper. To accelerate the calculations, a graphics processing unit (GPU) can be used. We outline this approach in the last section of the paper and give some preliminary results, which are very promising.
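The core idea, answering range aggregates by integrating a fitted density instead of scanning rows, can be sketched as follows. The function name, the midpoint integration scheme, and the interface are assumptions for illustration:

```python
def approximate_aggregates(pdf, lo, hi, total_rows, steps=10000):
    """Approximate COUNT, SUM and AVG of the values falling in [lo, hi],
    computed from a fitted PDF by midpoint-rule integration: COUNT from the
    probability mass, SUM from the first moment, AVG as their ratio."""
    dx = (hi - lo) / steps
    mass = 0.0
    first_moment = 0.0
    for i in range(steps):
        x = lo + (i + 0.5) * dx
        p = pdf(x)
        mass += p * dx
        first_moment += x * p * dx
    count = total_rows * mass
    total = total_rows * first_moment
    avg = first_moment / mass if mass > 0 else float("nan")
    return count, total, avg
```

With a uniform density on [0, 10] and 1000 rows, the query range [0, 5] contains half the mass, so COUNT is about 500 and AVG about 2.5.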
EN
Based on yearly maximum discharge series of some rivers in Poland, a comparison was made between parametric upper quantiles (Pearson type III distribution) and a nonparametric method of probability distribution estimation (with an asymmetric gamma kernel). In half of the cases, the nonparametric approach showed multimodality of the yearly flow distribution. It was also found that the calculated nonparametric upper 1% and 0.5% quantiles were in most cases higher than their parametric counterparts.
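A nonparametric upper quantile of this kind can be sketched by building the KDE cumulative distribution on a grid and inverting it numerically. For brevity this sketch uses a Gaussian kernel with a hand-set bandwidth, whereas the study uses an asymmetric gamma kernel better suited to strictly positive discharges; names and grid settings are illustrative:

```python
import math

def kde_quantile(samples, p, bandwidth, grid_steps=4000):
    """Upper p-quantile from a Gaussian-kernel density estimate: accumulate
    the KDE mass on a grid and return the first point where the CDF reaches
    1 - p (exceedance probability p)."""
    lo = min(samples) - 4 * bandwidth
    hi = max(samples) + 4 * bandwidth
    dx = (hi - lo) / grid_steps
    n = len(samples)
    norm = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    acc = 0.0
    for i in range(grid_steps):
        x = lo + (i + 0.5) * dx
        dens = norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
        acc += dens * dx
        if acc >= 1.0 - p:
            return x
    return hi
```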
EN
Using yearly maximum discharge series of the main rivers in Poland, a comparison was made between the nonparametric (with the Gaussian kernel) and parametric (Pearson type III) methods of probability distribution estimation. In most cases, the nonparametric approach showed bimodality of the yearly flow distribution. It was also found that the calculated nonparametric upper 1% and 0.5% quantiles were in general higher than their parametric counterparts.