Search results
Searched for keyword: uncertainty
Results found: 371
Page / 19
EN
The current situation of the fleet of transport and technological machinery in Ukraine requires an individual approach to evaluating the effectiveness of its maintenance and repair system. The article addresses the selection of the most effective option for the use of transport and technological machinery, taking into account the specific conditions of its operation, characterized by certain risks and uncertainties, and considering the real volume and age structure of the fleet. Solving this problem requires substantiating management models of maintenance and repair processes. Scientifically based methods are needed for managing the system of technical maintenance (TM) and repair of transport and technological machinery (RTTM), using specific methods of an individual approach to the technical and economic evaluation of the effectiveness of maintenance and RTTM processes, adapted to modern operating conditions. The article presents the results of research carried out using the fundamentals of system analysis, the theory of decision-making under uncertainty, and the basics of multi-criteria analysis. In the course of the research, an analytical management model of the maintenance and repair system operation was formed. It reveals the sequence of implementing the management methodology, making it possible to assess the system in successive discrete states, to promptly take into account external influencing factors, and to make corrections, which in turn increases the validity of strategic decisions aimed at improving effectiveness. A numerical calculation showed that the length of maintenance intervals has a significant impact on the effectiveness indicator; adjusting this value makes it possible to optimize maintenance and repair processes.
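The abstract refers to the theory of decision-making under uncertainty but does not name the criteria used. A minimal, purely illustrative sketch of the classical criteria (Wald, Laplace, Hurwicz) applied to a hypothetical choice between maintenance strategies; all effectiveness scores are invented:

```python
import numpy as np

# Illustrative decision table: effectiveness scores of three maintenance
# strategies (rows) under three operating scenarios (columns).
# All values are hypothetical, chosen only to demonstrate the criteria.
payoff = np.array([
    [0.62, 0.55, 0.48],  # strategy A
    [0.58, 0.60, 0.52],  # strategy B
    [0.70, 0.50, 0.40],  # strategy C
])

wald = payoff.min(axis=1)        # pessimistic: value of the worst case
laplace = payoff.mean(axis=1)    # equal-likelihood average
alpha = 0.6                      # Hurwicz optimism coefficient
hurwicz = alpha * payoff.max(axis=1) + (1 - alpha) * payoff.min(axis=1)

best_wald = int(np.argmax(wald))        # index of the chosen strategy
best_laplace = int(np.argmax(laplace))
best_hurwicz = int(np.argmax(hurwicz))
```

Note how the recommended strategy changes with the criterion: the pessimistic Wald rule picks the strategy with the best worst case, while an optimistic Hurwicz weighting can favour a riskier option.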
PL
A new year is usually treated as a fresh start, and it always brings plenty of questions, hopes and doubts. Every industry, every enterprise and each of us reflects, sums up the past year and asks whether things will get better. The last two years have been a considerable challenge for industry, mainly due to the epidemiological situation around the world. The Covid-19 virus caused considerable turmoil on the global market and contributed to destabilizing the economy. Nevertheless, all branches of industry fought hard to survive this difficult period. Some succeeded better than others, but the efforts made by individual sectors did bring results. According to the Polish Chamber of Commerce (KIG), the macroeconomic indicators show that, despite the difficult situation, 2021 turned out better than 2020, the first year of the pandemic. Data on production, exports and sales for the recent period are quite favourable, and GDP in 2021, according to KIG estimates, grew by 5.7%. Does this mean success and joy for entrepreneurs? Unfortunately, not quite. Such news is positive, but it is necessary to remain rational, because the pandemic is still ongoing and the Covid-19 virus has not yet said its last word. Entrepreneurs are well aware of this.
EN
The work continues the series of publications on estimating the parameters of the equation and the limits of the uncertainty band of the straight line y = ax + b fitted to the measurement results of both coordinates of the tested points using the linear regression method. A general case is considered in which these coordinates have different uncertainties and all possible autocorrelations and cross-correlations are present. Matrix equations are used for the description. The results of the coordinate measurements are presented as elements of the X and Y vectors. The propagation of their uncertainty is described by the covariance matrix UZ with four component matrices, i.e., UX and UY for the uncertainties and autocorrelations of X and of Y, and UXY and its transpose UTXY for the cross-correlations. The equation of the straight line and the borders of its uncertainty band are given. They were obtained for the function of parameters a and b satisfying the so-called total criterion WTLS, i.e., the minimum sum of squared distances of points from the straight line weighted by the reciprocals of the coordinate uncertainties. When the coordinates of different points are uncorrelated, the simplified criterion WLS is used. The directions of projecting the points result from the minimization of the function describing the criterion. In the general case, only a numerical solution exists. This is illustrated by an example in which the parameters a and b of the straight line were determined numerically from enlarged fragments of the graph of the criterion function around its minimum. The conditions on the uncertainties and correlations of the point coordinates required to obtain an analytical solution, and an example of such a solution, are also given.
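For the uncorrelated case, the simplified WLS criterion mentioned above weights each point's squared residual from y = ax + b by the variance propagated from both coordinates. A minimal numerical sketch with invented data, mimicking the search around the minimum of the criterion function described in the abstract:

```python
import numpy as np

# Invented measurement data: both coordinates carry known standard
# uncertainties (uncorrelated case -> simplified WLS criterion).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.10, 1.10, 1.90, 3.20, 3.90])
ux = np.full_like(x, 0.05)   # standard uncertainties of x
uy = np.full_like(y, 0.10)   # standard uncertainties of y

def criterion(a, b):
    # residual of each point weighted by 1/(u_y^2 + a^2 * u_x^2)
    w = 1.0 / (uy**2 + a**2 * ux**2)
    return float(np.sum(w * (y - a * x - b) ** 2))

# For a fixed slope a, the optimal intercept b is a weighted mean;
# the slope is then found by a numeric scan over a grid.
best = None
for a in np.linspace(0.5, 1.5, 2001):
    w = 1.0 / (uy**2 + a**2 * ux**2)
    b = float(np.sum(w * (y - a * x)) / np.sum(w))
    s = criterion(a, b)
    if best is None or s < best[0]:
        best = (s, a, b)

s_min, a_hat, b_hat = best
```

In the general correlated case described in the abstract, the scalar weights are replaced by the inverse of the full covariance matrix UZ, and only a numerical solution exists.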
EN
Decision-making is a tedious and complex process. In the present competitive scenario, any incorrect decision may excessively harm an organization. Therefore, the parameters involved in the decision-making process should be looked into carefully as they may not always be of a deterministic nature. In the present study, a multiobjective nonlinear transportation problem is formulated, wherein the parameters involved in the objective functions are assumed to be fuzzy and both supply and demand are stochastic. Supply and demand are assumed to follow the exponential distribution. After converting the problem into an equivalent deterministic form, the multiobjective problem is solved using a neutrosophic compromise programming approach. A comparative study indicates that the proposed approach provides the best compromise solution, which is significantly better than the one obtained using the fuzzy programming approach.
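The conversion of exponentially distributed supply and demand into a deterministic equivalent can be sketched with chance constraints. The bounds below follow from the exponential survival and distribution functions; the specific formulation used in the article may differ, and all numbers are invented:

```python
import math

# For supply S ~ Exp(mean mu_s), requiring P(shipped <= S) >= beta gives
#   shipped <= -mu_s * ln(beta),
# and for demand D ~ Exp(mean mu_d), requiring P(received >= D) >= beta gives
#   received >= -mu_d * ln(1 - beta).
def supply_bound(mu_s, beta):
    return -mu_s * math.log(beta)

def demand_bound(mu_d, beta):
    return -mu_d * math.log(1.0 - beta)

ship_max = supply_bound(100.0, 0.95)   # upper bound on shipments
recv_min = demand_bound(80.0, 0.95)    # lower bound on deliveries
```

Note how demanding a high confidence level shrinks the usable supply drastically, which is exactly why stochastic parameters cannot simply be replaced by their means.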
EN
An adsorber in which sorption processes occur is one of the key components of an adsorption chiller. Precise real-time monitoring of, and supervision over, these processes is particularly important to ensure their proper execution. The article describes the experimental stand used for measuring the adsorber's operating parameters and analyses pressure measurement uncertainties, taking into account the impact of temperature on the test system filled with an adsorbent in the form of silica gel, while also considering the influence of other factors (e.g. the environment, the A/A and A/D conversion, or data processing) on the measurement uncertainties. A complex analysis of uncertainties was carried out, including the results of the statistical analysis of the measurement data obtained from long-term experimental tests of the object and the uncertainties of the pressure measuring chain evaluated by the type B method, considering interactions between the system components and the temperature impact on the propagation of uncertainties. As part of the analysis, the characteristic stages of the data collection and processing operations related to the sampling rate and measurement intervals were separated. The article presents the prototype test stand and an original pressure measurement system for the verification of a single-bed adsorber working below 10 hPa. The novel construction of a single-bed adsorber was used as a test object. Furthermore, the developed algorithm of the research method implemented in the system is discussed and positively verified.
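The combination of statistically evaluated (type A) and type B uncertainty contributions follows the GUM root-sum-of-squares rule. A minimal sketch with invented numbers, not taken from the article's pressure measuring chain:

```python
import math

# Combine a type-A component (standard deviation of the mean from
# repeated observations) with type-B components (e.g. transducer spec,
# A/D quantization) in quadrature, per the GUM.
def combined_standard_uncertainty(u_type_a, u_type_b_components):
    return math.sqrt(u_type_a**2 + sum(u**2 for u in u_type_b_components))

u_a = 0.12            # hPa, type A (illustrative)
u_b = [0.05, 0.08]    # hPa, type B contributions (illustrative)
u_c = combined_standard_uncertainty(u_a, u_b)
U95 = 2.0 * u_c       # expanded uncertainty with coverage factor k = 2
```

The quadrature sum assumes the contributions are uncorrelated; the article's point is precisely that interactions between system components and temperature can violate this assumption and must be modelled explicitly.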
EN
This work proposes a systematic assessment of stereophotogrammetry and noise-floor tests to characterize and quantify the uncertainty and accuracy of a vision-based tracking system. Two stereophotogrammetry sets with different configurations are designed, and their sensitivity is quantified through several assessments. The first assessment evaluates the image coordinates, stereo angle and reconstruction errors resulting from the stereophotogrammetry procedure; the second expresses the uncertainty from the variance and bias errors measured in the noise-floor test. These two assessments quantify the uncertainty, while the accuracy of the vision-based tracking system is assessed from three quasi-static tests on a small-scale specimen. The difference between the stereophotogrammetry sets and configurations, as indicated by the stereophotogrammetry and noise-floor assessments, leads to a significant result: the first stereophotogrammetry set measures an RMSE of 3.6 mm while the second set identifies an RMSE of only 1.6 mm. The results of this work recommend a careful and systematic assessment of stereophotogrammetry and noise-floor test results to quantify the uncertainty before the real test, in order to achieve high displacement accuracy of the vision-based tracking system.
Content available: Uogólniona metoda najmniejszych kwadratów (Generalized least squares method)
PL
The article presents a generalized approach to the well-known least squares method used in metrological practice. The determined uncertainties of the measurement points and the correlations between the measured variables form a symmetric covariance matrix; its inverse, multiplied on the left and on the right by the error vectors of both random variables, constitutes the objective (criterion) function. To obtain the maximum value of the likelihood function and solve the complex problem of minimizing the criterion function, an original way of reducing the criterion function to a unary, numerically evaluated relationship is presented, in which the only variable is the sought slope of the regression line. The article contains basic information about this type of linear regression, for which the best-fitted straight line minimizes the objective function. A computational example shows the full procedure of numerically fitting a straight line to a given set of measurement points with given uncertainties and correlation coefficients forming the covariance matrix.
EN
The paper presents a generalized approach to the well-known least squares method used in metrological practice. To solve the complex problem of minimizing the objective function and obtain the maximum value of the likelihood function, an original way of reducing this function to a unary, numerically evaluated relationship is presented. The article also presents borderline cases with analytical solutions. A computational example shows the full procedure of numerically fitting a straight line to a given set of measurement points with given uncertainties and correlation coefficients forming the covariance matrix.
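The core algebra of a least squares fit with a full covariance matrix can be sketched compactly: the generalized least squares estimate is beta = (X^T U^-1 X)^-1 X^T U^-1 y, with parameter covariance (X^T U^-1 X)^-1. A minimal example with invented data and an invented banded covariance (this is the standard GLS formula, not necessarily the unary reduction described in the article):

```python
import numpy as np

# Fit y = a*x + b with correlated y-uncertainties described by a full
# covariance matrix U (all values illustrative).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.05, 1.02, 2.10, 2.95])

U = np.array([
    [0.010, 0.002, 0.000, 0.000],
    [0.002, 0.010, 0.002, 0.000],
    [0.000, 0.002, 0.010, 0.002],
    [0.000, 0.000, 0.002, 0.010],
])

X = np.column_stack([x, np.ones_like(x)])   # design matrix [x, 1]
Ui = np.linalg.inv(U)
N = X.T @ Ui @ X                            # normal-equations matrix
beta = np.linalg.solve(N, X.T @ Ui @ y)     # GLS estimate
cov_beta = np.linalg.inv(N)                 # covariance of (a, b)
a_hat, b_hat = beta
u_a, u_b = np.sqrt(np.diag(cov_beta))       # standard uncertainties
```

When both coordinates carry uncertainty, as in the article, the weighting additionally depends on the slope, which is why the criterion there must be minimized numerically over the slope.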
EN
Tests carried out in accredited laboratories for the purposes of construction appraisal may include tests on samples taken in situ, tests carried out in situ, and tests using computational methods. Accredited laboratories are required to apply standards which, among other things, guarantee that the uncertainty of test results is determined, enabling assessment of the risk related to evaluations and expert opinions on the technical condition of buildings.
EN
A generalized workflow of the scientific process requires data to be obtained, preprocessed, integrated, optionally transformed, modelled and finally interpreted in order to understand the underlying process. This procedure is affected by both objective and subjective uncertainties. In parallel with the development of geostatistics, the role of uncertainty has been widely investigated in the geosciences, leading to the introduction of new concepts taken, for example, from thermodynamics, such as entropy. Predicting the subsurface is an especially thankless effort, as the data come from spatially very limited direct sources. The following paper provides a review of various applications of the Shannon entropy theorem in geoscience. Information entropy, initially proposed by Shannon (1948), provides an objective measure of overall system uncertainty. Significant attention has been focused on applying Shannon entropy to provide an objective measure of joint system uncertainty and to visualize its spatial distribution. The area of extensively drilled Eocene amber-bearing deposits located in the Lubelskie voivodeship was selected as a case study to investigate the predictive quality of stochastic lithofacies models. The importance of adding secondary variables to a stochastic model is also reviewed. Adding new data and rerunning the simulation allows its impact on the predictability of the stochastic model to be assessed. The most important conclusion from the study is that the deposition of amber-bearing lithofacies occurred mostly in the northern part of the investigated area, as also shown by the ongoing exploitation of the deposit.
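Shannon entropy as a per-cell uncertainty measure is straightforward to compute: given facies probabilities estimated from a set of stochastic realizations, the entropy quantifies how uncertain the lithofacies prediction is in each cell. A minimal sketch (the probability vectors below are illustrative, not from the Lubelskie case study):

```python
import numpy as np

def shannon_entropy(p, base=2.0):
    """Entropy of a discrete probability vector; 0*log(0) is taken as 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)) / np.log(base))

h_certain = shannon_entropy([1.0, 0.0])               # all realizations agree
h_max2 = shannon_entropy([0.5, 0.5])                  # two equally likely facies
h_max4 = shannon_entropy([0.25, 0.25, 0.25, 0.25])    # four equally likely facies
```

Mapping this value cell by cell over the model grid yields the spatial distribution of joint system uncertainty discussed in the abstract: zero where all realizations agree, maximal (log2 of the number of facies) where they are evenly split.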
EN
Background: Socio-cyber-physical systems (SCPSs) are a type of cyber-physical systems with social concerns. Many SCPSs, such as smart homes, must be able to adapt to reach an optimal symbiosis with users and their contexts. The Systems Modeling Language (SysML) is frequently used to specify ordinary CPSs, whereas goal modeling is a requirements engineering approach used to describe and reason about social concerns. Objective: This paper aims to assess existing modeling techniques that support adaptation in SCPSs, and in particular those that integrate SysML with goal modeling. Method: A systematic literature review presents the main contributions of 52 English articles selected from five databases that use both SysML and goal models (17 techniques), SysML models only (11 techniques), or goal models only (8 techniques) for analysis and self-adaptation. Result: Existing techniques have provided increasingly better modeling support for adaptation in a SCPS context, but overall analysis support remains weak. The techniques that combine SysML and goal modeling offer interesting benefits by tracing goals to SysML (requirements) diagrams and influencing the generation of predefined adaptation strategies for expected contexts, but few target adaptation explicitly and most still suffer from a partial coverage of important goal modeling concepts and of traceability management issues.
EN
Ductile iron is a material that is very sensitive to the conditions of crystallization. Because of this, the cast iron property data obtained in tests differ significantly, and the resulting sample data sets are contradictory, i.e. they contain inconsistent observations in which, for the same set of input data, the output values differ significantly. The aim of this work is to determine the possibility of building rule models under conditions of significant data uncertainty. The paper attempts to determine the impact of the presence of contradictory data in a data set on the results of process modeling with rule-based methods. The study used the well-known dataset (Materials Algorithms Project Data Library, n.d.) pertaining to the retained austenite volume fraction in austempered ductile cast iron. Two rule-based modeling methods were used to model the volume of retained austenite: the decision tree algorithm (DT) and the rough set algorithm (RST). The paper demonstrates that the number of inconsistent observations depends on the adopted data discretization criteria. The influence of contradictory data on rule generation in both algorithms is considered, and the problems that contradictory data can cause in rule modeling are indicated.
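The dependence of the number of inconsistent observations on the discretization criteria can be demonstrated directly: an observation is counted as inconsistent when its discretized inputs coincide with another observation's while the discretized outputs differ. A small sketch with invented data (the counting rule here is one plausible definition, not necessarily the article's exact one):

```python
import numpy as np

def count_inconsistent(X, y, n_bins):
    """Count observations whose discretized inputs collide with a
    different discretized output value."""
    def discretize(v, bins):
        v = np.asarray(v, dtype=float)
        span = np.ptp(v, axis=0)
        d = np.floor((v - v.min(axis=0)) / np.where(span == 0, 1, span) * bins)
        return np.clip(d, 0, bins - 1).astype(int)

    Xd = discretize(X, n_bins)
    yd = discretize(y, n_bins)
    groups = {}
    for key, out in zip(map(tuple, Xd), yd):
        groups.setdefault(key, set()).add(int(out))
    bad = {k for k, outs in groups.items() if len(outs) > 1}
    return sum(tuple(row) in bad for row in Xd)

X = np.array([[0.00], [0.01], [1.00]])   # invented inputs
y = np.array([0.0, 1.0, 0.0])            # invented outputs
coarse = count_inconsistent(X, y, 2)     # coarse bins merge points 1 and 2
fine = count_inconsistent(X, y, 100)     # fine bins keep them apart
```

With two bins, the first two observations fall into the same input bin but different output bins and are flagged as contradictory; with one hundred bins the contradiction disappears, illustrating the paper's conclusion.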
EN
The article discusses a new mathematical method for comparing the consistency of two particle size distribution curves. The proposed method is based on the concept of the distance between two grain-size curves. To investigate whether the distances between the particle size distribution curves are statistically significant, the modulus-chi statistical test is proposed. As an example, the consistency of three sieve curves taken from the earth dam in Pieczyska on the Brda River in Poland was examined, establishing from which point of the dam the soil was washed away. It should be remembered, however, that the grain size of the soil built into the dam does not have to be identical to that of the washed-out soil, because the fine fractions are washed away first, while the larger ones may remain in the body of the earth structure.
PL
The problem of comparing grain-size distribution curves arose after a small amount of soil was washed out of the body of the Pieczyska dam onto the downstream side in 2020. Since soil had also been washed out of the lower part of the dam in 2016, the question was asked whether, based on the grain-size distribution curve, it could be established that the washed-out material came from the same place. To examine the consistency of the grain-size curves, a statistical method based on the concept of the distance between the curves was proposed. The distance between grain-size curves is understood as the sum of the absolute values of the differences in the percentage of mass passing a given sieve, divided by the standard deviation of the determination of that percentage (formulas (2.1)-(2.4)). Formula (2.4) simplifies when the percentage of mass is calculated with the same standard deviation for each curve (2.5). The standard deviation is identified with the standard uncertainty resulting from measuring the mass of soil collected on the sieves. The successive steps of calculating this value are given in formulas (3.1)-(3.5). Formula (3.6) is the final result of the standard uncertainty calculation; it depends on the measurement uncertainty of the mass and on the mass collected on the sieve itself. Tables 1 and 2 present the uncertainty values calculated for typical masses of soil samples (100-500 g) and percentages of soil on a sieve, for balance measurement accuracies of 0.1 and 0.5 g. The sieve curves of the samples washed out in 2016 and 2020 were then compared with the archival curve from 1974 representing the typical soil of the dam body (Fig. 1).
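The distance described above, for the simplified case of a common standard uncertainty, can be sketched numerically. The sieve data below are invented, not the Pieczyska curves, and the standardization of the modulus-chi statistic via the half-normal moments is an approximation assumed here for illustration:

```python
import math
import numpy as np

# Percent passing on each sieve for two grain-size curves (invented).
p1 = np.array([100.0, 92.0, 71.0, 43.0, 18.0, 5.0])
p2 = np.array([100.0, 90.0, 68.0, 40.0, 16.0, 4.0])
u = 1.5   # common standard uncertainty of a percentage point (invented)

# Distance: sum of absolute differences scaled by the uncertainty.
d = float(np.sum(np.abs(p1 - p2)) / u)

# Under the hypothesis of identical curves, d behaves like a sum of n
# absolute values of standard normal variables ("modulus-chi"). As a
# rough check, standardize with the half-normal mean sqrt(2/pi) and
# variance (1 - 2/pi) and compare with 1.96.
n = len(p1)
z = (d - n * math.sqrt(2 / math.pi)) / math.sqrt(n * (1 - 2 / math.pi))
consistent = z < 1.96
```

Here the standardized statistic stays below the critical value, so the two curves would not be declared significantly different at this uncertainty level.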
EN
Classically, local deterministic optimization techniques have been employed to solve the nonlinear gravity inversion problem. Local search methods are easy to implement and show high rates of convergence, but in highly nonlinear cases such as geophysical problems they require a reliable initial model adequately close to the true model. Recently, global optimization methods have shown promising results as an alternative to classical inversion methods. Each global optimization algorithm has unique benefits and faults; therefore, applying different combinations of them is one proposed way of overcoming their individual limitations. In this research, the design and implementation of a hybrid method combining the imperialist competitive algorithm (ICA) and the firefly algorithm (FA), as a tool for two-dimensional nonlinear modeling of gravity data and a substitute for local optimization methods, were investigated. The hybrid of the ICA and FA algorithms (known as ICAFA) is a modified form of the ICA algorithm based on the firefly algorithm. This modification increases the exploratory capability of the algorithm and improves its convergence rate. The inversion technique was first successfully tested on a synthetic gravity anomaly originating from a simulated sedimentary basin model, both with and without white Gaussian noise (WGN). Finally, the method was applied to the Bouguer anomaly from a real gravity profile in the Moghan sedimentary basin (Iran). The results of this modeling were compatible with previously published works comprising both seismic analysis and other gravity interpretations. To estimate the uncertainty of the solutions, several inversion runs were conducted independently, and the results were in line with the final solution.
EN
In linear programming, sensitivity analysis of the parameters is often more important than the optimal solution itself. In traditional sensitivity analysis, a range of changes is found for a coefficient within which the optimal solution is maintained; such changes may concern the objective function coefficients, the right-hand-side values, or the technological coefficients of the constraints. When real-world problems are highly inaccurate due to limited data and information, grey systems theory is used to perform the required optimisation. Several algorithms for solving grey linear programming have been developed to handle the inaccuracies in the model parameters; these methods are complex and require much computational time. In this paper, the sensitivity of a series of grey linear programming problems is analysed using the definitions and operators of grey numbers. The uncertainties in the parameters are preserved in the solutions obtained from the sensitivity analysis. To evaluate the efficiency and usefulness of the developed method, an applied numerical example is solved.
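The basic operators of interval grey numbers referred to above can be sketched in a few lines; the class and values below are illustrative only, not the article's formulation:

```python
# An interval grey number [lo, hi] with the elementary operators used in
# grey arithmetic: endpoints combine so that the result encloses every
# possible value of the operands.
class Grey:
    def __init__(self, lo, hi):
        self.lo, self.hi = (lo, hi) if lo <= hi else (hi, lo)

    def __add__(self, other):
        return Grey(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Grey(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Grey(min(p), max(p))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# Uncertainty propagates through the arithmetic instead of collapsing
# to a single number, as the abstract requires of the solutions.
c = Grey(2.0, 3.0) * Grey(4.0, 5.0)    # grey coefficient times grey activity
s = Grey(10.0, 12.0) - Grey(1.0, 4.0)  # slack of a grey constraint
```

Note that subtraction uses the opposite endpoints, so intervals generally widen: grey arithmetic is conservative by construction.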
EN
In this work, the authors investigated the effect of the Depth of Field (DoF) reduction that arises when small objects are acquired with a photogrammetry-based system using a Digital Single Lens Reflex (DSLR) camera and the structure from motion (SfM) algorithm. This kind of measuring instrument is very promising for industrial metrology according to the paradigms of the fourth industrial revolution. However, when the magnification level is increased, as is necessary for the reconstruction of sub-millimetric features, the DoF correspondingly decreases, with possible effects on the reconstruction accuracy. The effect of the DoF reduction was therefore analysed through the reconstruction of a well-known artefact: the step gauge. The analysis considered the theory behind the DoF concept, the analysis of the 2D images that are the input of the photogrammetric reconstruction and, finally, the results in terms of dimensional verification of the reconstructed step gauge.
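The DoF collapse with magnification can be illustrated with the standard close-up approximation DoF = 2·N·c·(m+1)/m², where N is the f-number, c the circle of confusion and m the magnification. The numbers below are generic, not the camera settings used in the article:

```python
# Close-up depth of field approximation (all quantities in mm except the
# dimensionless f-number and magnification).
def depth_of_field_mm(f_number, coc_mm, magnification):
    return 2.0 * f_number * coc_mm * (magnification + 1.0) / magnification**2

dof_low = depth_of_field_mm(8, 0.02, 0.1)   # low magnification
dof_high = depth_of_field_mm(8, 0.02, 1.0)  # 1:1 macro reproduction
```

Going from m = 0.1 to m = 1 at the same aperture shrinks the DoF from tens of millimetres to well under a millimetre, which is why sub-millimetric features are so hard to keep in focus across a whole image set.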
EN
Noise is a fundamental metrological characteristic of the instrument in surface topography measurement. Therefore, measurement noise should be thoroughly studied in practical measurement to understand instrument performance and optimize measurement strategy. This paper investigates the measurement noise at different measurement settings using structured illumination microscopy. The investigation shows that the measurement noise may scatter significantly among different measurement settings. Eliminating sample tilt, selecting low vertical scanning interval and high exposure time is helpful to reduce the measurement noise. In order to estimate the influence of noise on the measurement, an approach based on metrological characteristics is proposed. The paper provides a practical guide to understanding measurement noise in a wide range of applications.
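One common way to quantify measurement noise as a metrological characteristic (not necessarily the article's exact procedure) is the subtraction method: two repeated topographies of the same surface are subtracted, and the noise Sq is Sq(difference)/sqrt(2), since both measurements carry independent noise. A synthetic sketch:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic topography measured twice with independent additive noise
# (sigma = 0.01, in arbitrary height units).
surface = np.sin(np.linspace(0, 20, 100_000))
z1 = surface + rng.normal(0, 0.01, surface.size)
z2 = surface + rng.normal(0, 0.01, surface.size)

# The repeatable surface cancels in the difference; what remains is the
# noise of both measurements, so divide the Sq of the difference by sqrt(2).
diff = z1 - z2
sq_noise = float(np.std(diff) / np.sqrt(2.0))
```

Repeating this at each measurement setting (tilt, scanning interval, exposure time) gives exactly the kind of setting-by-setting noise comparison the abstract describes.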
EN
Reliable measurement uncertainty is a crucial part of the conformance/nonconformance decision-making process in the field of quality control in manufacturing. The conventional GUM method cannot be applied to CMM measurements, primarily because of the lack of an analytical relationship between the input quantities and the measurand. This paper presents a calibration uncertainty analysis in commercial CMM-based coordinate metrology. For the case study, the hole-plate calibrated by the PTB is used as a workpiece. The paper focuses on thermo-mechanical errors, which directly affect the dimensional accuracy of parts made by high-precision manufacturers. Our findings highlight some practical issues related to the importance of maintaining thermal equilibrium before the measurement. The authors conclude that, for this example, the thermal influence dominates the overall uncertainty budget of the CMM measurement result. The improved calibration uncertainty assessment technique considering thermal influence is described in detail for a wide range of CMM users.
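The dominant thermal contribution can be sketched with first-order propagation: a length L at a temperature offset dT from 20 °C expands by L·alpha·dT, so uncertainties in temperature and in the expansion coefficient propagate as u = L·sqrt((alpha·u_T)² + (dT·u_alpha)²). The values below are generic (a steel gauge length), not the PTB hole-plate data:

```python
import math

# First-order thermal uncertainty contribution to a measured length.
def thermal_uncertainty_mm(length_mm, alpha, u_temp, d_temp=0.0, u_alpha=0.0):
    return length_mm * math.sqrt((alpha * u_temp)**2 + (d_temp * u_alpha)**2)

L = 400.0          # mm, nominal length (illustrative)
alpha = 11.5e-6    # 1/K, thermal expansion of steel
u_T = 0.5          # K, standard uncertainty of the part temperature
u_th = thermal_uncertainty_mm(L, alpha, u_T)   # about 2.3 micrometres
```

Even a half-kelvin temperature uncertainty on a 400 mm length contributes micrometre-level uncertainty, which is why thermal equilibrium before measurement matters so much on high-precision CMMs.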
EN
The paper deals with the design approach of a subdefinite mechatronic system and focuses on the sizing stage of a gearbox of a wind turbine based on the interval computation method. Indeed, gearbox design variables are expressed by intervals to take into account the uncertainty in the estimation of these parameters. The application of the interval computation method allows minimizing the number of simulations and enables obtaining a set of solutions instead of a single one. The dynamic behavior of the gearbox is obtained using the finite element method. The challenge here is to get convergent results with intervals that reflect the efficiency of the applied method. Thus, several mathematical formulations have been tested in static study and evaluated in the case of a truss. Then the interval computation method was used to simulate the behavior of the wind turbine gearbox.
EN
The output of distributed generation (DG) is highly random, and this randomness strongly influences the division of islands. To simulate the impact of DG output on island division, this study proposes an island division method that considers the randomness of DG output. The basic idea is as follows. First, Monte Carlo sampling was used to obtain the output power of DG under different confidence levels to simulate the randomness of DG output. Then, a multi-objective, multi-constraint island division model considering the randomness of DG output was established. The niche genetic algorithm was used to solve the model, and the effectiveness of the proposed model and algorithm was verified through example analyses. The results show that the risk reserve power introduced by simulating the randomness of DG output is inversely proportional to the confidence level. The minimum system node voltage after islanding is 0.9495 pu, which meets the constraint requirements. Under the same conditions, compared with an island division method that does not consider DG randomness, the proposed method achieves not only larger total load recovery and a higher priority-load recovery rate but also a higher DG utilization rate, which can meet the needs of practical applications. This study provides a reference for establishing and solving the islanding model of a distribution network with DG.
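The Monte Carlo step can be sketched as follows: sample DG output from an assumed distribution (a normal truncated at zero is used here purely for illustration; the article does not specify it) and take the power level exceeded with the required confidence. The gap between the expected and guaranteed power is the risk reserve:

```python
import numpy as np

rng = np.random.default_rng(42)

def guaranteed_power(mean_kw, std_kw, confidence, n=200_000):
    """Monte Carlo estimate of the DG output exceeded with the given
    probability (illustrative output model, truncated at zero)."""
    samples = np.clip(rng.normal(mean_kw, std_kw, n), 0.0, None)
    return float(np.quantile(samples, 1.0 - confidence))

p50 = guaranteed_power(100.0, 20.0, 0.50)   # median output
p90 = guaranteed_power(100.0, 20.0, 0.90)   # output exceeded 90% of the time
reserve_90 = 100.0 - p90                    # risk reserve at 90% confidence
```

Raising the confidence level lowers the power that can be counted on and thus enlarges the risk reserve, matching the inverse relationship reported in the abstract.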
EN
In this paper, the probabilistic behavior of plain concrete beams subjected to flexure is studied using a continuous mesoscale model. The model is two-dimensional; aggregate and mortar are treated as separate constituents with their own characteristic properties. The aggregate is represented by ellipses generated under prescribed grading curves. The ellipses are placed randomly, so probabilistic analysis of the model is required; Monte Carlo simulation with 20 realizations is used to represent the geometric uncertainty. The nonlinear behavior is simulated with an isotropic damage model for the mortar, while the aggregate is assumed to be elastic. The softening behavior of the isotropic damage model is defined in terms of fracture mechanics parameters. This damage model is compared with the fixed crack model in a macroscale study before being used in the mesoscale model. It is then used in the mesoscale model to simulate a flexure test; the results are compared with experimental data and show good agreement. The probabilistic behavior of the model response is presented through the standard deviation, moment parameters and cumulative probability density functions at different loading stages, showing variation of the probabilistic characteristics between the pre-peak and post-peak behavior of the load-CMOD curves.