
Results found: 9

Search results
Searched for:
in keywords: modelowanie danych
EN
The diagnosis of systems is one of the major steps in their control; its purpose is to determine the possible presence of dysfunctions affecting the sensors and actuators associated with a system, but also the internal components of the system itself. The diagnosis must therefore focus, on the one hand, on the detection of a dysfunction and, on the other hand, on its physical localization, by specifying the component in a faulty situation, and then on its temporal localization. In this contribution, the emphasis is on the use of software redundancy applied to the detection of anomalies within the measurements collected in the system. The systems considered here are characterized by non-linear behaviours whose model is not known a priori. The proposed strategy therefore focuses on processing the data acquired on the system, for which a healthy operating regime is assumed to be known. Diagnostic procedures usually use such data, corresponding to good operating regimes, by comparing them with new situations that may contain faults. Our approach is fundamentally different in that the good-functioning data allow us, by means of a non-linear prediction technique, to generate a large amount of data reflecting all the faults under different excitation situations of the system. The database thus created characterizes the dysfunctions and then serves as a reference to be compared with real situations. This comparison, which then makes it possible to recognize the faulty situation, is based on a technique for evaluating the principal angle between subspaces of system dysfunction situations. An important point of the discussion concerns the robustness and sensitivity of the fault indicators. In particular, it is shown how non-linear combinations can increase the magnitude of these indicators so as to facilitate the location of faults.
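The abstract above compares a reference subspace of healthy behaviour with subspaces of observed situations via angles between subspaces. A minimal sketch of that comparison is given below; the data, dimensions and the fault construction are purely illustrative assumptions, not the paper's actual system or indicator.

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles (radians) between the column spaces of A and B."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    # Singular values of Qa^T Qb are the cosines of the principal angles.
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

rng = np.random.default_rng(0)
# Illustrative "healthy regime" data: 100 samples of 3 measurement channels.
healthy = rng.standard_normal((100, 3))
# A simulated fault: one channel drifts along a new, independent direction.
fault_dir = rng.standard_normal((100, 1))
faulty = np.hstack([healthy[:, :2], healthy[:, 2:3] + 5.0 * fault_dir])

angles_same = principal_angles(healthy, healthy)
angles_fault = principal_angles(healthy, faulty)
# Identical subspaces give near-zero angles; the fault opens a visible angle.
print(angles_same.max(), angles_fault.max())
```

The largest principal angle acts here as a fault indicator: it stays near zero for healthy data and grows when a measurement leaves the reference subspace.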
EN
Nuclear power plant process systems have developed greatly over the years. As a large amount of data is generated from Distributed Control Systems (DCS), with fast computational speed and large storage facilities, smart systems have taken over analysis of the process. These systems are built using data mining concepts to understand the various stable operating regimes of the processes, identify key performance factors, make estimates and suggest to operators how to optimize the process. Association rule mining is a data-mining concept frequently used in e-commerce for suggesting closely related and frequently bought products to customers. It also has very wide application in industries such as bioinformatics, nuclear sciences, trading and marketing. This paper deals with the application of these techniques to the identification and estimation of key performance variables of a lubrication system designed for a 2.7 MW centrifugal pump used for reactor cooling in a typical 500 MWe nuclear power plant. The paper dwells in detail on predictive model building using three models based on association rules for steady-state estimation of key performance indicators (KPIs) of the process. It also covers the evaluation of the prediction models with various metrics and the selection of the best model.
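Association rules are mined from transactions using support and confidence thresholds. The sketch below is a deliberately minimal illustration of those two measures over single-item antecedents, not the full Apriori algorithm or the paper's prediction models; the discretized sensor states and their names are hypothetical.

```python
from itertools import combinations

def association_rules(transactions, min_support=0.5, min_confidence=0.7):
    """Mine single-item rules A -> B with given support/confidence thresholds."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    support = {}
    for k in (1, 2):
        for itemset in combinations(items, k):
            s = sum(1 for t in transactions if set(itemset) <= t) / n
            if s >= min_support:
                support[itemset] = s
    rules = []
    for itemset, s_ab in support.items():
        if len(itemset) != 2:
            continue
        a, b = itemset
        for ant, con in ((a, b), (b, a)):
            confidence = s_ab / support[(ant,)]
            if confidence >= min_confidence:
                rules.append((ant, con, s_ab, confidence))
    return rules

# Hypothetical discretized states of a pump lubrication system; the variable
# names are illustrative, not taken from the paper's DCS data.
transactions = [
    {"oil_temp_high", "vibration_high"},
    {"oil_temp_high", "vibration_high"},
    {"oil_temp_high", "vibration_high", "oil_flow_low"},
    {"oil_flow_low"},
]
rules = association_rules(transactions)
print(rules)
```

Each rule is reported as (antecedent, consequent, support, confidence); production miners extend the same idea to longer itemsets and candidate pruning.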
EN
Recognition of the subsoil in areas threatened by discontinuous deformation associated with natural and mining voids can be carried out by various geophysical methods. The purpose of such research, apart from confirming the existence of voids, is to determine their spatial extent. This is not a simple issue, regardless of the geophysical method used. This paper discusses the possibilities of geometrizing the localization of karst phenomena using the ground penetrating radar (GPR) method, by the example of a karst cave as a natural void. The area of data acquisition is located on limestone formations with numerous karst forms. The object of study is the main hall of the karst cave, with a height of up to 3 m, located at a depth of 3 to 7 m below the surface. Such a location and shape of the subsurface structure made it possible for the author to perform a wide range of research, whose original aspects are presented in this paper. The shape of the hall was obtained using terrestrial laser scanning (TLS). The GPR data were obtained with a 250 MHz shielded antenna that was directly positioned using a robotized total station with automatic target tracking, so that the GPR and geodetic data were immediately obtained in a uniform coordinate system. The accuracy of the data obtained in this way is discussed in the paper. The author's original algorithm for processing GPR data into a point cloud is presented. Based on the results obtained, it was possible to compare the GPR signal, which represents the shape of the cave hall, with its image in the form of a point cloud from terrestrial laser scanning. A unique part of this paper is the selection of filtration procedures and their parameters for optimal GPR data processing, which are discussed and documented in a way that goes beyond standard filtration procedures.
A significant contribution is the analysis carried out on the data obtained in the field and on model data generated using the finite difference method. Modeling was carried out for two wave sources: an exploding reflector and a point source. The presented methodology, and the comparison between the actual shape of the cave, the GPR field data and the model data, allowed the author to draw many conclusions about the possibilities of geometrizing the shape of subsurface voids determined by the GPR method.
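The finite-difference idea behind such wave modelling can be shown in its simplest form. The sketch below propagates a point-source impulse through a 1-D scalar wave equation with an explicit leapfrog stencil; it is a didactic assumption-laden toy (the paper's GPR simulations are electromagnetic and at least 2-D, and all parameters here are invented), but the stencil principle is the same.

```python
import numpy as np

# 1-D scalar wave equation u_tt = c^2 u_xx, explicit leapfrog scheme.
nx, nt = 201, 400
dx, c = 1.0, 0.5
dt = 0.8 * dx / c          # CFL-stable time step (Courant number 0.8)
r2 = (c * dt / dx) ** 2

u_prev = np.zeros(nx)
u_curr = np.zeros(nx)
u_curr[nx // 2] = 1.0      # impulsive "point source" in the middle

for _ in range(nt):
    u_next = np.zeros(nx)
    # Second-order central differences in both time and space.
    u_next[1:-1] = (2 * u_curr[1:-1] - u_prev[1:-1]
                    + r2 * (u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]))
    u_prev, u_curr = u_next, u_curr[:]
    u_prev, u_curr = u_curr, u_next

print(np.abs(u_curr).max())
```

The zeroed boundary points act as fixed (reflecting) walls; an exploding-reflector simulation would instead seed the initial field along the reflector geometry and halve the velocity.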
EN
Chemometric methods of data analysis make it possible to resolve complex multi-component systems by decomposing a measured signal into the contributions of pure substances. The mathematical procedure of such a decomposition is called empirical data modelling. The main aim of this article is to provide some basic information on chemometric analysis. Chemometric techniques are divided into three categories, according to the premises they assume. Hard modelling is based on the assumption that the measured dataset can be described a priori by generally accepted physical or chemical laws, expressed in analytical forms of mathematical functions, albeit with unknown values of the parameters [1]. The numerical values of those constants are optimised using procedures such as least squares curve fitting [1, 2]. Once the explicit form of the equations is found, the whole system of data can be resolved. Therefore, the white types of data modelling are often used for kinetic measurements [3–8] and the analysis of fluorescence quenching [9–13]. A completely different approach to data modelling is offered by the so-called soft chemometric methods [14–20]. Those techniques do not require any presumptions; solutions obtained for the considered system are thus far less constrained. The black types of analysis make use not only of least squares fitting procedures [18, 19], but also of other geometrical optimisation algorithms [16, 17]. The results of that approach suffer, however, from one main drawback: the final outcome is not unique – the system is described by a set of feasible solutions. As a consequence, soft data modelling is generally applied to resolve empirical data which cannot easily be expressed by an explicit form of a function; examples of such measurement techniques are chromatography and volumetry.
However, if some conjecture can be made about the measurement system and the obtained data, it is possible to stiffen the black methods by applying white constraints [20]. These types of chemometric analysis are called grey, or hard-soft, and are a practical combination of model-free optimisation with limitations on the feasible solutions resulting from conformity with physical or chemical laws. Because data modelling provides an opportunity for the simultaneous identification of several components of the analysed mixture, the chemometric procedures, although not yet very popular, are extremely powerful research tools.
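Hard ("white") modelling as described above can be illustrated with a kinetic toy example: a first-order decay law c(t) = c0·exp(−kt) is assumed a priori, and only its parameters are recovered by least squares. The rate constant, data and noise level below are invented for illustration.

```python
import numpy as np

# Assumed physical law: first-order kinetics c(t) = c0 * exp(-k * t).
rng = np.random.default_rng(1)
k_true, c0_true = 0.30, 2.0
t = np.linspace(0, 10, 50)
# Simulated measurements with 1 % multiplicative noise.
c = c0_true * np.exp(-k_true * t) * (1 + 0.01 * rng.standard_normal(t.size))

# Linearising ln c = ln c0 - k t turns the fit into ordinary least squares.
slope, intercept = np.polyfit(t, np.log(c), 1)
k_fit, c0_fit = -slope, np.exp(intercept)
print(k_fit, c0_fit)
```

Because the functional form is fixed in advance, the solution is unique, in contrast to the feasible-solution sets produced by soft (black) methods.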
EN
The proposed method, called Probabilistic Features Combination (PFC), is a method of multi-dimensional data modeling, extrapolation and interpolation using a set of high-dimensional feature vectors. The method is a hybridization of numerical and probabilistic methods. Identification of faces or fingerprints needs modeling, and each model of the pattern is built by choosing a multi-dimensional probability distribution function and a feature combination. PFC modeling via nodes combination, with the parameter γ as an N-dimensional probability distribution function, enables data parameterization and interpolation for feature vectors. Multidimensional data are modeled and interpolated via nodes combination and different functions as probability distributions for each feature treated as a random variable: polynomial, sine, cosine, tangent, cotangent, logarithm, exponent, arc sin, arc cos, arc tan, arc cot or power function.
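The nodes-combination idea can be sketched in one dimension: values between consecutive nodes are blended with a weight γ = F(α), where F is a distribution-style function mapping [0, 1] onto [0, 1]. This is a simplified illustrative reduction, not the published N-dimensional PFC formulation; the node data are invented.

```python
import numpy as np

def pfc_interpolate(nodes_x, nodes_y, x, F=lambda a: a):
    """Blend the two nodes bracketing x with weight gamma = F(alpha).

    F must satisfy F(0) = 0 and F(1) = 1 so that the nodes are reproduced;
    a 1-D illustrative sketch of the nodes-combination idea only.
    """
    i = int(np.clip(np.searchsorted(nodes_x, x) - 1, 0, len(nodes_x) - 2))
    alpha = (x - nodes_x[i]) / (nodes_x[i + 1] - nodes_x[i])
    gamma = F(alpha)
    return (1 - gamma) * nodes_y[i] + gamma * nodes_y[i + 1]

xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([0.0, 1.0, 0.0, 2.0])
# A linear F reproduces piecewise-linear interpolation; a power-function F
# bends the curve between the same nodes.
mid_linear = pfc_interpolate(xs, ys, 0.5)
mid_power = pfc_interpolate(xs, ys, 0.5, F=lambda a: a ** 2)
print(mid_linear, mid_power)
```

Swapping in sine, logarithmic or other distribution functions for F changes the shape of the reconstruction while keeping the same nodes, which is the degree of freedom the abstract refers to.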
PL
The author's Probabilistic Features Combination (PFC) method is used for interpolation and modelling of multidimensional data. Nodes are treated as characteristic points of the N-dimensional information to be reconstructed (e.g. an image). Multidimensional data are interpolated or reconstructed using probability distribution functions: power, polynomial, exponential, logarithmic, trigonometric and inverse trigonometric.
PL
The article presents the development of individual data-modelling notations with reference to the relational model, together with a comparative analysis of selected modelling notations. It also shows which modelling standards are most often implemented in the most popular CASE tools for data modelling. The article thus systematises knowledge about the existing notations used in data modelling with ERD and UML diagrams.
EN
The article presents the development of individual data modeling notations with reference to the relational model, and a comparative analysis of selected modeling notations. It also shows which modeling standards are most often implemented in the most popular CASE tools for data modeling. The author thus systematises existing knowledge about the notations used in data modeling by means of ERD and UML diagrams.
PL
The paper discusses conceptual-level data modelling, which is crucial to the usability and quality of the designed database. The conceptual data-modelling example illustrates problems that occur in the technological process of glass packaging production. The presented database model will be used to build an Intelligent Decision Support System developed for the needs of a manufacturing company in the glass industry. The system will classify product defects (glass containers) and select an appropriate method of eliminating defects arising during the production process.
EN
The paper discusses the problem of data modeling at the conceptual level, which is crucial to the usefulness and quality of the designed database. The example of conceptual-level data modeling illustrates problems that occur in the technological process of glass packaging production. The database model will be used in the construction of an Intelligent Decision Support System developed for the needs of a manufacturing company in the glass industry. This system will be used to classify product defects (glass containers) and to choose the appropriate method of eliminating defects generated during the manufacturing process.
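A conceptual model of the kind described above names entities and their relationships before any physical schema exists. The sketch below renders a plausible fragment as Python dataclasses; the entity and attribute names are hypothetical illustrations, not the paper's actual model.

```python
from dataclasses import dataclass, field

# Hypothetical conceptual entities for a glass-packaging defect system:
# a container batch has defects, each linked to candidate elimination methods.
@dataclass
class EliminationMethod:
    name: str
    description: str = ""

@dataclass
class Defect:
    code: str
    category: str                                  # e.g. "crack", "bubble"
    methods: list = field(default_factory=list)    # EliminationMethod links

@dataclass
class GlassContainer:
    batch_id: str
    defects: list = field(default_factory=list)    # Defect links

annealing = EliminationMethod("adjust annealing curve")
crack = Defect("D-01", "crack", methods=[annealing])
jar = GlassContainer("B-2024-001", defects=[crack])
print(jar.defects[0].methods[0].name)
```

In a relational implementation the two list-valued links would become one-to-many and many-to-many relationships; the conceptual level deliberately leaves such storage decisions open.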
PL
Empirical systems of interest to agricultural engineering are exceptionally complex. Multi-level modelling of the problem domain is an attempt to cope with this complexity in the cognitive process. A key issue in modelling the problem domain, and in designing the information systems used to investigate empirical systems, is carrying out the data-modelling phase correctly. This stage arises from efforts to understand ever more complex empirical systems, described by ever larger amounts of interrelated information.
EN
Empirical systems investigated within agricultural engineering are extremely complex. Multi-level modeling of a problem domain, extended by mapping the developed operational structures onto information systems, is a way to deal with this complexity. A crucial issue, becoming more and more pronounced in modeling a problem domain and in designing information systems oriented at the investigation of empirical systems, is to carry out the data modeling phase correctly. This stage results from substantial attempts to better understand empirical systems that are more and more complex and require more and more interrelated information. These attempts have been favoured by the appearance of new information technologies dedicated to the representation of data.
Informational systems designing and implementation
EN
Purpose: Problems which are likely to manifest themselves in the course of implementation cannot be envisaged at the design stage of an information system without employing the approach presented in this paper. Design/methodology/approach: All the procedures and processes need to be located through the application of both analysis and synthesis: a thorough decomposition into the most basic constituent elements, followed by their reintegration into a whole of mutually connected elements. The search for a method providing optimum results is solved theoretically as well as practically, given that any problem which may appear in the course of implementation of the system has been resolved through an analytic approach at the very start of the design stage. Such an approach provides the only viable way for an applied information system to be optimally utilized at minimum cost. Findings: This has been confirmed and proven by research monitoring and analysis of a number of information systems applied in various organizations, from shipyards, oil/petrochemical plants and steel mills to non-industrial organizations such as hospitals. Research limitations/implications: The analytical/synthetical approach led to the conclusion that all stages of design and development of an information system are of equal importance, so each and every one of them must be given equally careful consideration. Practical implications: By employing a data flow model, all complex processes have been solved, with all basic elements featured on the model together with the logically required mutual connections. The linkage and the flow sequence of the basic elements form the basis for successful problem resolution in any information system.
Originality/value: This scientific, engineering approach to problem solving in the operational use of a system is applicable in the most varied cases, such as where the design, development and implementation of an information system form the very basis of a sound and profitable business operation of an organization. Besides, the approach provides wide flexibility and range in problem solving, whereby problems can be predicted, located and interconnected.