System diagnosis is one of the major steps in system control. Its purpose is to determine the possible presence of dysfunctions affecting the sensors and actuators associated with a system, as well as the internal components of the system itself. Diagnosis must therefore focus, on the one hand, on detecting a dysfunction and, on the other, on localizing it physically, by identifying the faulty component, and then temporally. This contribution emphasizes the use of software redundancy for detecting anomalies in the measurements collected from the system. The systems considered here exhibit non-linear behaviour and their model is not known a priori. The proposed strategy therefore relies on processing data acquired from the system, for which a healthy operating regime is assumed to be known. Diagnostic procedures usually use such data from good operating regimes by comparing them with new situations that may contain faults. Our approach is fundamentally different: by means of a non-linear prediction technique, the good-functioning data are used to generate a large amount of data reflecting all the faults under different excitation conditions of the system. The database thus created characterizes the dysfunctions and then serves as a reference against which real situations are compared. This comparison, which makes it possible to recognize the faulty situation, is based on evaluating the principal angles between subspaces associated with system dysfunction situations. An important point of the discussion concerns the robustness and sensitivity of fault indicators. In particular, it is shown how non-linear combinations can amplify these indicators so as to facilitate the localization of faults.
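The subspace-comparison step described above hinges on principal angles between subspaces. The abstract does not give the computation, but a standard way to obtain the angles is via orthonormal bases and an SVD; the sketch below (matrix names, fault labels, and the max-angle decision rule are illustrative assumptions, not taken from the paper) shows the idea:

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles (radians) between the column spaces of A and B."""
    Qa, _ = np.linalg.qr(A)  # orthonormal basis of span(A)
    Qb, _ = np.linalg.qr(B)  # orthonormal basis of span(B)
    # Singular values of Qa^T Qb are the cosines of the principal angles.
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

def closest_fault(measured, references):
    """Pick the reference fault subspace closest to the measured-data subspace
    (here: smallest largest principal angle)."""
    return min(references,
               key=lambda name: principal_angles(measured, references[name]).max())
```

For example, the subspaces spanned by {e1, e2} and {e1, e3} in R^3 share one direction, so the angles come out as 0 and pi/2.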
Nuclear power plant process systems have developed greatly over the years. As a large amount of data is generated from Distributed Control Systems (DCS) with fast computational speed and large storage facilities, smart systems have taken over analysis of the process. These systems are built using data-mining concepts to understand the various stable operating regimes of the processes, identify key performance factors, make estimates and suggest how operators can optimize the process. Association rule mining is a data-mining concept frequently used in e-commerce for suggesting closely related and frequently bought products to customers. It also has very wide application in industries such as bioinformatics, nuclear science, trading and marketing. This paper deals with the application of these techniques for identification and estimation of key performance variables of a lubrication system designed for a 2.7 MW centrifugal pump used for reactor cooling in a typical 500 MWe nuclear power plant. The paper dwells in detail on predictive model building, using three models based on association rules, for steady-state estimation of key performance indicators (KPIs) of the process. It also covers the evaluation of the prediction models with various metrics and the selection of the best model.
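As a rough sketch of the underlying idea, an association rule A ⇒ B is scored by its support and confidence over a set of transactions; in a process setting, each "transaction" can be a discretized snapshot of process variables. The toy variable names, discretization levels, and data below are invented for illustration and are not from the paper:

```python
def support(itemset, transactions):
    """Fraction of transactions that contain every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Estimated P(consequent | antecedent): supp(A ∪ B) / supp(A)."""
    return support(antecedent | consequent, transactions) / support(antecedent, transactions)

# Toy "transactions": discretized snapshots of lubrication-system variables.
transactions = [
    {"oil_temp=high", "flow=low", "vibration=high"},
    {"oil_temp=high", "flow=low", "vibration=high"},
    {"oil_temp=normal", "flow=normal", "vibration=low"},
    {"oil_temp=high", "flow=normal", "vibration=high"},
]

rule_conf = confidence({"oil_temp=high"}, {"vibration=high"}, transactions)
```

Rules with high support and confidence then serve as estimators: observing the antecedent levels lets the system predict the consequent KPI level.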
Chemometric methods of data analysis make it possible to resolve complex multi-component systems by decomposing a measured signal into the contributions of pure substances. The mathematical procedure of such decomposition is called empirical data modelling. The main aim of this article is to provide some basic information on chemometric analysis. Chemometric techniques are divided into three categories according to the assumptions they make. Hard modelling is based on the assumption that the measured dataset can be described a priori by generally accepted physical or chemical laws, expressed as analytical mathematical functions with unknown parameter values [1]. The numerical values of those constants are optimised using procedures such as least-squares curve fitting [1, 2]. Once explicit forms of the equations are found, the whole system of data can be resolved. White (hard) data modelling is therefore often used for kinetic measurements [3–8] and the analysis of fluorescence quenching [9–13]. A completely different approach is offered by the so-called soft chemometric methods [14–20]. These techniques do not require any presumptions, so the solutions obtained for the considered system are far less constrained. Black (soft) analysis makes use not only of least-squares fitting procedures [18, 19] but also of other geometrical optimisation algorithms [16, 17]. This approach suffers, however, from one main drawback: the final outcome is not unique; the system is described by a set of feasible solutions. As a consequence, soft data modelling is generally applied to resolve empirical data that cannot easily be expressed in the explicit form of a function, as obtained for example from chromatography or volumetry.
However, if some conjecture can be made about the measurement system and the obtained data, it is possible to stiffen the black methods by applying white constraints [20]. These types of chemometric analysis are called grey or hard-soft, and are a practical combination of model-free optimisation with limitations on the feasible solutions that follow from conformity with physical or chemical laws. Because data modelling provides an opportunity for simultaneous identification of several components of the analysed mixture, chemometric procedures, although not yet widely used, are extremely powerful research tools.
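The hard (white) modelling idea can be illustrated with first-order kinetics, C(t) = C0·exp(−k·t): the functional form is imposed by the rate law, and only the parameters C0 and k are estimated by least squares. A minimal sketch with noiseless synthetic data (the values are chosen purely for illustration):

```python
import numpy as np

# Synthetic first-order decay data: C(t) = C0 * exp(-k * t).
t = np.linspace(0.0, 10.0, 20)
C0_true, k_true = 2.0, 0.5
conc = C0_true * np.exp(-k_true * t)

# Hard modelling: the model form is fixed by the rate law. Taking logs
# turns the fit into a linear least-squares problem: ln C = ln C0 - k*t.
slope, intercept = np.polyfit(t, np.log(conc), 1)
k_fit, C0_fit = -slope, np.exp(intercept)
```

With real (noisy) data one would fit the exponential form directly (e.g. non-linear least squares) rather than log-transform, but the principle is the same: the law fixes the model, the data fix the constants.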
The article presents the development of individual data-modeling notations with reference to the relational model. A comparative analysis of selected modeling notations is presented, and it is shown which modeling standards are most often implemented in the most popular CASE tools for data modeling. The author tries to systematise existing knowledge about the notations used in data modeling by means of ERD and UML diagrams.
Purpose: Problems which are likely to manifest themselves in the course of implementation cannot be envisaged at the design stage of an information system without employing the approach presented in this paper. Design/methodology/approach: All procedures and processes need to be located through both analysis and synthesis: a thorough decomposition into the plainest constituent elements, followed by their reintegration into a whole of mutually connected elements. The quest for a method providing optimum results is addressed both theoretically and practically, given that any problem which may appear during implementation of the system has been resolved through an analytic approach at the very start of the design stage. Such an approach is the only viable way for an applied information system to be optimally utilized at minimum cost. Findings: This has been confirmed and proven by research monitoring and analysis of a number of information systems applied in various organizations, from shipyards, oil/petrochemical plants and steel mills to non-industrial organizations such as hospitals. Research limitations/implications: The analytical/synthetical approach led to the conclusion that all stages of design and development of an information system are of equal importance, so each of them must be given equally careful consideration. Practical implications: By employing a data-flow model, all complex processes have been resolved, with all basic elements featured on the model together with the logically required mutual connections. The linkage and flow sequence of the basic elements represent the basis for successfully resolving problems in any information system.
Originality/value: This scientific, engineering approach to problem solving in the operational use of a system is applicable in the most varied cases, such as where the design, development and implementation of an information system represent the very basis for a sound and profitable business operation of an organization. Besides, the approach provides wide flexibility and range in problem solving, whereby problems can be predicted, located and interconnected.
This paper describes an ongoing effort at NC3A to provide one integrated database which contains data from a number of different sources. Initially, these sources are legacy NATO systems. Later, other systems, including messaging interfaces of a wide variety, and national systems, will be added. A common data model is used as the lingua franca between systems. A COTS product has been identified that creates translator boxes to provide interfaces to and from the legacy systems.