Results found: 37

Search results
Searched in keywords: pozyskiwanie danych (data acquisition)
EN
The article presents an experimental stand for assessing the state of the punch in the sheet-metal blanking process. Blanking trials were carried out on an eccentric press. During all the trials, the signals of acoustic emission (AE) accompanying the blanking process were recorded. A methodology for preparing and analysing the recorded AE signals was presented. On that basis, the state of the punch was assessed using five visualization methods: Andrews curves, Principal Component Analysis, Linear Discriminant Analysis, a modified Stochastic Neighbor Embedding and Sammon mapping. The aim of the work was to assess the possibility of using visualization methods to predict the condition of the tool from acoustic emission signals in processes carried out in extremely short times.
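Of the five visualization methods listed, Andrews curves are the simplest to state: a feature vector x = (x1, ..., xd) is mapped to the function f(t) = x1/sqrt(2) + x2 sin(t) + x3 cos(t) + x4 sin(2t) + ... A minimal stdlib sketch follows; the AE feature vectors (e.g. RMS, peak, kurtosis per signal) are hypothetical, not the paper's data.

```python
import math

def andrews_curve(x, t):
    """Evaluate the Andrews curve of feature vector x at angle t.

    f(t) = x[0]/sqrt(2) + x[1]*sin(t) + x[2]*cos(t) + x[3]*sin(2t) + ...
    """
    total = x[0] / math.sqrt(2)
    for i, xi in enumerate(x[1:], start=1):
        k = (i + 1) // 2  # harmonic number: 1, 1, 2, 2, 3, 3, ...
        total += xi * (math.sin(k * t) if i % 2 == 1 else math.cos(k * t))
    return total

# Hypothetical AE feature vectors for a sharp and a worn punch
sharp = [0.2, 0.1, 0.05]
worn = [0.9, 0.7, 0.40]
grid = [-math.pi + i * math.pi / 50 for i in range(101)]
curve_sharp = [andrews_curve(sharp, t) for t in grid]
curve_worn = [andrews_curve(worn, t) for t in grid]
```

Plotting one such curve per recorded blanking stroke lets curves from similar punch states cluster visually, which is the premise behind using this family of methods for tool-condition assessment.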
3
Content available Monitorology – the Art of Observing the World
EN
We focus on the art of observing the world with electronic devices such as sensors and meters, which we collectively call monitors. We also define the main monitoring objectives and pose five challenges for effective and efficient monitoring that still need a lot of research. In an era where compute power, like electricity, is easily available and easy to use across the globe, and big data is generated in enormous amounts at ever-increasing rates, the question of what to monitor, and how, becomes ever more relevant, in order to save the world from a flood of meaningless, dumb data that frequently leads to false conclusions and wrong decisions whose impact may range from a minor inconvenience to loss of lives and major disasters.
EN
The significant development of the foundry industry contributes to the creation of castings of high reliability and operational strength that meet specific standards in accordance with customers' needs. This technology, however, is inseparably connected with casting defects in finished products. Cast products are subject to various defects which are considered acceptable or not, which is conditioned by the alloy chemical composition and strength characteristics, that is, generally, qualities to be agreed between the foundry and the customer. It is the latter that led the authors to design a tool enabling the most reliable possible assessment of emerging casting defects, which, after proper consultation, can be repaired so that the casting can be sold. The paper presents an original tool named the Open Atlas of Defects (OAD), developed over the last few years to support the evaluation of cast iron defects using Non-Destructive Testing (NDT) and casting defect analysis tools (the DCC card, i.e. Demerit Control Chart, Pareto-Lorenz analysis and ABC analysis). The OAD tool structure is presented as an integral part of the original system module for acquisition and data mining (A&DM), together with the possibilities of using selected tools to support defect analysis, on the example of cast iron castings.
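The Pareto-Lorenz and ABC analyses mentioned above reduce to a simple computation: rank defect categories by frequency, accumulate their shares, and assign A/B/C classes at chosen cumulative thresholds. A stdlib sketch under those assumptions; the defect names, counts and the 80%/95% cut-offs are illustrative, not taken from the paper.

```python
def abc_analysis(defect_counts, a_limit=0.8, b_limit=0.95):
    """Rank defect categories by count and assign ABC classes by
    cumulative share: A up to a_limit, B up to b_limit, C above."""
    total = sum(defect_counts.values())
    ranked = sorted(defect_counts.items(), key=lambda kv: kv[1], reverse=True)
    result, cumulative = [], 0
    for name, count in ranked:
        cumulative += count
        share = cumulative / total
        cls = "A" if share <= a_limit else ("B" if share <= b_limit else "C")
        result.append((name, count, round(share, 3), cls))
    return result

# Hypothetical cast-iron defect tallies
counts = {"porosity": 120, "shrinkage": 45, "cold shut": 20,
          "misrun": 10, "sand burn": 5}
pareto = abc_analysis(counts)
```

The class-A rows are the few defect types that dominate the tally, which is exactly what a Demerit Control Chart review would prioritize.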
EN
The paper indicates the significance of the problem of supervising and assessing the stability of foundry process parameters. The parameters which can be effectively tracked and analysed using dedicated computer systems for data acquisition and exploration (Acquisition and Data Mining, A&DM, systems) were pointed out. The state of research and the methods of solving production problems with the help of computational intelligence (CI) systems were characterised. The research part shows the capabilities of an original A&DM system in selected analyses of recorded data for forecasting cast defects (the effect), on the example of a chosen iron foundry. Implementation tests and analyses were performed on selected assortments of grey and nodular cast iron grades (castings with a maximum weight of 50 kg, cast on automatic moulding lines into disposable green sand moulds). The results of validation tests of the applied methods and algorithms (the original system operating in real production conditions) confirmed the effectiveness of the assumptions and of the application of the described methods. The usability and benefits of using A&DM systems in foundries are measurable and lead to the stabilisation of production conditions in the sections covered by these systems and, as a result, to improved casting quality and a reduced number of defects.
EN
This paper studies the use of the Ogimet service as a source of meteorological data and of a Python script to streamline data processing. Meteorological data is important in a large number of research projects in different disciplines of science and technology. In this case, it was used to analyze cloudiness, but it can also be used for energy, hydrology, and environmental analyses. Attention has been paid to the variability of total cloudiness in the Lower Silesia region in Poland during the period 2001-2010, using data from eight synoptic stations obtained from the Ogimet service. A very important part of the work is the description of Ogimet as a source of free and easily available meteorological data. The biggest advantage of Ogimet is that obtaining data is very easy, which reduces the time needed to collect the data necessary in the research process. The offered data is free and available via the Internet, but it is raw and general. For these reasons, a Python script was written for faster and easier data processing; the script applied in this project is described in detail in the work. Finally, after processing the data, daily averages of total cloudiness were calculated from the available data for the eight meteorological stations. Next, the ten-year average for each day and month was calculated. The results of the study were compared with works that considered a longer time period of total cloudiness data.
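The daily-averaging step described in the abstract can be sketched in a few lines of standard-library Python: group timestamped observations by calendar date and average each group. The record format below (ISO timestamp, total cloudiness in oktas) is a simplified stand-in for decoded Ogimet SYNOP reports, not the paper's actual script.

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean

def daily_means(observations):
    """Group (timestamp, oktas) observations by calendar date and average them.

    `observations` is an iterable of (ISO-8601 string, total cloudiness
    in oktas), one stream per synoptic station.
    """
    by_day = defaultdict(list)
    for stamp, oktas in observations:
        day = datetime.fromisoformat(stamp).date()
        by_day[day].append(oktas)
    return {day: mean(values) for day, values in sorted(by_day.items())}

# Hypothetical observations from one station
obs = [("2001-01-01T00:00", 8), ("2001-01-01T06:00", 6),
       ("2001-01-01T12:00", 7), ("2001-01-02T00:00", 2)]
averages = daily_means(obs)
```

Running the same grouping keyed on (month, day) instead of the full date would give the ten-year per-day averages the study reports.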
7
Content available remote 30 km w półtora miesiąca (30 km in a month and a half)
8
EN
The use of microcontroller systems has become routine in various measurement and control tasks. Their wide availability, together with the huge potential for extending their functionality with additional modules, allows non-specialists to develop advanced measuring and monitoring systems. However, relying on popular example code often causes the user to overlook, or not be aware of, the limitations of the system, and to draw too far-reaching conclusions on the basis of incorrectly performed measurements. This paper deals with the problem of choosing the right method for performing measurements using an acquisition system based on the budget Arduino UNO platform. The main assumption was to use the standard, widely available Arduino libraries. The work focuses on the scenario where the data are to undergo time- and frequency-domain analysis in later processing. The operating limits of the device were also determined depending on the data transmission method used.
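One concrete pitfall behind the warning above is assuming a uniform sampling period before running frequency analysis. A minimal host-side check, sketched here with the standard library, computes the mean interval, the timing jitter and the effective sample rate from the timestamps the device reports (the millisecond log below is hypothetical, e.g. values captured from Arduino millis() alongside each sample):

```python
from statistics import mean, pstdev

def sampling_report(timestamps_ms):
    """Return (mean interval ms, jitter std dev ms, effective rate Hz)
    for a list of sample timestamps in milliseconds."""
    intervals = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    dt = mean(intervals)
    return dt, pstdev(intervals), 1000.0 / dt

# Hypothetical log: nominal 10 ms period disturbed by transmission delays
stamps = [0, 10, 20, 31, 40, 52, 60, 70]
dt, jitter, rate = sampling_report(stamps)
```

If the jitter is a noticeable fraction of the mean interval, an FFT of the raw samples will smear spectral peaks; resampling onto a uniform grid, or tightening the acquisition loop, is needed first.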
EN
This review paper presents the shortcomings associated with data mining algorithms (classification, clustering, association and regression), which are widely used as tools in different research communities. Data mining research has successfully handled large datasets to solve many problems. However, the growth in data sizes has created a bottleneck for algorithms that retrieve hidden knowledge from large volumes of data: data mining algorithms have been unable to scale at the same rate of growth. Data mining algorithms must be efficient, and well integrated with visual architectures, in order to effectively extract information from huge amounts of data in data repositories or dynamic data streams. Data visualization researchers believe in the importance of giving users an overview of, and insight into, the data distributions. Combining a graphical interface with these techniques permits navigating the complexity of statistical and data mining methods to create powerful models. There is therefore an increasing need to understand the bottlenecks associated with data mining algorithms on modern architectures and in the research community. This review paper is basically intended to guide and help researchers to identify the shortcomings of data mining techniques, within their domain area, in solving the problems they will explore. It also points out research areas, particularly multimedia (where data can be sequential, audio, video, spatio-temporal, temporal, time series, etc.), in which data mining algorithms have not yet been used.
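Of the four algorithm families the review covers, clustering is the easiest to illustrate compactly. The sketch below is a deliberately minimal 1-D k-means in plain Python, meant only to show the assign-then-update loop whose per-iteration cost over all records is exactly the scaling bottleneck the review discusses; the data and initialisation strategy are illustrative.

```python
def kmeans_1d(values, k=2, iterations=20):
    """Minimal 1-D k-means: assign each value to the nearest centroid,
    then recompute each centroid as its cluster mean."""
    # Spread initial guesses across the sorted data
    centroids = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

data = [1.0, 1.2, 0.8, 9.8, 10.0, 10.4]
centers = kmeans_1d(data, k=2)
```

Each iteration touches every record once, so doubling the dataset doubles the work per iteration; this linear-per-pass, many-pass structure is why naive implementations struggle on the data volumes and streams the review has in mind.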
10
Content available remote Towards big data solutions for industrial tomography data processing
EN
This paper presents an overview of what Big Data can bring to modern industry. By following the history of contemporary Big Data frameworks, the authors observe that the available tools have reached sufficient maturity to be usable in an industrial setting. The authors propose the concept of a system for collecting, organising, processing and analysing experimental data obtained from measurements using process tomography. Process tomography is used for noninvasive flow monitoring and data acquisition. The measurement data are collected, stored and processed to identify process regimes and process threats. Further general examples of solutions that aim to take advantage of such tools are presented as proof of the viability of this approach. As the first step in creating the proposed system, a scalable, distributed, containerisation-based cluster has been constructed from consumer-grade hardware.
EN
Due to advances in machine learning techniques and sensor technology, the data-driven perspective is nowadays the preferred approach for improving the quality of maintenance for machines and processes in industrial environments. Our study reviews existing maintenance works, highlighting the main challenges and benefits, and consequently shares recommendations and good practices for the appropriate use of data analysis tools and techniques. Moreover, we argue that in any industrial setup the quality of maintenance improves when the applied data-driven techniques and technologies: (i) have economic justification; and (ii) conform with industry standards. In order to classify the existing maintenance strategies, we explore the entire data-driven model development life cycle: data acquisition and analysis, data modeling, data fusion and model evaluation. Based on the surveyed literature, we introduce taxonomies that cover the relevant predictive models and their corresponding data-driven maintenance techniques.
12
EN
The aim of the article is to present the authors' original project of introducing an algorithm based on neural networks for measurements performed in transport. The quality, quantity and method of obtaining data directly translate into the results of the simulation models created. Various systems (both commercial and proprietary) that are used to obtain data for modeling have been analyzed. As a result of various doubts, system maladjustments or excessive costs, alternative solutions have been proposed that can eliminate the presented problems. As part of this work, solutions have been proposed that limit some of the problems reported by the authors in this regard. Test work justified the use of neural networks in transport measurements. Test results with sufficient agreement with real observations were obtained and compared to the results of systems available on the market. The authors also analyze further required work and the possibilities of improving the solutions used.
EN
The paper presents the authors' methodology for determining the geometry and adjusting the alignment of the crane bridge wheels. Innovative measuring instruments, adapted to the specific conditions of operational measurements of the bridge structure wheels, were presented in accordance with the requirements of the measurement instructions and the PN-89/M-45453 standard [1]. This technology has been applied several times by the authors in geodetic practice to acquire measurement data in large industrial plants during operational measurements of large-dimension bridge cranes (Nuclear Power Components - Vitkowice - Czech Republic, Huta „KATOWICE", Huta im. Sędzimira - Nowa Huta, Huta „Zabrze").
EN
In this paper, five contemporary scalable systems supporting medical research teams are presented. Their functionalities extend from heterogeneous unstructured data acquisition, through large-scale data storage, to on-the-fly analysis using robust methods. Such systems can be useful in the development of new medical procedures and of recommendation rules for decision support systems. A short description of each of them is provided. Further, a set of the most important features is selected, and a comparison based on it is performed. The need for high performance computing is emphasized. A general discussion of how to improve the existing solutions or develop new ones in the future is also presented.
EN
The paper undertakes the important topic of evaluating the effectiveness of SCADA (Supervisory Control and Data Acquisition) systems used for monitoring and controlling selected processing parameters of classic green sands used in foundries. The main focus was put on process studies of the properties of so-called 1st-generation molding sands with respect to their preparation process. Possible methods of controlling this processing are presented, with consideration of the use of fresh raw materials, return sand (regenerate) and water. The studies conducted in one of the European foundries were aimed at showing how much the new, automated sand processing plant incorporating SCADA systems allows stabilizing the measured values of selected sand parameters after mixing. The studies covered two comparative periods, before and after the implementation of the automated devices for green sand processing (ASMS - Automatic Sand Measurement System and MCM - Main Control Module). The measurements of selected sand properties after the implementation of the ASMS were also evaluated and compared with testing studies conducted periodically in the laboratory.
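A before/after comparison of the kind described can be quantified by the change in dispersion of a monitored sand parameter. A small stdlib sketch using the coefficient of variation follows; the compactability readings and the choice of metric are illustrative assumptions, not the paper's data.

```python
from statistics import mean, pstdev

def stability_change(before, after):
    """Relative reduction of the coefficient of variation (CV = std/mean)
    of a sand parameter; positive means the process became more stable."""
    cv_before = pstdev(before) / mean(before)
    cv_after = pstdev(after) / mean(after)
    return (cv_before - cv_after) / cv_before

# Hypothetical compactability readings [%] before/after the ASMS/MCM roll-out
manual = [38, 44, 35, 47, 41, 36]
automated = [40, 41, 39, 40, 41, 40]
improvement = stability_change(manual, automated)
```

A value near 0 would mean the automation changed little; a value near 1 would mean the scatter of the parameter almost vanished after mixing became automated.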
EN
Industrial Control Systems (ICS) are commonly used in industries such as oil and natural gas, transportation, electric power, water and wastewater, chemical, pharmaceutical, pulp and paper, food and beverage, as well as discrete manufacturing (e.g., automotive, aerospace, and durable goods). SCADA systems are generally used to control dispersed assets using centralized data acquisition and supervisory control. Originally, ICS implementations were susceptible primarily to local threats because most of their components were located in physically secure areas (i.e., ICS components were not connected to IT networks or systems). The trend toward integrating ICS with IT networks (e.g., for efficiency and the Internet of Things) provides significantly less isolation of ICS from the outside world, thus creating greater risk due to external threats. At the same time, the availability of ICS/SCADA systems is critical to assuring safety, security and profitability; such systems form the backbone of our national cyber-physical infrastructure. Herein, we extend the concept of mean failure cost (MFC) to quantify availability in a way that harmonizes well with ICS security risk assessment. This new measure is based on the classic formulation of availability combined with mean failure cost. The metric offers a computational basis for estimating the availability of a system in terms of the loss that each stakeholder stands to sustain as a result of security violations or breakdowns (e.g., deliberate malicious failures).
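The core of a mean-failure-cost style metric is an expected-loss sum: for each stakeholder, add up the stake attached to each requirement weighted by that requirement's probability of being violated. A minimal sketch under that reading follows; the stakeholders, requirements, stakes and probabilities are invented for illustration and are not the paper's model or numbers.

```python
def mean_failure_cost(stakes, failure_probability):
    """Expected loss per stakeholder: MFC[s] = sum over requirements r of
    stakes[s][r] * P(r fails).

    stakes[s][r] is what stakeholder s loses (e.g. $/h) if requirement r,
    such as availability of a SCADA component, is violated.
    """
    return {s: sum(cost * failure_probability[r] for r, cost in reqs.items())
            for s, reqs in stakes.items()}

# Hypothetical stakes and per-requirement violation probabilities
stakes = {
    "operator":  {"availability": 500.0, "integrity": 200.0},
    "regulator": {"availability": 50.0,  "integrity": 800.0},
}
p_fail = {"availability": 0.01, "integrity": 0.001}
mfc = mean_failure_cost(stakes, p_fail)
```

The point of the stakeholder dimension is visible even in this toy: the same failure probabilities yield different expected losses for the operator and the regulator, so "availability" is not one number but one per stakeholder.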
EN
The chapter presents a flexible and comprehensive test set generator based on real user website traffic acquired from Google Analytics via the Java Google Analytics API. The paper considers the problem of preparing test data sets: every piece of research needs experimental verification, and in most cases one needs to define one's own test data set or use a commercial one. Because the author of this chapter deals with the issues of imprecise queries (fuzzy queries [1] [2], clustering algorithms [3]), the test data set should reflect the actual data from a real database system.
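The idea of a fully anonymized, reproducible traffic test set can be sketched without any analytics backend at all: draw records from a fixed-seed random generator so every run yields the same data. The record schema (page, source, duration) below is a hypothetical stand-in for fields one might export from a system such as Google Analytics; the chapter's actual generator is a Java application.

```python
import random

def generate_visits(n, seed=42):
    """Generate n anonymized page-visit records resembling
    web-analytics data; a fixed seed makes the test set reproducible."""
    rng = random.Random(seed)
    pages = ["/home", "/products", "/contact", "/blog"]
    sources = ["organic", "direct", "referral", "social"]
    return [{"page": rng.choice(pages),
             "source": rng.choice(sources),
             "duration_s": rng.randint(5, 600)}
            for _ in range(n)]

visits = generate_visits(100)
```

Reproducibility matters here: experiments on imprecise queries can only be compared across runs if every run sees an identical test set, which the seeded generator guarantees.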
18
Content available remote A New Intrusion Detection Model Based on Data Mining and Neural Network
EN
Today, intrusion detection is often applied to aid the firewall in maintaining network security. However, current network intrusion detection suffers from a high false alarm rate, so we apply data warehousing and data mining to intrusion detection, together with network traffic monitoring and analysis technology. After the network data is processed by data mining, we obtain certain data and uncertain data. We then process the data again with a BP neural network based on the genetic algorithm. Finally, we propose a new model of intrusion detection based on the data warehouse, data mining and the BP neural network. The experimental results indicate that this model can effectively detect many kinds of network intrusion behavior and has higher intelligence and better adaptation to its environment.
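The elementary building block of a BP (backpropagation) network is a single sigmoid neuron updated by gradient descent; a full BP network chains this update layer by layer, and the genetic algorithm mentioned above typically tunes the initial weights or topology. The sketch below trains one such neuron on a toy "normal vs. alert" traffic set; the features, labels and hyperparameters are illustrative, not the paper's model.

```python
import math

def train_neuron(samples, labels, lr=0.5, epochs=2000):
    """Train one sigmoid neuron with stochastic gradient descent
    on binary labels (cross-entropy loss)."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            out = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
            grad = out - y                      # dLoss/dz for cross-entropy
            w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
            b -= lr * grad
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy traffic features: (normalized packet rate, failed-login rate)
X = [(0.1, 0.0), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
y = [0, 0, 1, 1]
w, b = train_neuron(X, y)
```

In a real detector the output would be thresholded into an alarm decision, and the false alarm rate the abstract complains about corresponds to benign samples scored above that threshold.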
19
Content available remote Pozyskiwanie informacji w systemach ITS (Information acquisition in ITS systems)
EN
The article focuses on data acquisition in ITS systems. The first part emphasizes the importance of information in transport systems and indicates the impact of telecommunications on transport efficiency. It then briefly describes the evolution of data collection methods in telematic systems against the background of the general ITS information flow. The rest of the article proposes a mathematical model to assist in assessing the correctness of the selection of means for the task indicated in the title.
EN
The article discusses the possibilities of employing an algorithm based on the Rough Set Theory for generating engineering knowledge in the form of logic rules. The logic rules were generated from a data set characterizing the influence of process parameters on the ultimate tensile strength of austempered ductile iron. The paper assesses the obtained logic rules with the help of rule quality measures, that is, the measures of confidence, support, and coverage, as well as a proposed rule quality coefficient.
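The three standard rule quality measures named above have simple frequency definitions over a decision table: support = P(condition and decision), confidence = P(decision | condition), coverage = P(condition | decision). A stdlib sketch follows; the austempering-temperature records and the example rule are hypothetical, chosen only to match the paper's domain.

```python
def rule_quality(records, condition, decision):
    """Support, confidence and coverage of the rule condition -> decision.

    support    = P(condition and decision)
    confidence = P(decision | condition)
    coverage   = P(condition | decision)
    """
    n = len(records)
    match_c = [r for r in records if condition(r)]
    match_d = [r for r in records if decision(r)]
    match_both = [r for r in match_c if decision(r)]
    support = len(match_both) / n
    confidence = len(match_both) / len(match_c) if match_c else 0.0
    coverage = len(match_both) / len(match_d) if match_d else 0.0
    return support, confidence, coverage

# Hypothetical ADI records: (austempering temperature [deg C], UTS class)
data = [(280, "high"), (280, "high"), (280, "low"),
        (350, "low"), (350, "low")]
s, conf, cov = rule_quality(data,
                            condition=lambda r: r[0] == 280,
                            decision=lambda r: r[1] == "high")
```

Here the rule "IF temperature = 280 THEN UTS = high" has confidence 2/3 (two of three matching records confirm it) but coverage 1.0 (it accounts for every high-UTS record), illustrating why the two measures are reported separately.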