Search results
Searched in keywords: data processing
Results found: 132
Page / 7
EN
When implementing energy-saving measures, the key is the correct choice of thermal insulation materials, whose main characteristic is the thermal conductivity coefficient. Missing data, which may occur when investigating materials in natural conditions, can lead to incorrect determination of this characteristic, which negatively affects the effectiveness of the implemented energy-saving measures. Reconstructing the missing data at the stage of preliminary processing of the measured signals therefore provides complete and accurate data for determining the thermal conductivity of thermal insulation materials and avoids this situation. The article presents the results of a regression analysis of data obtained during express control of the thermal conductivity of thermal insulation materials based on the local thermal impact method. Regression models were built for signal reconstruction with 10%, 20% and 30% missing data; using them, a relative error in determining the thermal conductivity coefficient of less than 8% was obtained. This is acceptable for express control of thermal conductivity and indicates the correctness of restoring data in this way. In addition, an algorithm is provided for determining signal stationarity, which makes it possible to reasonably reduce the measurement duration for each material at a given level of permissible error.
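A minimal sketch of the kind of regression-based reconstruction the abstract describes, assuming a synthetic exponential heating curve, Gaussian noise and a 20% gap fraction (the authors' exact signal model and regression form are not given):

```python
# Regression-based reconstruction of missing samples in a measured
# thermal-response signal. Signal shape and gap fraction are illustrative.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 60.0, 300)                      # measurement time, s
signal = 25.0 + 10.0 * (1.0 - np.exp(-t / 15.0))     # synthetic temperature rise
signal += rng.normal(0.0, 0.05, t.size)              # measurement noise

mask = rng.random(t.size) < 0.20                     # 20% of samples "missing"
t_obs, y_obs = t[~mask], signal[~mask]

# Fit a low-order polynomial regression model on the observed samples only.
coeffs = np.polyfit(t_obs, y_obs, deg=4)
reconstructed = np.polyval(coeffs, t[mask])

rel_err = np.abs(reconstructed - signal[mask]) / signal[mask]
print(f"max relative reconstruction error: {100 * rel_err.max():.2f}%")
```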
EN
Purpose: The main research purpose underlying this paper has been to develop and present solutions intended to support validation of the data entered in a tool created in line with the MiRel concept to help analyse employee participation in the implementation of projects. Design/methodology/approach: The study comprises analysing the manner in which the tool in question can be used and identifying the successive steps taken to generate a report of the intended format. Potential problems pertaining to the validity of the data entered in the tool, which may be encountered while individual steps are performed, have been defined. A supporting solution has been developed for each of the problems identified as a means to ensure data validity. Furthermore, collective information panels have been proposed to inform the user about the potential occurrence of any of the problems previously defined. Findings: Data validation supporting solutions can narrow down the range of values users are allowed to enter or explicitly inform them that the values being entered are invalid. The range of admissible values can also be limited by means of the Data Validation mechanism combined with the Name Manager. The solutions intended to provide information about existing errors can make use of the Conditional Formatting mechanism, coupled with formulas based on standard built-in spreadsheet functions. Practical implications: The solutions proposed in this paper support data validation in the tool in question. Analogous solutions can be applied in similar tools aimed at supporting data processing in a different scope within organisations. Social implications: Once successfully applied, the solutions proposed in this paper make the tool in question more reliable and user-friendly. They make the tool easier to use to support data processing in organisations and less prone to problems. Originality/value: The concept described in the paper is the author’s original solution.
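A minimal sketch of the two supporting mechanisms named in the findings, expressed with the openpyxl library rather than interactively in the spreadsheet; the sheet ranges, admissible values and limits are hypothetical:

```python
# A Data Validation rule restricting admissible input values, and a
# conditional-formatting rule flagging invalid entries.
from openpyxl import Workbook
from openpyxl.worksheet.datavalidation import DataValidation
from openpyxl.formatting.rule import FormulaRule
from openpyxl.styles import PatternFill

wb = Workbook()
ws = wb.active

# Restrict entries in B2:B50 to a fixed list of admissible values.
dv = DataValidation(type="list", formula1='"Low,Medium,High"', allow_blank=True)
ws.add_data_validation(dv)
dv.add("B2:B50")

# Highlight cells in C2:C50 whose numeric value falls outside 0..100.
red = PatternFill(start_color="FFC7CE", end_color="FFC7CE", fill_type="solid")
ws.conditional_formatting.add(
    "C2:C50",
    FormulaRule(formula=["OR(C2<0,C2>100)"], fill=red),
)

wb.save("validation_demo.xlsx")
```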
PL
Today, data is humanity's principal resource. Every era has its resources; people once accumulated hides and cattle, later coal or steel. Today, the strength of any company, corporation or state lies in its ability to analyse gigantic volumes of data quickly. These data must be analysed here and now, immediately, because within a few hours the information contained in the data sources will no longer have any value.
EN
The article focuses on the key role of information sanitization in the context of unmanned medical evacuation systems, particularly during battlefield operations. It analyzes the importance of the processes of filtering, analyzing and processing tactical data within the MEWA MED system, aimed at ensuring efficient coordination of operations and rapid response to emergency situations. The article also presents the potential and current level of industry progress in the area of data sanitization, and the possibilities for cooperation between unmanned medical evacuation systems and inter-system gateways. Additionally, it discusses the functionality of the MEWA MED Module, expanded to include elements specific to BMS-class systems, which allows for a better understanding of the synergies between these technologies and their potential impact on the effectiveness of battlefield operations.
EN
The article is based on practical experience and research, presenting the author's concept of applying the principles of IT/OT system cybersecurity in key functional areas of a mining plant operating according to the idea of INDUSTRY 4.0. In recent years, cyberspace has become a new security environment, which has introduced significant changes in the practical as well as the legal and organizational aspects of the operation of global security systems. In this context, it is particularly important to understand the dynamics of this environmental change (both in the provisions of the NIS 2 directive and the KSC Act) [1]. Building a legal system as a national response to the opportunities and challenges related to a state's presence in cyberspace has been an extremely complex task. This results not only from the pace of technological change, but also from the specificity of the environment and its "interactivity". The trend in international law that emerged during COVID-19 and the current geopolitical situation is to treat organizations from the mining and energy sector as important actors in national and international relations [2]. The new regulations introduce and expand international cooperation between individual entities and regulate security strategies and policies, which should take into account the recommendations of the Ministry of Climate and Environment, with particular emphasis on, among other things, ensuring the continuity of system operation, handling security incidents and constantly raising awareness of cybersecurity and cyber threats. It should not be forgotten that threats in cyberspace represent a distinct class of organizational challenges, largely similar to those posed by other asymmetric threats such as terrorism. Their common feature is that they demand less hierarchical and more flexible solutions from state structures. Cybersecurity, both socially and technologically, with all its consequences, emerges as one of the most important concepts of the security paradigm at the national and international level [3].
EN
The article presents an innovative approach to the process of planning, renovating, and modernizing buildings by using vision techniques for measuring and imaging detailed and volumetric objects. We present the advantages and application potential of vision-based measurement techniques in the context of growing needs for implementing fast and accurate methods for diagnosing the technical condition of objects. Through several examples, we discuss various 3D scanning techniques and methods of processing digital data that can be used to recreate project documentation and numerical simulations.
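A minimal sketch of one 3D-scan processing step of the kind the article discusses, using the Open3D library; the input file name and all parameter values are assumptions, not the authors' pipeline:

```python
# Downsample a scanned point cloud, estimate normals and reconstruct a
# surface mesh that could feed project documentation or numerical simulation.
import open3d as o3d

pcd = o3d.io.read_point_cloud("building_scan.ply")   # raw 3D-scan point cloud
pcd = pcd.voxel_down_sample(voxel_size=0.02)         # reduce point density

pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30)
)

# Poisson surface reconstruction yields a watertight mesh approximation.
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
o3d.io.write_triangle_mesh("building_mesh.ply", mesh)
```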
EN
The aim of this paper is to analyse data processing efficiency using Apache Hive and Apache Pig in a Hadoop environment. The analysis was based on a comparison between the two tools using a large data set of 28 million records. The research used scripts and queries written for Apache Hive and Apache Pig, executed ten times on an environment provided by a virtual machine created for this purpose. These methods were run on the same data sets 16 times, according to previously prepared research scenarios. In conclusion, the authors observed that Apache Hive is a more efficient tool than Apache Pig.
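A minimal sketch of the benchmark harness such a comparison implies: timing repeated executions of an equivalent Hive query and Pig script through their command-line clients. The script names, repetition count and availability of hive and pig on PATH are assumptions:

```python
# Time repeated runs of an equivalent workload in Hive and Pig via their CLIs.
import subprocess
import time
import statistics

RUNS = 10
commands = {
    "hive": ["hive", "-f", "aggregate.hql"],
    "pig":  ["pig", "-f", "aggregate.pig"],
}

for tool, cmd in commands.items():
    durations = []
    for _ in range(RUNS):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        durations.append(time.perf_counter() - start)
    print(f"{tool}: mean {statistics.mean(durations):.1f}s "
          f"± {statistics.stdev(durations):.1f}s over {RUNS} runs")
```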
EN
The article investigates the patterns of movement of rolls of long-fibre plant crops on an inclined plane. Experimental data on the rolling time of rolls of different mass and radius on an inclined plane, at inclination angles of 25° and 10° to the horizontal, are processed. The patterns of movement of these rolls are analysed, including the angular velocity, the velocity of the roll centres, the rotation angle, and the kinetic energy of the rolls.
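A minimal sketch of the rigid-body relations behind the quantities listed above, for a roll idealised as a uniform cylinder rolling without slipping down the incline; the mass, radius and time values are illustrative only:

```python
# Angular velocity, centre velocity, rotation angle and kinetic energy of a
# uniform cylinder rolling without slipping down an incline.
import math

g = 9.81            # m/s^2
m, r = 120.0, 0.6   # roll mass (kg) and radius (m), illustrative
theta = math.radians(25.0)

# Uniform cylinder: I = (1/2) m r^2, so a = g sin(theta) / (1 + I/(m r^2)).
I = 0.5 * m * r**2
a = g * math.sin(theta) / (1.0 + I / (m * r**2))

t = 2.0                                   # after t seconds of rolling
v = a * t                                 # velocity of the roll centre
omega = v / r                             # angular velocity (no slip)
phi = 0.5 * (a / r) * t**2                # rotation angle
E = 0.5 * m * v**2 + 0.5 * I * omega**2   # kinetic energy (translation + rotation)

print(f"v = {v:.2f} m/s, omega = {omega:.2f} rad/s, phi = {phi:.2f} rad, E = {E:.0f} J")
```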
EN
Purpose: The main purpose of the study was to develop and demonstrate a concept enabling application of popular and commonly available online forms combined with a spreadsheet to support data collection and processing under the ABCD (Suzuki) method. Design/methodology/approach: The factors which determine the various ways in which the method in question is applied were first identified, and then it was established which of them affected the manner in which the form to be filled in by experts is designed. Different variants of the method were identified on this basis. For individual variants, the possibility of using different types of questions was discussed by considering the features available in the most popular free-of-charge solutions enabling online forms to be developed. Diverse data layouts were also identified to establish the frameworks in which data are represented in spreadsheet files. Solutions which make it possible to automatically produce the consolidated reports required for purposes of the ABCD method were identified for each of the data layouts originally defined. Findings: When combined with a spreadsheet, popular online forms enable highly efficient data collection and processing with the ABCD method in use. Where the method is applied according to the variant in which every cause is rated, an adequate data collection form can be created using both of the online form solutions analysed. If the method is applied according to the variant in which every rating must be used precisely once, developing a useful tool becomes significantly more complicated. In that case, one can create a suitable form validating the input data only by using the solution delivered by Google. Additionally, the layout of such a form must be reversed compared to the traditional form functioning under the ABCD method. Considering the diverse variants of the ABCD method linked with the various kinds of questions used to build the form, three different layouts of the data collected by means of a spreadsheet were identified. With respect to each of them, one can devise a solution to ensure automated generation of the consolidated reports typical of the method in question. Practical implications: The solution proposed in the paper can be applied in practice while using the ABCD (Suzuki) method. Originality/value: The concept described in the paper is the author’s original solution.
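A minimal sketch of the consolidated-report step for the variant in which every cause is rated: expert answers collected by an online form into a flat table are aggregated per cause. Column names and the rating scale are hypothetical:

```python
# Aggregate expert ratings of causes into a ranked consolidated report.
import pandas as pd

# Flat layout as exported from an online form: one row per expert per cause.
responses = pd.DataFrame({
    "expert": ["E1", "E1", "E2", "E2", "E3", "E3"],
    "cause":  ["C1", "C2", "C1", "C2", "C1", "C2"],
    "rating": [5, 2, 4, 3, 5, 1],
})

# Consolidated report: total and mean rating per cause, ranked.
report = (responses.groupby("cause")["rating"]
          .agg(total="sum", mean="mean")
          .sort_values("total", ascending=False))
print(report)
```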
Data processing for oil spill domain movement models
EN
This chapter reviews various data processing techniques for modelling the movement of oil spills, including data acquisition, quality control, and pre-processing. It highlights the importance of incorporating both physical and environmental factors, such as wind, currents, and water temperature, in oil spill trajectory prediction models. It also discusses the challenges associated with data processing, including data availability and uncertainty. It emphasizes the significance of sound data processing practices to ensure effective response planning and mitigation efforts. Finally, by discussing potential areas of improvement as well as model assumptions and limitations, the chapter aims to inspire further research and development in the field, which can lead to constructing more accurate and reliable oil spill movement models.
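A minimal sketch of the core advection step used in many oil-spill trajectory models of the kind reviewed here: particle drift driven by surface current plus a wind-drift fraction, with a random-walk term for turbulent spreading. All field values are made up; the 3% wind factor is a common convention, not a value from the chapter:

```python
# Lagrangian particle advection for an oil slick under current and wind.
import numpy as np

N, dt, steps = 500, 600.0, 144            # particles, time step (s), 24 h
wind_factor = 0.03                        # fraction of wind speed moving the slick

pos = np.zeros((N, 2))                    # particle positions (m), start at origin
rng = np.random.default_rng(1)

for _ in range(steps):
    current = np.array([0.15, 0.05])      # ambient current (m/s), assumed uniform
    wind = np.array([6.0, -2.0])          # wind velocity (m/s)
    diffusion = rng.normal(0.0, 0.5, (N, 2))   # random walk for turbulent spread
    pos += (current + wind_factor * wind) * dt + diffusion * np.sqrt(dt)

centre = pos.mean(axis=0) / 1000.0
print(f"slick centre after 24 h: {centre[0]:.1f} km east, {centre[1]:.1f} km north")
```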
EN
The object of research is the development of a specialized measuring information system for the study and control of relaxation processes in materials and technical systems. The purpose of the work is the use of computer technologies to eliminate routine operations associated with the processing of experimental data and to increase the speed, accuracy and information content of the process of studying and controlling gas sensors. A variant of using computer data processing to automate the processing and primary analysis of experimental data from scientific research and control of the physicochemical parameters of gas-sensitive materials is proposed. The developed computer data processing system provides a practical opportunity to use measurements of the kinetic characteristics of the gas sensitivity of gas sensors for their experimental research and control and, thus, to achieve higher accuracy and information content. Testing of the developed information-measuring system confirmed its operability and compliance with the requirements for improving the accuracy and speed of processing.
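A minimal sketch of the routine processing step such a system automates: fitting an exponential relaxation model to a sensor response to extract its kinetic time constant. The synthetic data and the first-order model form are assumptions:

```python
# Fit a first-order relaxation model to a gas-sensor response curve.
import numpy as np
from scipy.optimize import curve_fit

def relaxation(t, r0, dr, tau):
    """First-order relaxation of sensor resistance toward a new equilibrium."""
    return r0 + dr * (1.0 - np.exp(-t / tau))

rng = np.random.default_rng(2)
t = np.linspace(0.0, 120.0, 200)                          # time, s
r = relaxation(t, 50.0, 20.0, 18.0) + rng.normal(0, 0.2, t.size)

params, _ = curve_fit(relaxation, t, r, p0=(40.0, 10.0, 10.0))
print(f"fitted time constant tau = {params[2]:.1f} s")
```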
EN
The suitability of several low-labour geostatistical procedures for interpolating highly positively skewed seismic data distributions was tested in the Baltic Basin. These procedures combined various estimators of the model of spatial variation (theoretical variogram) and kriging techniques, with or without an initial transformation of the data to a normal distribution. This transformation consisted of logarithmization or normalization using the anamorphosis technique. Two variants of the theoretical variogram estimator were used: the commonly used classical Matheron estimator and the inverse covariance estimator (InvCov), which is robust to non-ergodic data. It was expected that the latter would also be resistant to strongly skewed data distributions. The kriging techniques used included the commonly applied ordinary kriging, simple kriging (useful for standardized data) and the non-linear median indicator kriging technique. It was confirmed that normalization (anamorphosis) is the most useful and least laborious of the geostatistical procedures suitable for such data, as it yields a standardized normal distribution. The second, less obvious finding for highly skewed data distributions is that the non-ergodic inverse covariance (InvCov) variogram estimator has an advantage over Matheron's estimator: it gives a better assessment of the C0 (nugget effect) and C (sill) parameters of the spatial variability model. This conclusion can be drawn from the fact that the higher the estimate of the relative nugget effect L = C0/(C0 + C) obtained with the InvCov estimator, the weaker the correlation between the kriging estimates and the observed values. The estimates of the coefficient L obtained with Matheron's estimator do not meet this expectation.
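A minimal sketch of the classical Matheron variogram estimator and the relative nugget effect L = C0/(C0 + C) discussed above, on synthetic one-dimensional data; the InvCov estimator and the kriging steps are beyond this illustration:

```python
# Matheron experimental variogram and a crude relative nugget effect estimate.
import numpy as np

rng = np.random.default_rng(3)
x = np.arange(200.0)                                   # sample coordinates
noise = rng.normal(0.0, 1.0, 220)
z = np.convolve(noise, np.ones(20) / 20.0, mode="valid")[:200]  # smooth field
z += rng.normal(0.0, 0.05, 200)                        # small nugget-like noise

def matheron(x, z, lag, tol=0.5):
    """gamma(h) = (1 / 2N(h)) * sum of (z_i - z_j)^2 over pairs with |x_i - x_j| ~ h."""
    d = np.abs(x[:, None] - x[None, :])
    pairs = (np.abs(d - lag) < tol) & (d > 0)
    return 0.5 * np.mean((z[:, None] - z[None, :])[pairs] ** 2)

lags = np.arange(1, 40)
gamma = np.array([matheron(x, z, h) for h in lags])

c0 = gamma[0]                  # crude nugget estimate: variogram at the shortest lag
total_sill = gamma.max()       # crude estimate of C0 + C
L = c0 / total_sill            # relative nugget effect L = C0 / (C0 + C)
print(f"relative nugget effect L = {L:.2f}")
```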
Multi-queue service for task scheduling based on data availability
EN
Large-scale computations (LSC) are often performed in distributed environments where message passing is the key to orchestrating computations. In this paper we present a new message queue concept developed within the context of an LSC system (BalticLSC). The concept consists in proposing a multi-queue, where queues are grouped into families. A queue family can be used to distribute messages of the same kind to multiple computation modules distributed across various nodes. Such message families can be synchronised to implement a mechanism for initiating computation jobs based on multiple data inputs. Moreover, the proposed multi-queue has built-in mechanisms for controlling message sequences in applications where complex data set splitting is necessary. The presented multi-queue concept was implemented and applied successfully in a working LSC system.
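A minimal sketch of the queue-family synchronisation idea: a computation job is released only once every queue in the family has delivered a message for the same job key. The class and method names are hypothetical, not the BalticLSC API:

```python
# Queue family that releases complete multi-input sets for computation jobs.
import queue
from collections import defaultdict

class QueueFamily:
    """Groups one queue per input kind and releases complete input sets."""

    def __init__(self, kinds):
        self.queues = {kind: queue.Queue() for kind in kinds}
        self.pending = defaultdict(dict)   # job key -> {kind: message}

    def publish(self, kind, job_key, message):
        self.queues[kind].put((job_key, message))

    def poll_ready(self):
        """Drain queues; return input sets for jobs with all kinds present."""
        for kind, q in self.queues.items():
            while not q.empty():
                job_key, message = q.get()
                self.pending[job_key][kind] = message
        ready = [k for k, v in self.pending.items() if len(v) == len(self.queues)]
        return {k: self.pending.pop(k) for k in ready}

family = QueueFamily(["frames", "metadata"])
family.publish("frames", "job-1", b"...")
family.publish("metadata", "job-1", {"fps": 25})
print(family.poll_ready())   # {'job-1': {'frames': b'...', 'metadata': {...}}}
```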
EN
As Earth observation technology has advanced, the volume of remote sensing big data has grown rapidly, posing significant obstacles to efficient and effective processing and analysis. A convolutional neural network is a neural network built on convolutional computations; it is a form of deep learning with representation-learning capabilities that can classify input information into different categories. Remote sensing data processing from various sensors has been attracting attention as remote sensing carries ever more information. Remote sensing data are generally adjusted and refined through image processing. Image processing techniques, such as filtering and feature detection, are well suited to the high dimensionality of geographically distributed systems. A geological entity is a term in geological work referring to the product of geological processes that occupies a certain space in the Earth's crust and differs from the surrounding material. Geological entities are of different sizes and are divided into different types according to their size. The work focuses on improving classification accuracy and accurately describing scattering types. For geological entity recognition, this paper proposes a Deep Convolutional Neural Network for Polarized Synthetic Aperture Radar (DCNN-PSAR). It is expected to use deep convolutional neural network technology and polarized SAR technology to explore new methods of recognising geological entities and to improve geological recognition capabilities. With the help of multimodal remote sensing data processing, it is now possible to characterize and identify the composition of the Earth's surface from orbital and aerial platforms. This paper proposes a ground-object classification algorithm for polarized SAR images based on a fully convolutional network, which realizes the geological classification function and overcomes the shortcoming of excessively long processing. The evaluation of DCNN-PSAR shows that the accuracy for the water area exhibits a rising trend, growing relatively quickly in the early stage, directly from 0.14 to 0.6, while the increase is slower in the later stage. DCNN-PSAR achieves the highest quality of remote sensing data extraction.
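A minimal sketch of a fully convolutional classifier of the general kind the paper builds on: per-pixel class scores for multi-channel (e.g. polarimetric) imagery, written in PyTorch. The layer sizes, channel count and class count are illustrative, not the DCNN-PSAR architecture:

```python
# Tiny fully convolutional network producing a per-pixel class map.
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self, in_channels=4, num_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # A 1x1 convolution produces per-pixel scores (no dense layers),
        # so the network accepts scenes of arbitrary size.
        self.classifier = nn.Conv2d(64, num_classes, 1)

    def forward(self, x):
        return self.classifier(self.features(x))

scene = torch.randn(1, 4, 128, 128)        # 4 polarimetric channels
logits = TinyFCN()(scene)                  # (1, 5, 128, 128) class scores
print(logits.argmax(dim=1).shape)          # per-pixel class map
```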
EN
Purpose: The main purpose of the research is to devise and present a concept for a solution enabling integration of popular off-the-shelf online forms with a tool aligned with the MiRel concept used for quality measurement by application of the SERVQUAL method. Design/methodology/approach: The analysis performed by the author comprised various possibilities of using standard features of popular online forms to store data for purposes of the SERVQUAL method. This involved identification of several potential layouts of the master table where the answers previously received are kept. The analysis concerned the data structure applied in the tool designed, as proposed in the literature, in accordance with the MiRel concept, to support the method in question. The elements identified in this structure were the attributes whose values should be entered directly and manually in tables as well as those whose values should be added automatically on the basis of the answers previously received. Solutions were developed to enable automatic data migration from the master table to the tool’s respective tables. Findings: The data required for purposes of the SERVQUAL analysis, supported by a tool created in a spreadsheet according to the MiRel concept, can be successfully stored by means of commonly available online forms. What proves problematic is the impossibility of verifying the correctness of the answers in terms of the relevance of individual dimensions, yet in this respect both the verification and potential adjustment of the answers received can be inherent in the mechanism responsible for data migration from the master table to the tool’s tables. A fully functional solution enabling data to be retrieved from the master table and moved to the tool’s tables can be developed using built-in spreadsheet features only, without the need for any code created in any programming language. Practical implications: The solution proposed in the paper can be used in practice when measuring quality using the SERVQUAL method. Originality/value: The concept described in the paper is the author’s original solution.
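A minimal sketch of the migration step described in the findings: answers accumulated in a master table by an online form are split into the expectation and perception tables a SERVQUAL tool operates on, and the per-statement gap is computed. Column names are hypothetical:

```python
# Migrate form answers from a master table into SERVQUAL working tables.
import pandas as pd

master = pd.DataFrame({
    "respondent": [1, 1, 2, 2],
    "statement":  ["S1", "S2", "S1", "S2"],
    "expectation": [7, 6, 5, 6],
    "perception":  [5, 6, 4, 5],
})

expectations = master.pivot(index="respondent", columns="statement",
                            values="expectation")
perceptions = master.pivot(index="respondent", columns="statement",
                           values="perception")

# SERVQUAL gap per statement: perception minus expectation, averaged.
gap = (perceptions - expectations).mean()
print(gap)
```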
EN
Basic information about the network monitoring process is introduced. Two monitoring methods for collecting data from network devices are distinguished. Logs and metrics are described as the elements containing information about the current state of the network. A description of metropolitan networks in Poland, the solutions they apply and the specificity of such networks is presented. The monitoring systems are discussed in terms of the scope of collected and processed data. An analysis of the collection and processing of network device data and of the impact on device load is presented. For this purpose, statistical data on system load collected by a Juniper MX router are processed. Moreover, the measurement metric used and the results obtained for the selected network device are presented. Finally, conclusions are discussed in terms of implementing monitoring and warning systems.
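A minimal sketch of the metric-processing step described above: summarising collected system-load samples from a network device and flagging values a warning system should react to; the sample values and threshold are assumptions:

```python
# Summarise device load samples and flag values above a warning threshold.
import statistics

# Load samples as they might be collected from a device at fixed intervals (%).
samples = [12, 14, 13, 55, 61, 15, 13, 12, 72, 14]
THRESHOLD = 50                     # warning level, illustrative

mean_load = statistics.mean(samples)
p95 = sorted(samples)[int(0.95 * (len(samples) - 1))]
alerts = [(i, v) for i, v in enumerate(samples) if v > THRESHOLD]

print(f"mean load {mean_load:.1f}%, p95 {p95}%, {len(alerts)} samples above threshold")
```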
EN
The main purpose of this article is to present the impact of the authors’ tweaks to the default configuration on the processing speed of Apache NiFi. Additionally, how performance scales with an increasing number of nodes in a computing cluster has been examined. The results achieved were thoroughly analyzed in terms of performance and the values of key indicators.
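A minimal sketch of how throughput indicators of the kind analysed here can be read from a running NiFi instance through its standard REST endpoint /nifi-api/flow/status; the host, port and absence of authentication are assumptions:

```python
# Read basic flow status indicators from the NiFi REST API.
import requests

resp = requests.get("http://localhost:8080/nifi-api/flow/status", timeout=10)
resp.raise_for_status()
status = resp.json()["controllerStatus"]

print("active threads:", status["activeThreadCount"])
print("flowfiles queued:", status["flowFilesQueued"])
```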
Predict the future and prevent problems with artificial intelligence
PL
The goal of AI development work is to reach a level that allows a certain projection of the future and, consequently, autonomous decision-making. Artificial intelligence is thus able, in a way, to predict the future and help solve the problems companies struggle with. How does it do this?
EN
Purpose: The study aims to diagnose the corrosion current density in a coating defect on the outer surface of an ammonia pipeline depending on the distance to the pumping station, taking into account the interaction of media at the soil-steel interface and using modern graphical data visualization technologies and approaches to model such a system. Design/methodology/approach: The use of an automated system for monitoring defects in underground metallic components of structures, in particular in ammonia pipelines, is proposed. The information processing approach opens additional opportunities in solving the problem of defect detection. Temperature and pressure indicators play an important role, because these parameters must be taken into account for the safe transportation of ammonia through the pipeline. The analysis of diagnostic signs on the outer surface of the underground metallic ammonia pipeline is carried out taking into account temperature changes and corrosion currents. The parameters and relations of a mathematical model describing the influence of thermal processes and mechanical loading in the vicinity of pumping stations on the corresponding corrosion currents in the metal of the ammonia pipeline are offered. Findings: The paper evaluates the corrosion current density in the coating defect on the metal surface depending on the distance to the pumping station and the relationship between the corrosion current density and the characteristics of the temperature field at a distance L = 0…20 km from the pumping station. The relative density of the corrosion current is also compared with the energy characteristics of the surface layers at a distance L = 0…20 km from the pumping station. An information system using cloud technologies for data processing and visualization has been developed, which simplifies the analysis of data on corrosion currents on the metal surface of an ammonia pipeline. Research limitations/implications: The study was conducted for the section from the pumping station to the pipeline, directly on a relatively small data set. Practical implications: The client-server architecture has become very popular, thanks to which monitoring can be carried out in any corner of the planet using Internet data transmission protocols. At the same time, cloud technologies allow such software to be deployed on remote physical computers. The use of the Amazon Web Services cloud environment as a common tool for working with data, with the ability to use ready-made extensions, is proposed. This cloud technology also simplifies the procedure of public and secure access to the collected information for further analysis. Originality/value: Use of cloud environments and databases to monitor ammonia pipeline defects for correct resource assessment.
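A minimal sketch of the relationship analysis described in the findings: synthetic corrosion current densities over L = 0…20 km from the pumping station fitted with an exponential decay; all numbers are placeholders, not measured values:

```python
# Fit a decay law to corrosion current density versus distance from the station.
import numpy as np

distance_km = np.linspace(0.0, 20.0, 21)
# Synthetic current densities decreasing with distance (temperature falls away
# from the station), in arbitrary units, with small measurement noise.
rng = np.random.default_rng(4)
current = 1.0 * np.exp(-distance_km / 8.0) + rng.normal(0.0, 0.02, 21)

# Linear regression on log(current) estimates the decay length.
slope, intercept = np.polyfit(distance_km, np.log(np.clip(current, 1e-6, None)), 1)
print(f"fitted decay length: {-1.0 / slope:.1f} km")
```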