Search results
Searched in keywords: data processing
Results found: 123

1
EN
The article focuses on the key role of information sanitization in the context of unmanned medical evacuation systems, particularly during battlefield operations. It analyzes the importance of the filtering, analysis and processing of tactical data within the MEWA MED system, aimed at ensuring efficient coordination of operations and rapid response to emergency situations. The article also presents the potential and the current state of industry work in the area of data sanitization, as well as the possibilities for cooperation between unmanned medical evacuation systems and inter-system gateways. Finally, it discusses the functionality of the MEWA MED Module, expanded with elements characteristic of BMS-class systems, which allows for a better understanding of the synergies between these technologies and their potential impact on the effectiveness of battlefield operations.
2
EN
Purpose: The main purpose of the study was to develop and demonstrate a concept enabling the application of popular and commonly available online forms, combined with a spreadsheet, to support data collection and processing under the ABCD (Suzuki) method. Design/methodology/approach: The factors which determine the various ways in which the method in question is applied were first identified, and then it was established which of them affected the manner in which the form to be filled in by experts is designed. Different variants of the method were identified on this basis. For individual variants, the possibility of using different types of questions was discussed by considering the features available in the most popular free-of-charge solutions for developing online forms. Diverse data layouts were also identified to establish the frameworks in which data are represented in spreadsheet files. Solutions which make it possible to automatically produce the consolidated reports required for purposes of the ABCD method were identified for each of the data layouts originally defined. Findings: When combined with a spreadsheet, popular online forms enable highly efficient data collection and processing under the ABCD method. Where the method is applied according to the variant in which every cause is rated, an adequate data collection form can be created using both of the online form solutions analysed. If the method is applied according to the variant in which every rating must be used precisely once, developing a useful tool becomes significantly more complicated. Where this is the case, one can create a suitable form to validate the input data only by using the solution delivered by Google. Additionally, the layout of such a form must be reversed compared to the traditional form functioning under the ABCD method. Considering the diverse variants of the ABCD method linked with the various kinds of questions used to build the form, three different layouts of the data collected by means of a spreadsheet were identified. With respect to each of them, one can devise a solution to ensure automated generation of the consolidated reports typical of the method in question. Practical implications: The solution proposed in the paper can be applied in practice while using the ABCD (Suzuki) method. Originality/value: The concept described in the paper is the author's original solution.
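As an illustration of the consolidated-report step under the rated-causes variant, here is a minimal Python sketch. It assumes a CSV exported from the online form (one row per expert, one column per cause, numeric ratings); the file name and column handling are hypothetical, not the author's actual tool.

```python
# Builds the consolidated rating-count report used in the ABCD (Suzuki)
# method from a hypothetical form export: one row per expert, one column
# per cause, cell value = the rating given by that expert.
import csv
from collections import Counter

def consolidated_report(path):
    with open(path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    causes = [c for c in rows[0] if c != "Timestamp"]  # forms often add a timestamp column
    report = {}
    for cause in causes:
        counts = Counter(int(r[cause]) for r in rows)
        # weighted score: rating value times the number of experts who chose it
        report[cause] = {"counts": dict(counts),
                         "score": sum(v * n for v, n in counts.items())}
    return report

for cause, stats in consolidated_report("abcd_responses.csv").items():
    print(cause, stats)
```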
3
Content available Data processing for oil spill domain movement models
EN
This chapter reviews various data processing techniques for modelling the movement of oil spills, including data acquisition, quality control, and pre-processing. It highlights the importance of incorporating physical and environmental factors, such as wind, currents, and water temperature, into oil spill trajectory prediction models. It also discusses the challenges associated with data processing, including data availability and uncertainty, and emphasizes the significance of sound data processing practices for effective response planning and mitigation efforts. Finally, by discussing potential areas of improvement as well as model assumptions and limitations, the chapter aims to inspire further research and development in the field, which can lead to more accurate and reliable oil spill movement models.
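As a concrete illustration of the kind of model reviewed here (not the chapter's specific method), below is a minimal Lagrangian advection step in Python, using the common empirical approximation that surface oil drifts with the current plus roughly 3% of the wind; all values are placeholders.

```python
# Minimal oil spill advection sketch: particle drift = surface current plus
# a small fraction of the wind (a widely used empirical assumption).
import numpy as np

WIND_FACTOR = 0.03  # ~3% wind drift, a common rule of thumb

def advect(positions, current, wind, dt):
    """positions: (N, 2) in metres; current, wind: (N, 2) in m/s; dt in s."""
    velocity = current + WIND_FACTOR * wind
    return positions + velocity * dt

# Example: one particle, 0.2 m/s easterly current, 8 m/s northerly wind, 1 h step
pos = np.array([[0.0, 0.0]])
print(advect(pos, np.array([[0.2, 0.0]]), np.array([[0.0, 8.0]]), 3600.0))
```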
4
EN
The object of research is the development of a specialized measuring information system for the study and control of relaxation processes in materials and technical systems. The purpose of the work is the use of computer technologies to eliminate routine operations associated with the processing of experimental data and to increase the speed, accuracy and information content of the process of studying and controlling gas sensors. A variant of using computer data processing to automate the processing and primary analysis of experimental data from scientific research and from control of the physicochemical parameters of gas-sensitive materials is proposed. The developed computer data processing system provides a practical opportunity to use measurements of the kinetic characteristics of the gas sensitivity of gas sensors for their experimental study and control and, thus, to achieve higher accuracy and information content. Testing of the developed information-measuring system confirmed its operability and compliance with the requirements for improving the accuracy and speed of processing.
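One routine operation such a system automates is extracting kinetic parameters from a sensor response curve. The sketch below is a generic illustration, not the paper's system: it fits an exponential relaxation model to hypothetical, synthetically generated data.

```python
# Fit R(t) = R_inf + (R_0 - R_inf) * exp(-t / tau) to a sensor response
# curve to recover the kinetic time constant tau automatically.
import numpy as np
from scipy.optimize import curve_fit

def relaxation(t, r_inf, r0, tau):
    return r_inf + (r0 - r_inf) * np.exp(-t / tau)

t = np.linspace(0, 120, 240)                       # s, hypothetical sampling
r = relaxation(t, 5.0, 20.0, 18.0) + np.random.normal(0, 0.1, t.size)
(p_rinf, p_r0, p_tau), _ = curve_fit(relaxation, t, r, p0=(r[-1], r[0], 10.0))
print(f"tau = {p_tau:.1f} s")                      # extracted kinetic parameter
```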
5
EN
The suitability of several low-labor geostatistical procedures for the interpolation of highly positively skewed seismic data distributions was tested in the Baltic Basin. These procedures were combinations of various estimators of the model of spatial variation (theoretical variogram) and kriging techniques, with or without an initial transformation of the data to a normal distribution. This transformation consisted of logarithmization or normalization using the anamorphosis technique. Two variants of the theoretical variogram estimator were used: the commonly used classical Matheron estimator and the inverse covariance estimator (InvCov), which is robust with regard to non-ergodic data. It was expected that the latter would also be resistant to strongly skewed data distributions. The kriging techniques used included the commonly used ordinary kriging, simple kriging (useful for standardized data) and the non-linear median indicator kriging technique. It was confirmed that normalization (anamorphosis), which results in a standardized normal distribution, is the most useful and least laborious of the geostatistical procedures suitable for such data. The second, less obvious conclusion for highly skewed data distributions is that the non-ergodic inverse covariance (InvCov) variogram estimator has an advantage over Matheron's estimator: it gives a better assessment of the C0 (nugget effect) and C (sill) parameters of the spatial variability model. This conclusion can be drawn from the fact that the higher the estimate of the relative nugget effect L = C0/(C0 + C) obtained with the InvCov estimator, the weaker the correlation between the kriging estimates and the observed values. The values of the coefficient L obtained with Matheron's estimator do not meet this expectation.
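Two of the steps named above are easy to sketch in Python under hypothetical inputs: a rank-based normal-score transform (an empirical Gaussian anamorphosis) and the relative nugget effect L = C0/(C0 + C) computed from fitted variogram parameters. Neither reproduces the study's actual fitting workflow; the parameter values are placeholders.

```python
# Empirical Gaussian anamorphosis and relative nugget effect, illustrated
# on synthetic strongly right-skewed data.
import numpy as np
from scipy.stats import norm, skew

def normal_score(x):
    """Map a (skewed) sample to a standard normal distribution via ranks."""
    ranks = np.argsort(np.argsort(x))
    return norm.ppf((ranks + 0.5) / len(x))

def relative_nugget(c0, c):
    return c0 / (c0 + c)

x = np.random.default_rng(0).lognormal(size=500)   # strongly right-skewed
print(skew(x), skew(normal_score(x)))              # skewness drops to ~0
print(relative_nugget(c0=0.35, c=0.65))            # hypothetical variogram fit
```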
6
EN
As Earth observation technology has advanced, the volume of remote sensing big data has grown rapidly, posing significant obstacles to efficient and effective processing and analysis. A convolutional neural network is a neural network that involves convolutional calculations; it is a form of deep learning with representation-learning characteristics that can classify input information into categories. Remote sensing data from various sensors is generally adjusted and refined through image processing. Image processing techniques, such as filtering and feature detection, are well suited to the high dimensionality of geographically distributed systems. A geological entity is a term in geological work that refers to a product of geological processes that occupies a certain space in the Earth's crust and differs from other materials; geological entities are of different sizes and are divided into types accordingly. The work focuses on improving classification accuracy and accurately describing scattering types. For geological entity recognition, this paper proposes a Deep Convolutional Neural Network for Polarized Synthetic Aperture Radar (DCNN-PSAR), which is expected to combine deep convolutional neural network technology with polarized SAR technology to explore new methods of recognizing geological entities and to improve geological recognition capabilities. With the help of multimodal remote sensing data processing, it is now possible to characterize and identify the composition of the Earth's surface from orbital and aerial platforms. The paper proposes a ground object classification algorithm for polarized SAR images based on a fully convolutional network, which realizes the geological classification function and overcomes the drawback of overly long processing. The evaluation of DCNN-PSAR shows that the accuracy for the water area rises quickly in the early stage, directly from 0.14 to 0.6, and more slowly later. DCNN-PSAR achieves the highest quality of remote sensing data extraction.
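To make the fully convolutional idea concrete, here is a minimal PyTorch sketch for pixel-wise classification of polarimetric SAR features. This is a generic FCN under assumed shapes (9 polarimetric channels, 5 classes), not the DCNN-PSAR architecture itself, which the abstract does not specify.

```python
# A tiny fully convolutional network: no fully connected layers, so the
# output is a per-pixel class map of the same spatial size as the input.
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self, in_ch=9, n_classes=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, n_classes, 1),   # 1x1 conv yields per-pixel logits
        )

    def forward(self, x):                  # x: (B, in_ch, H, W)
        return self.net(x)                 # logits: (B, n_classes, H, W)

x = torch.randn(1, 9, 64, 64)              # a hypothetical PolSAR feature patch
print(TinyFCN()(x).shape)                  # torch.Size([1, 5, 64, 64])
```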
7
EN
Purpose: The main purpose of the research is to devise and present a concept for a solution enabling the integration of popular off-the-shelf online forms with a tool aligned with the MiRel concept, used for quality measurement by application of the SERVQUAL method. Design/methodology/approach: The analysis performed by the author comprised various possibilities of using standard features of popular online forms to store data for purposes of the SERVQUAL method. This involved the identification of several potential layouts of the master table where the answers previously received are kept. The analysis concerned the data structure applied in the tool, designed in accordance with the MiRel concept proposed in the literature to support the method in question. The elements identified in this structure were the attributes whose values should be entered directly and manually in tables, as well as those whose values should be added automatically on the basis of the answers previously received. Solutions were developed to enable automatic data migration from the master table to the tool's respective tables. Findings: The data required for purposes of the SERVQUAL analysis, supported by a tool created in a spreadsheet according to the MiRel concept, can be successfully stored by means of commonly available online forms. What proves to be problematic is the impossibility of verifying the correctness of the answers in terms of the relevance of individual dimensions, yet in this respect both the verification and potential adjustment of the answers received can be inherent in the mechanism responsible for data migration from the master table to the tool's tables. A fully functional solution enabling data to be retrieved from the master table and moved to the tool's tables can be developed using built-in spreadsheet features only, without the need for any code created in any programming language. Practical implications: The solution proposed in the paper can be used in practice when measuring quality using the SERVQUAL method. Originality/value: The concept described in the paper is the author's original solution.
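A minimal sketch of the data-migration idea, not the author's MiRel-based tool: rows from a form-generated master table are split into per-dimension tables, and the SERVQUAL gap (perception minus expectation) is consolidated per dimension. All column names and values are hypothetical.

```python
# Migrate rows from a master table into per-dimension tables and compute
# SERVQUAL gap scores (perception - expectation).
import pandas as pd

master = pd.DataFrame({
    "dimension":   ["Reliability", "Reliability", "Tangibles"],
    "expectation": [6, 7, 5],
    "perception":  [5, 6, 6],
})
master["gap"] = master["perception"] - master["expectation"]

# "migration": one table per dimension, mirroring the tool's structure
tables = {dim: grp.reset_index(drop=True)
          for dim, grp in master.groupby("dimension")}
print(master.groupby("dimension")["gap"].mean())   # consolidated gap per dimension
```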
8
EN
Basic information about the network monitoring process is introduced. Two monitoring methods for collecting data from network devices are distinguished. Logs and metrics are described as the elements containing information about the current state of the network. A description of metropolitan networks in Poland, the solutions they apply and the specificity of these networks is presented. The monitoring systems are discussed in terms of the scope of collected and processed data. An analysis of the collection and processing of network device data, and of the impact on device load, is presented. For this purpose, statistical data on system load collected by a Juniper MX router are processed. Moreover, the measurement metric used and the results obtained for the selected network device are presented. Finally, conclusions are discussed in terms of implementing monitoring and warning systems.
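The processing step described above reduces a polled load series to summary statistics and warning events. A minimal, generic Python sketch follows; the sample values, polling interval and threshold are hypothetical, not the article's measurements.

```python
# Summarise a polled system-load series and flag samples above a warning level.
import statistics

samples = [(0, 11.0), (60, 14.5), (120, 35.2), (180, 12.3)]  # (t [s], CPU %)
THRESHOLD = 30.0  # hypothetical warning level, per cent

loads = [cpu for _, cpu in samples]
print("mean load:", statistics.mean(loads))
print("peak load:", max(loads))
print("alerts:", [(t, cpu) for t, cpu in samples if cpu > THRESHOLD])
```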
9
EN
The main purpose of this article is to present the impact of the authors' tweaks to the default configuration on the processing speed of Apache NiFi. Additionally, how performance scales with an increasing number of nodes in a computing cluster has been examined. The results achieved were thoroughly analyzed in terms of performance and key indicators.
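The scaling analysis mentioned above is straightforward to reproduce: given measured throughput per cluster size, compute speedup and parallel efficiency relative to a single node. The figures below are placeholders, not the article's results.

```python
# Speedup and parallel efficiency of a cluster relative to one node.
def scaling(throughput_by_nodes):
    base = throughput_by_nodes[1]
    for n, tput in sorted(throughput_by_nodes.items()):
        speedup = tput / base
        print(f"{n} node(s): speedup {speedup:.2f}, efficiency {speedup / n:.2f}")

scaling({1: 10_000, 2: 18_500, 4: 33_000})  # records/s, hypothetical
```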
10
Content available remote Przewidzieć przyszłość i zapobiec problemom dzięki sztucznej inteligencji [Predicting the future and preventing problems thanks to artificial intelligence]
EN
The goal of AI development work is to reach a level that allows a certain projection of the future and, consequently, autonomous decision-making. Artificial intelligence is thus able, in a way, to predict the future and help solve the problems that companies face. How does it do that?
11
EN
Purpose: The study aims to diagnose the corrosion current density in a coating defect on the outer surface of an ammonia pipeline depending on the distance to the pumping station, taking into account the interaction of media at the soil-steel interface and using modern graphical data visualization technologies and approaches to model such a system. Design/methodology/approach: The use of an automated system for monitoring defects in underground metallic components of structures, in particular in ammonia pipelines, is proposed. The information processing approach opens additional opportunities for solving the problem of defect detection. Temperature and pressure indicators in the pipeline play an important role, because these parameters must be taken into account for safe transportation through the ammonia pipeline. The analysis of diagnostic signs on the outer surface of the underground metallic ammonia pipeline is carried out taking into account temperature changes and corrosion currents. The parameters and relations of a mathematical model describing the influence of thermal processes and mechanical loading in the vicinity of pumping stations on the corresponding corrosion currents in the metal of the ammonia pipeline are offered. Findings: The paper evaluates the corrosion current density in the coating defect on the metal surface depending on the distance to the pumping station, and the relationship between the corrosion current density and the characteristics of the temperature field at a distance L = 0…20 km from the pumping station. The relative density of the corrosion current is also compared with the energy characteristics of the surface layers at a distance L = 0…20 km from the pumping station. An information system using cloud technologies for data processing and visualization has been developed, which simplifies the analysis of corrosion currents on the metal surface of an ammonia pipeline. Research limitations/implications: The study was conducted for the section from the pumping station to the pipeline on a relatively small data set. Practical implications: The client-server architecture has become very popular; thanks to it, monitoring can be carried out anywhere on the planet using Internet data transmission protocols, while cloud technologies allow such software to be deployed on remote physical computers. The use of the Amazon Web Services cloud environment as a common tool for working with data, with the ability to use ready-made extensions, is proposed. This cloud technology also simplifies public and secure access to the collected information for further analysis. Originality/value: The use of cloud environments and databases to monitor ammonia pipeline defects for correct resource assessment.
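The visualization step lends itself to a short sketch. The snippet below plots measured corrosion current density in a coating defect against distance from the pumping station; the CSV layout, column names and file name are hypothetical, and no physical model is assumed.

```python
# Plot corrosion current density vs. distance from the pumping station.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("corrosion_currents.csv")   # columns: distance_km, j_uA_cm2
df = df.sort_values("distance_km")
plt.plot(df["distance_km"], df["j_uA_cm2"], marker="o")
plt.xlabel("Distance from pumping station, km (0-20)")
plt.ylabel("Corrosion current density, uA/cm2")
plt.title("Coating defect corrosion current vs. distance")
plt.show()
```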
12
EN
The paper considers the problem of information credibility. Currently, this problem affects scientists as well as ordinary people who depend on information networks. Hence, the author formulates three postulates that should be observed in dealing with the quality of information: P1 – identify the source of information; P2 – determine the level of credibility of the information source; P3 – recognize the purpose of information dissemination. The first two postulates are universal because they apply to all users of information. The third becomes more and more important in the social and political choices of citizens. In scientific work, empirical facts are transformed into empirical data (increasingly, in the form of big data), which are the results of advanced registration and processing by means of technical and information science tools, such as: a) technical transformation of the empirical signal into information; b) statistical selection of signals and, next, statistical processing of the received data; c) assessment of results for suitability in applications. Other "epistemic" factors, however, are also involved, such as: d) the conceptual apparatus used for idealization (and then for interpretation); e) assessment of the results in terms of compliance with the epistemological (sometimes also commercial or ideological) position. All these factors should be the subject of careful study within the errology proposed by P. Homola.
13
EN
High values of salt rock mass convergence might cause serious problems with the maintenance of shaft lining located in salt rock sections. The most efficient existing method of preventing the negative influence of convergence is the periodic removal of creeping salt from shaft walls. However, the process of salt removal is problematic in terms of typical shaft and hoisting system operation. A new shaft lining concept allows removal of creeping salt by leaching without the need to stop shaft operation. The following paper presents software, developed in the LabVIEW environment and applied in the framework of a test facility, designed to verify the theoretical assumptions of the new shaft lining construction. The developed software consists of an application for data acquisition, based on the Event-Driven Queued State Machine pattern, and one for data processing, designed as a Producer-Consumer pattern.
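The paper's software is graphical LabVIEW code; below is a minimal Python analogue of the Producer-Consumer pattern it names, with an acquisition thread queueing readings and a consumer processing them. The data source is simulated, so this is only a structural sketch.

```python
# Producer-Consumer: the producer queues raw "sensor" readings, the consumer
# dequeues and processes them independently of the acquisition rate.
import queue
import random
import threading
import time

q = queue.Queue()
STOP = object()          # sentinel marking the end of acquisition

def producer(n_samples=5):
    for _ in range(n_samples):
        q.put(random.gauss(0.0, 1.0))   # stand-in for a measured value
        time.sleep(0.01)
    q.put(STOP)

def consumer():
    while (item := q.get()) is not STOP:
        print(f"processed: {item:+.3f}")

threading.Thread(target=producer).start()
consumer()
```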
15
Content available remote Sentinel w zasięgu ręki [Sentinel within reach]
16
Content available remote Analizy na sterydach [Analytics on steroids]
17
Content available Utrzymanie ruchu a Przemysł 4.0 [Maintenance and Industry 4.0]
EN
Maintenance is daily, systematic work that involves carrying out tasks to prevent degradation of the technical condition of machines and equipment and the occurrence of failures and, when failures do occur, removing the degradation in order to restore production assets to the best possible functionality.
18
Content available Exhaled breath analysis by resistive gas sensors
EN
Breath analysis has attracted human beings for centuries. It was one of the simplest methods of detecting various diseases, using the human sense of smell only. Advances in technology enable the use of more reliable and standardized methods based on different gas sensing systems. Breath analysis requires the detection of volatile organic compounds (VOCs) at concentrations below single ppm (parts per million). Therefore, advanced detection methods have been proposed. Some of these methods use expensive and bulky equipment (e.g. optical sensors, mass spectrometry - MS) and require time-consuming analysis. Less accurate, but much cheaper, are resistive gas sensors. These sensors use porous materials, and adsorption-desorption processes determine their physical parameters. We consider the problems of applying resistive gas sensors to breath analysis. Recent advances are outlined, showing that these economical gas sensors can be efficiently employed to analyse breath samples. General problems of applying resistive gas sensors are considered and illustrated with examples, predominantly related to commercial sensors and their long-term performance. A setup for the collection of breath samples is considered and presented to point out the crucial parts and problematic issues.
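One common calibration model for resistive metal-oxide sensors is a power law relating the resistance ratio to gas concentration; the sketch below inverts it to recover concentration from a measurement. The parameters a and b are hypothetical and the model is generic, not tied to any specific commercial device or to this paper.

```python
# Invert the power-law response R_s / R_0 = a * C**(-b) of a resistive
# metal-oxide sensor to estimate gas concentration C from resistance.
def concentration_ppm(r_s, r_0, a=1.0, b=0.6):
    """r_s: resistance in gas; r_0: baseline resistance; a, b: calibration."""
    return (r_s / (a * r_0)) ** (-1.0 / b)

# Example: resistance dropped from 100 kOhm to 20 kOhm in the sampled breath
print(f"{concentration_ppm(r_s=2.0e4, r_0=1.0e5):.2f} ppm")
```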
19
Content available A quaternion clustering framework
EN
Data clustering is one of the most popular methods of data mining and cluster analysis. The goal of clustering algorithms is to partition a data set into a specific number of clusters in order to compress or summarize the original values. A variety of clustering algorithms are available in the related literature. However, research on the clustering of data parametrized by unit quaternions, which are commonly used to represent 3D rotations, is limited. In this paper we present a quaternion clustering methodology, including an algorithm proposal for quaternion-based k-means, along with quaternion clustering quality measures obtained by enhancing known indices and an automated procedure for selecting the optimal number of clusters. The validity of the proposed framework has been tested in experiments performed on generated and real data, including human gait sequences recorded using a motion capture technique.
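A minimal sketch of k-means over unit quaternions follows — one plausible realization of such a framework, not necessarily the paper's exact algorithm. The distance 1 - |⟨p, q⟩| respects the q/−q double cover of 3D rotations, and each cluster mean is the dominant eigenvector of the sum of outer products (Markley's quaternion averaging).

```python
# k-means on unit quaternions with a double-cover-aware distance and
# eigenvector-based cluster means.
import numpy as np

def qdist(p, q):
    return 1.0 - abs(np.dot(p, q))        # identical rotations give distance 0

def qmean(qs):
    m = sum(np.outer(q, q) for q in qs)
    _, vecs = np.linalg.eigh(m)
    return vecs[:, -1]                    # eigenvector of the largest eigenvalue

def quaternion_kmeans(qs, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = qs[rng.choice(len(qs), k, replace=False)]
    for _ in range(iters):
        labels = np.array([min(range(k), key=lambda j: qdist(q, centers[j]))
                           for q in qs])
        centers = np.array([qmean(qs[labels == j]) if np.any(labels == j)
                            else centers[j] for j in range(k)])   # keep empty clusters' centers
    return labels, centers

qs = np.random.default_rng(1).normal(size=(100, 4))
qs /= np.linalg.norm(qs, axis=1, keepdims=True)   # random unit quaternions
labels, _ = quaternion_kmeans(qs, k=3)
print(np.bincount(labels))
```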
20
Content available remote Przyszłość w chmurze? [A future in the cloud?]