Search results
Searched in keywords: big data analysis
Results found: 8
EN
With the rapid development of economic globalisation, global economic and trade activities are escalating. However, environmental problems, and the green economy that has emerged in response to them, have led to the widespread introduction of green trade barriers. These barriers implicitly limit the development of trade activities. This paper focuses on the export difficulties caused by green trade barriers and proposes a method to quantify discrete product characteristics, explore the internal characteristics of commodities, and decide optimally on intended export regions. Firstly, the discrete features of products are quantified by a quantitative transformation method. Secondly, the quantitative data are used to derive the best decision on export regions through the support vector regression (SVR) method. Particle swarm optimisation is used to tune the SVR parameters and achieve high-precision decision making. Comparison with historical data from the industry park shows the identification accuracy of the optimised SVR model to be better than that of the traditional regression model. This finding offers a novel perspective for developing imports and exports against the background of green trade barriers.
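A minimal sketch of the kind of pipeline the abstract describes: support vector regression whose C and gamma hyperparameters are tuned by a plain particle swarm, scored by cross-validation. The synthetic feature matrix, the suitability target, and all PSO settings are illustrative assumptions, not the authors' data or exact optimisation scheme.

```python
# Hedged sketch: PSO-tuned SVR on synthetic "quantified product feature" data.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((200, 6))                                  # assumed quantified discrete product features
y = X @ rng.random(6) + 0.1 * rng.standard_normal(200)   # stand-in export-region suitability score

def fitness(params):
    """Cross-validated negative MSE of an SVR with the given (C, gamma)."""
    C, gamma = params
    model = SVR(C=C, gamma=gamma)
    return cross_val_score(model, X, y, cv=3, scoring="neg_mean_squared_error").mean()

# Plain particle swarm over (C, gamma) within assumed bounds
n_particles, n_iters = 12, 20
low, high = np.array([0.1, 1e-3]), np.array([100.0, 1.0])
pos = rng.uniform(low, high, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()]

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, low, high)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()]

print("best (C, gamma):", gbest)
```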
EN
Big data, artificial intelligence and the Internet of Things (IoT) remain very popular areas in current research and industrial applications. Processing the massive amounts of data generated by the IoT and stored in distributed spaces is not a straightforward task and may cause many problems. During the last few decades, scientists have proposed many interesting approaches to extract information and discover knowledge from data collected in database systems or other sources. We observe a continual development of machine learning algorithms that support each phase of the data mining process, ensuring better results than before. Rough set theory (RST) delivers a formal insight into information, knowledge, data reduction, uncertainty, and missing values. This formalism, formulated in the 1980s and developed by several researchers, can serve as a theoretical basis and practical background for dealing with ambiguities, data reduction, building ontologies, etc. Moreover, as a mature theory, it has evolved into numerous extensions and has been transformed through various incarnations, which have enriched the expressiveness and applicability of the related tools. The main aim of this article is to present an overview of selected applications of RST in big data analysis and processing. Thousands of publications on rough sets have appeared; therefore, we focus on papers published in the last few years. The applications of RST are considered from two main perspectives: direct use of the RST concepts and tools, and use jointly with other approaches, i.e., fuzzy sets, probabilistic concepts, and deep learning. The latter hybrid idea seems very promising for developing new methods and related tools as well as for extending the application area.
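A toy illustration of the core RST notions the survey builds on: lower and upper approximations of a concept under the indiscernibility relation induced by condition attributes. The decision table is an invented example, not taken from the surveyed papers.

```python
# Hedged sketch: rough set lower/upper approximations over a tiny decision table.
from collections import defaultdict

# (object id, condition attributes, decision) -- assumed toy data
table = [
    (1, ("high", "yes"), "risk"),
    (2, ("high", "yes"), "safe"),
    (3, ("low",  "no"),  "safe"),
    (4, ("low",  "yes"), "risk"),
]

# Equivalence classes of the indiscernibility relation over condition attributes
classes = defaultdict(set)
for obj, cond, _ in table:
    classes[cond].add(obj)

target = {obj for obj, _, dec in table if dec == "risk"}   # concept to approximate

lower = set().union(*(c for c in classes.values() if c <= target))  # certainly in the concept
upper = set().union(*(c for c in classes.values() if c & target))   # possibly in the concept

print("lower approximation:", lower)
print("upper approximation:", upper)
print("boundary region:", upper - lower)
```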
EN
This work presents an original model for detecting machine tool anomalies and emergency states through operation data processing. The paper focuses on an elastic hierarchical system for effective data reduction and classification, which encompasses several modules. Firstly, principal component analysis (PCA) is used to reduce the many input signals from big data tree topology structures into two signals representing all of them. Then a technique for segmenting operating-machine data based on dynamic time warping and hierarchical clustering is used to calculate signal accident characteristics using classifiers such as the maximum level change, the signal trend, the variance of residuals, and others. The data segmentation and analysis techniques enable effective and robust detection of machine tool anomalies and emergency states thanks to near real-time data collection from strategically placed sensors and results collected from previous production cycles. The emergency state detection model described in this paper could be beneficial for improving the production process, increasing production efficiency by detecting and minimizing machine tool error conditions, and improving product quality and overall equipment productivity. The proposed model was tested on H-630 and H-50 machine tools in a real production environment of the Tajmac-ZPS company.
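A minimal sketch of the first module described above, assuming several correlated sensor channels: PCA compresses six synthetic machine signals into two representative components. The channel mix and sample counts are assumptions, not Tajmac-ZPS production data.

```python
# Hedged sketch: PCA reducing many machine-tool signals to two representative ones.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 2000)
# Stand-ins for spindle load, vibration, temperature, current, ...: correlated sinusoids plus noise
signals = np.column_stack([
    np.sin(2 * np.pi * f * t) + 0.2 * rng.standard_normal(t.size)
    for f in (0.5, 0.5, 1.0, 1.0, 2.0, 2.0)
])

pca = PCA(n_components=2)
reduced = pca.fit_transform(signals)      # shape (2000, 2): two signals representing all six

print("explained variance ratio:", pca.explained_variance_ratio_)
```

The two reduced signals would then feed the segmentation and classification stages (dynamic time warping, hierarchical clustering) mentioned in the abstract.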
EN
In this paper, the effects of the COVID-19 pandemic on the stock market network are analyzed by an application of operational research with a mathematical approach. For this purpose, two minimum spanning trees are constructed, one for each time period: before and during the COVID-19 pandemic. The dynamic time warping algorithm is used to measure the similarity between the time series of the investigated stock markets. Clusters of the investigated stock markets are then constructed, and numerical values of the topology evaluation are computed for each cluster and time period.
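A hedged sketch of the pipeline outlined above: a DTW distance matrix between a few toy return series, a minimum spanning tree over that matrix, and hierarchical clusters derived from it. The series names and cluster count are illustrative assumptions, not the markets studied in the paper.

```python
# Hedged sketch: DTW distances -> minimum spanning tree -> hierarchical clusters.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def dtw(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(2)
series = {name: rng.standard_normal(100).cumsum() for name in ("IDX_A", "IDX_B", "IDX_C")}
names = list(series)

dist = np.zeros((len(names), len(names)))
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        dist[i, j] = dist[j, i] = dtw(series[names[i]], series[names[j]])

mst = minimum_spanning_tree(dist).toarray()   # MST edges for one time period
clusters = fcluster(linkage(squareform(dist), "average"), t=2, criterion="maxclust")
print(dict(zip(names, clusters)))
```

In the paper's setting this would be run once per period (before and during the pandemic) and the resulting tree topologies compared.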
5
Towards a Logic Programming Tool for Cancer Data Analysis
EN
The main goal of this work is to propose a tool-chain capable of analyzing a data collection of temporally qualified (genetic) mutation profiles, i.e., a collection of DNA sequences (genes) that present variations with respect to their “healthy” versions. We implemented a system consisting of a front-end, a reasoning core, and a post-processor: the first transforms the input data retrieved from medical databases into a set of logical facts, while the last displays the computation results as graphs. For the reasoning core, we employed the Answer Set Programming paradigm, which is capable of deducing complex information from data. However, since the system is modular, this component can be replaced by any logic programming tool for different kinds of data analysis. Indeed, we also tested the use of a probabilistic inductive logic programming core.
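A small sketch of what the front-end step might look like, assuming mutation records arrive as simple dictionaries: each record is rendered as a logical fact that an Answer Set Programming solver such as clingo could consume. The record layout and the mutation/4 predicate are hypothetical, not the authors' encoding.

```python
# Hedged sketch: turning temporally qualified mutation records into ASP facts.
records = [
    {"patient": "p1", "gene": "TP53", "variant": "R175H", "time": 3},
    {"patient": "p1", "gene": "KRAS", "variant": "G12D", "time": 5},
    {"patient": "p2", "gene": "TP53", "variant": "R273C", "time": 2},
]

def to_facts(recs):
    """Render each record as an assumed fact: mutation(Patient, Gene, Variant, Time)."""
    return [
        'mutation({patient},"{gene}","{variant}",{time}).'.format(**r)
        for r in recs
    ]

for fact in to_facts(records):
    print(fact)
# e.g. mutation(p1,"TP53","R175H",3).
```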
6
Dlaczego Big Data? (Why Big Data?)
PL
The amount of data is enormous and keeps growing at a frantic pace. At the same time, the volume of redundant data is increasing, and performing a more effective, reliable analysis requires filtering it out and removing it. The ability to extract correct and useful information from data sets is becoming indispensable. Thanks to Big Data analysis, an enterprise gains the ability to separate the wheat from the chaff and to broaden its initially rather narrow perspective. The essence of Big Data is not the volume (amount) of data, the velocity of its flow, or its variety, but the broadening of mental horizons and a different way of looking at data. Do you want to see the whole forest? Then do not leave it; climb to the top of a mountain. The same holds for Big Data. Are you looking for important information? Soar like a bird into the sky: the higher you rise, the wider your field of view will be. To see from the outside what cannot be grasped from within, you need a vantage point that takes in the whole forest. This is where Big Data comes in.
EN
To address the low success rate of current shipwreck avoidance methods, this paper proposes a method for intelligent shipwreck avoidance based on big data analysis. Firstly, the method uses big data analysis to calculate the safe approach distance of a ship in head-on, crossing, and overtaking situations. On this basis, by calculating the degree of collision risk between ships, the research determines the degree of immediate danger. Finally, three evaluation functions of ship navigation are calculated, and a genetic algorithm is used to realise intelligent shipwreck avoidance. Experimental results show that, compared with the traditional method, two ships in a recent meeting situation can evade each other effectively when the distance to the closest point of approach between them is 0.13 nautical miles, and the success rate of avoidance is high.
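A minimal sketch of the final step described above, under simplified geometry: a real-coded genetic algorithm searches for an own-ship course alteration that keeps the distance at the closest point of approach (DCPA) above a safe threshold while penalising large deviations. The encounter geometry, thresholds, and GA settings are illustrative assumptions, not the paper's three evaluation functions.

```python
# Hedged sketch: GA choosing a course alteration that keeps DCPA above a safe distance.
import numpy as np

rng = np.random.default_rng(3)

own_speed, target_speed = 12.0, 10.0          # knots (assumed)
target_pos = np.array([3.0, 4.0])             # n.m., relative to own ship at origin (assumed)
target_course = np.deg2rad(225.0)
safe_dcpa = 1.0                                # n.m. (assumed safe approach distance)

def dcpa(own_course):
    """Distance at closest point of approach for a given own-ship course (radians)."""
    v_own = own_speed * np.array([np.sin(own_course), np.cos(own_course)])
    v_tgt = target_speed * np.array([np.sin(target_course), np.cos(target_course)])
    v_rel = v_tgt - v_own
    t_cpa = max(0.0, -(target_pos @ v_rel) / (v_rel @ v_rel + 1e-9))
    return np.linalg.norm(target_pos + v_rel * t_cpa)

def fitness(course_deg):
    d = dcpa(np.deg2rad(course_deg))
    penalty = 100.0 * max(0.0, safe_dcpa - d)          # collision-risk term
    deviation = abs(course_deg) / 90.0                 # economy term: prefer small alterations
    return -(penalty + deviation)

# Real-coded GA over a single gene: course alteration in degrees
pop = rng.uniform(-90.0, 90.0, size=30)
for _ in range(40):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-10:]]                                   # truncation selection
    children = rng.choice(parents, size=30) + rng.normal(0.0, 5.0, size=30)   # Gaussian mutation
    pop = np.clip(children, -90.0, 90.0)

best = max(pop, key=fitness)
print(f"recommended course alteration: {best:.1f} deg, DCPA = {dcpa(np.deg2rad(best)):.2f} n.m.")
```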
EN
Improving the operating stability of a distributed marine green energy resources grid-connected system requires big data information mining and fusion processing of the grid-connected system, together with information integration and recognition based on a big data analysis method, so as to improve the output performance of the energy grid-connected system. This paper proposes a big data analysis method for a distributed marine green energy resources grid-connected system based on closed-loop information fusion and auto-correlation characteristic information mining. The method realises closed-loop big data operation and maintenance management of the grid-connected system, builds a big data information collection model of the marine green energy resources grid-connected system, reconstructs the feature space of the collected big data, constructs the characteristic equation of fuzzy data closed-loop operation and maintenance management in convex spaces, and uses an adaptive feature fusion method to mine the auto-correlation characteristics of the big data operation and maintenance information, thereby improving the information scheduling and information mining ability of the distributed marine green energy resources grid-connected system. Simulation results show that applying this method, together with multidimensional big data analysis technology, improves the information scheduling and information mining ability of the system, realises optimised information scheduling of the grid-connected system, and improves its output performance.
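A small sketch of one ingredient mentioned above: auto-correlation features extracted from a few telemetry channels of a grid-connected system and fused into a single vector with variance-based weights. Channel names, lag choices, and the fusion rule are assumptions, not the paper's closed-loop operation-and-maintenance model.

```python
# Hedged sketch: auto-correlation feature mining and simple adaptive fusion.
import numpy as np

rng = np.random.default_rng(4)
channels = {                                   # assumed telemetry channels
    "output_power": rng.standard_normal(500).cumsum(),
    "bus_voltage": rng.standard_normal(500),
    "frequency_dev": 0.5 * rng.standard_normal(500),
}

def autocorr_features(x, lags=(1, 5, 10)):
    """Sample autocorrelation of x at the given lags."""
    x = x - x.mean()
    denom = (x * x).sum()
    return np.array([(x[:-k] * x[k:]).sum() / denom for k in lags])

features = {name: autocorr_features(sig) for name, sig in channels.items()}

# Simple adaptive fusion: weight each channel by its signal variance
weights = np.array([channels[n].var() for n in channels])
weights = weights / weights.sum()
fused = sum(w * f for w, f in zip(weights, features.values()))
print("fused auto-correlation feature vector:", fused)
```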