Results found: 12

Search results
Searched in keywords: Big Data
EN
Purpose: The publication presents the results of an analysis of the popularity of technologies used in logistics, based on the published technical literature. The aim of the work was to determine the share of individual types of technologies in the development of Logistics 4.0. In the Industry 4.0 policy implemented in highly developed countries, logistics development is referred to as "Logistics 4.0". Methodology: The work is based on the analysis of empirical data describing the application of the latest information technology and other technologies related to the fourth industrial revolution. The scope of the analysis covers technologies developed between 2014 and 2022. Findings: Based on the investigation, the major technological subfields of Big Data, Cloud computing and networking, Business Intelligence and other, Internet of Things, and Hardware have been proposed as the core utility categories of technologies in Logistics 4.0. Originality/value: The analysis can be useful for practical aims, e.g., when planning Logistics 4.0 training or technical investments, but also for scientific and educational purposes.
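The abstract does not detail the counting procedure; as a minimal illustration of how publication keywords can be mapped onto the proposed subfields to compute their shares (the keyword-to-category mapping and the sample data below are assumptions, not the paper's method):

```python
# Illustrative mapping of publication keywords onto the Logistics 4.0
# subfields proposed in the paper; the mapping and sample data are assumptions.
from collections import Counter

SUBFIELDS = {
    "big data": "Big Data",
    "cloud": "Cloud computing and networking",
    "business intelligence": "Business Intelligence and other",
    "iot": "Internet of Things",
    "sensor": "Hardware",
}

def subfield_shares(publications):
    """Percentage share of each subfield across lists of publication keywords."""
    counts = Counter()
    for keywords in publications:
        for kw in keywords:
            for fragment, subfield in SUBFIELDS.items():
                if fragment in kw.lower():
                    counts[subfield] += 1
    total = sum(counts.values())
    return {s: round(100 * c / total, 1) for s, c in counts.items()}

print(subfield_shares([["Big Data", "IoT"], ["cloud computing"], ["IoT", "sensors"]]))
# {'Big Data': 20.0, 'Internet of Things': 40.0,
#  'Cloud computing and networking': 20.0, 'Hardware': 20.0}
```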
EN
Data processing, artificial intelligence and IoT technologies are on the rise. The role of data transfer security systems and of databases known as Big Data is growing. The main cognitive aim of the publication is to identify the specific nature of Big Data management in an enterprise. The paper uses the bibliographic Elsevier and Springer Link databases, and the Scopus abstract database. The distribution of keywords is indicated, drawing attention to four main areas related to research directions: Big Data and the related terms "human", "IoT" and "machine learning". The paper presents the specific nature of Big Data together with Kitchin and McArdle's research, indicating the need for a taxonomic ordering of large databases. The precise nature of Big Data management, including the use of advanced analytical techniques enabling managerial decision-making, was identified. The development of Cyber Production Systems (CPS), based on BD and integrating the physical world of an enterprise with the digitisation of information as the concept of Digital Twins (DTs), was also indicated. CPS offer the opportunity to increase enterprise resilience through increased adaptability, robustness and efficiency. With DTs, manufacturing costs are reduced, the product life cycle is shortened, and production quality increases.
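As a minimal sketch of such a keyword-distribution analysis, assuming records exported from Scopus as a CSV file (the "Author Keywords" column is a standard field in Scopus exports; the file name is an assumption):

```python
# Sketch: distribution of keywords co-occurring with "Big Data" in a
# Scopus CSV export (file name is an assumption; "Author Keywords" is
# the standard column name in such exports).
import csv
from collections import Counter

counts = Counter()
with open("scopus_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        keywords = [k.strip().lower()
                    for k in row.get("Author Keywords", "").split(";")]
        if "big data" in keywords:
            counts.update(k for k in keywords if k and k != "big data")

for keyword, n in counts.most_common(10):
    print(f"{keyword}: {n}")
```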
EN
Purpose: The main objective of this article is to identify areas for optimizing marketing communication via artificial intelligence solutions. Design/methodology/approach: To meet this objective, an analysis and evaluation of exemplary implementations of AI systems in marketing communications was carried out, using the case study method. The discussion analyses the considerations on the use of AI undertaken in the world literature, as well as three different practical projects. Findings: AI can contribute to the optimisation and personalisation of communication with the customer. Its application generates multifaceted benefits for both sides of the market exchange. Achieving them, however, requires a good understanding of this technology and the precise setting of objectives for its implementation. Research limitations/implications: The article contains a preliminary study; additional quantitative and qualitative research is planned in the future. Practical implications: The conclusions of the study can help in understanding the benefits of using artificial intelligence in communication with the consumer. The results can be used in market practice and can also serve as an inspiration for further studies of this topic. Originality/value: The article reveals the specifics of artificial intelligence in relation to business activities and, in particular, communication with the buyer. The research used examples from business practice.
EN
The main purpose of this article is to present the impact of the authors' tweaks to the default configuration on the processing speed of Apache NiFi. Additionally, the article examines how performance scales with an increasing number of nodes in a computing cluster. The achieved results were thoroughly analyzed in terms of performance and key indicators.
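The abstract does not list the specific configuration changes. Purely as an illustration, the snippet below shows settings in nifi.properties and bootstrap.conf that are commonly adjusted when tuning NiFi throughput; these are not necessarily the authors' tweaks, and the values are examples only.

```properties
# nifi.properties: throughput-related settings commonly tuned for speed
# (example values, not the authors' actual configuration)

# Yield for a shorter time when a processor has no work, at the cost of CPU
nifi.bored.yield.duration=5 millis
# Allow more FlowFiles per connection queue before swapping to disk
nifi.queue.swap.threshold=40000
# The write-ahead provenance repository is faster than the persistent one
nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository

# bootstrap.conf: enlarge the conservative default JVM heap
java.arg.2=-Xms4g
java.arg.3=-Xmx4g
```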
EN
In recent years, more and more attention has been paid to the use of Big Data technology, machine learning and AI. Enterprises strive for a competitive advantage through the appropriate use of data analytics. Big Data can be used in many different industries, e.g. in the transport or medical industry, and potentially in all of them. A huge problem in the supply chain is the risk of delay, which may be influenced by many factors, including an illegible label on a package, a shortage of warehouse workers or congestion in cities. The article focuses on the use of Big Data technology to detect the risk of delays in the supply chains of medicinal products. Its purpose is to present the concept of Big Data and a Big Data architecture for the drug supply chain, and to present the results of research on predicting the risk of delays after implementing this architecture in a real enterprise. The set goal determined the choice of the following research methods: analysis of the literature and modeling, which made it possible to design and implement the architecture for the drug supply chain and to collect data in the studied enterprise. The last part of the article presents a logistic regression model for predicting delays in the supply chain of medicinal products. The research established that the model has a high predictive ability.
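The abstract does not disclose the model's predictors; the following is a minimal sketch of a logistic-regression delay classifier on stand-in data, with hypothetical feature names, not the study's actual model or results.

```python
# Minimal sketch of a delay-risk classifier (stand-in data; the feature
# names below are hypothetical, not the predictors used in the study).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Columns: transit_distance, congestion_index, label_quality
X = rng.normal(size=(1000, 3))
# Synthetic target: 1 = shipment delayed
y = (X @ np.array([0.8, 1.2, -1.5]) + rng.normal(size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print("test AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
```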
EN
Purpose: The aim of the article is to describe and forecast possible dilemmas related to the development of cognitive technologies and the progressing algorithmization of social life. Design/methodology/approach: Most current studies of the Big Data phenomenon concern improving the efficiency of algorithmic tools or protecting against the autonomization of machines; this analysis proposes a different perspective, namely the thoughtless use of data-driven instruments, termed here the technological proof of equity. The study attempts to anticipate possible difficulties connected with algorithmization; understanding them could help to "prepare for" or even eliminate the harmful effects we may face, which will affect decisions made in the field of social organization and in managing organizations, cities, etc. Findings: The proposed point of view may contribute to a more informed use of cognitive technologies, machine learning and artificial intelligence, and to an understanding of their impact on social life, especially its unintended consequences. Social implications: The article can serve an educational function, helping to develop critical thinking about cognitive technologies and directing attention to the areas of knowledge with which future skills should be extended. Originality/value: The article is addressed to data scientists and all those who use algorithms and data-driven decision-making processes in their actions. Crucial in these considerations is the introduction of the concept of the technological proof of equity, which helps to name the real threat of technologically grounded heuristic thinking and its social consequences.
EN
Background: This paper has the central aim to provide an analysis of increases of system complexity in the context of modern industrial information systems. An investigation of relevant theoretical frameworks is conducted and culminates in the proposition of a set of hypotheses as an explanatory approach for a possible definition of system complexity based on information growth in industrial information systems. Several interconnected sources of technological information are explored in the given context in their function as information-transferring agents, and their practical relevance is underlined by the application of the concepts of Big Data and cyber-physical, cyber-human and cyber-physical-cyber-human systems. Methods: A systematic review of relevant literature was conducted for this paper. In total, 85 sources matching the scope of this article, in the form of academic journals and academic books of the mentioned academic fields published between 2012 and 2019, were selected, individually read and reviewed by the authors, and reduced by careful selection to 17 key sources which served as the basis for theory synthesis. Results: Four hypotheses (H1-H4) concerning exponential surges of system complexity in industrial information systems are introduced. Furthermore, first foundational ideas are introduced for a possible approach to describe, model and simulate complex industrial information systems based on network and agent-based approaches and the concept of Shannon entropy. Conclusion: Based on the introduced hypotheses, it can be theoretically indicated that the amount of information aggregated and transferred in a system can serve as an indicator for the development of system complexity and as a possible explanatory concept for the exponential surges of system complexity in industrial information systems.
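Since the abstract names Shannon entropy as a candidate complexity measure, here is a minimal, self-contained illustration of estimating the entropy H(X) = -Σ p(x) log2 p(x) of a stream of messages; this is an elementary example, not the authors' model.

```python
# Estimate Shannon entropy H(X) = -sum(p(x) * log2 p(x)) over the observed
# message types in an information stream (illustrative, not the paper's model).
from collections import Counter
from math import log2

def shannon_entropy(messages):
    counts = Counter(messages)
    total = len(messages)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# A stream dominated by one message type carries less information per message:
print(shannon_entropy(["temp", "temp", "temp", "pressure"]))     # ~0.811 bits
print(shannon_entropy(["temp", "pressure", "status", "alarm"]))  # 2.0 bits
```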
EN
Recommender systems (RS) have emerged as a means of providing relevant content to users, whether in social networking, health, education, or elections. Furthermore, with the rapid development of cloud computing, Big Data, and the Internet of Things (IoT), a key component of all this is that elections are controlled by open, accountable, neutral, and autonomous election management bodies. The use of technology in voting procedures can make them faster, more efficient, and less susceptible to security breaches. Technology can ensure the security of every vote, better and faster automatic counting and tallying, and much greater accuracy. Election data were combined from different websites and applications. In addition, they were interpreted using many recommendation algorithms, such as machine learning algorithms, vector representation algorithms, latent factor model algorithms, and neighbourhood methods, and shared with the election management bodies to provide appropriate recommendations. In this paper, we conduct a comparative study of the algorithms applied in the recommendations of Big Data architectures. The results show that the K-NN model works best, with an accuracy of 96%. In addition, we find that the best recommendation system is a hybrid one, combining content-based filtering with collaborative filtering that uses similarities between users and items.
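As an illustration of the hybrid idea named in the abstract (not the paper's actual system), a blended score can combine a content-based similarity with a neighbourhood-based collaborative estimate:

```python
# Illustrative hybrid recommendation score (not the paper's actual system):
# blend a content-based similarity with a neighbourhood-based collaborative
# estimate.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def hybrid_score(user_profile, item_features, neighbour_ratings,
                 neighbour_sims, alpha=0.5):
    """alpha weights the content-based vs. the collaborative contribution."""
    content = cosine(user_profile, item_features)  # in [0, 1] for these inputs
    # Neighbourhood method: similarity-weighted mean of neighbours' ratings
    collaborative = float(np.average(neighbour_ratings, weights=neighbour_sims))
    return alpha * content + (1 - alpha) * collaborative

user_profile = np.array([0.9, 0.1, 0.4])    # user's preference vector
item_features = np.array([0.8, 0.2, 0.5])   # item's feature vector
ratings = np.array([4.0, 5.0, 3.0]) / 5.0   # neighbours' ratings, rescaled to [0, 1]
sims = np.array([0.9, 0.7, 0.2])            # K-NN similarities to the user

print(round(hybrid_score(user_profile, item_features, ratings, sims), 3))
```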
EN
Digitalization is currently a key factor for progress, with a rising need for storing, collecting, and processing large amounts of data. In this context, NoSQL databases have become a popular storage solution, each specialized for a specific type of data. Next to that, the multi-model approach is designed to combine the benefits of different types of databases, supporting several data models. Despite its versatility, a multi-model database might not always be the best option, due to the risk of worse performance compared to the single-model variants. It is hence crucial for software engineers to have access to benchmarks comparing the performance of multi-model and single-model variants. Moreover, in the current Big Data era, it is important for benchmarks to take cluster infrastructure into account. In this paper, we aim to examine how the multi-model approach performs compared to its single-model variants. To this end, we compare the OrientDB multi-model database with the Neo4j graph database and the MongoDB document store. We do so in a cluster setup, to advance the state of the art in database benchmarks, which so far gives little insight into the performance of cluster-operated databases.
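The abstract does not include the benchmark workload itself; as a minimal sketch of the measurement approach for one of the compared stores, assuming a MongoDB instance reachable at localhost and a hypothetical collection:

```python
# Minimal sketch of timing one benchmark operation against MongoDB
# (connection URI, database, and collection names are assumptions).
import time
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed local instance
coll = client["benchmark"]["items"]

coll.insert_many([{"n": i, "group": i % 10} for i in range(10_000)])

start = time.perf_counter()
matched = coll.count_documents({"group": 3})
elapsed = time.perf_counter() - start
print(f"count_documents matched {matched} docs in {elapsed * 1000:.2f} ms")
```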
EN
Vehicle vibrations caused by poor haul road conditions create multiple negative effects for mines, including slower cycle times, increased maintenance, and operator injury. Vibration levels in vehicles result in part from road roughness. Mine roads are mainly constructed from in-pit materials that are likely to deteriorate over time and require frequent maintenance to maintain a smooth surface. The decision of when and where road maintenance is conducted is primarily based on visual inspections. This method can provide a subjective, inaccurate, and delayed response to adverse conditions. The recent increase in vehicle telemetry data allows instant access to several types of data; while mainly used for haul fleet dispatching, collision avoidance, and geologic surveying, telemetry data has yet to see widespread use in road maintenance dispatching. This paper examines current road roughness characterization techniques and current telemetry data streams. An initial case study was conducted using vibration and Global Navigation Satellite System (GNSS) telemetry data to determine road roughness. Data from three haul trucks under normal operating conditions were collected over the course of a week. The results of this case study demonstrate that localized vibration levels can be used to objectively identify rough roads. This can be further developed to dispatch road maintenance crews, leading to overall reduced mining costs and improved operator health. The researchers propose continuing to a full-scale test using data from an entire fleet and a longer timeframe.
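As a minimal sketch of the localization idea (segment length, field names, and the synthetic data are assumptions, not the study's method), GNSS positions can be binned into road segments and each segment summarized by the RMS of vertical acceleration:

```python
# Illustrative roughness mapping: bin GNSS positions into road segments and
# compute RMS vertical acceleration per segment (assumed fields and data).
import numpy as np

def rms_by_segment(position_m, accel_z, segment_len_m=50.0):
    """Return {segment_index: RMS vertical acceleration} along a haul road."""
    bins = (np.asarray(position_m) // segment_len_m).astype(int)
    accel_z = np.asarray(accel_z)
    return {b: float(np.sqrt(np.mean(accel_z[bins == b] ** 2)))
            for b in np.unique(bins)}

# Stand-in telemetry: a rough patch between 100 m and 150 m along the road
pos = np.linspace(0, 200, 400)
acc = np.random.default_rng(1).normal(0, 0.2, 400)
acc[(pos >= 100) & (pos < 150)] *= 5  # elevated vibration on one segment
rough = rms_by_segment(pos, acc)
print(max(rough, key=rough.get))  # segment 2, i.e. the 100-150 m stretch
```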
EN
The subject of this paper is the essence of using diagnostic support, which has so far been introduced in only 13% of transport enterprises [1]. The concept of "Industry 4.0", from which diagnostic support derives, is also discussed, as the solution involves collecting, analysing and transferring information between one device and another. The aim of the paper is to explore telematics, showing the benefits of its implementation through a SWOT analysis and a presentation of the tools offered by manufacturers.
EN
The subject of this paper is the use of anticipatory logistics in company management, particularly in the e-commerce industry. The introduction deals with the technological progress known as Industry 4.0. Next, the concept of anticipatory logistics and the tools used for its effective implementation are explained. Subsequently, examples of companies using the described solutions are given. Finally, a SWOT analysis of introducing anticipatory logistics in a company is presented.