Results found: 201

Search results
Searched for:
in keywords: big data
EN
Purpose: The aim of the article is to review the level of advancement of the linked open data (LOD) concept in public institutions, based on the example of organizations in Lower Silesia (Poland). Moreover, this paper assesses where Lower Silesia institutions stand on Tim Berners-Lee's well-known five-star scale and compares the obtained results. Design/methodology/approach: a case study of important public institutions of the Lower Silesia region, with the assessment of LOD advancement based on Tim Berners-Lee's five-star scale and short expert interviews. Findings: We can observe considerable interest and willingness to create a network of linked open data, visible in the growth of the number of data sets and the ever-expanding structure of the LOD cloud. Implementation of LOD in public institutions can be genuinely helpful in management and decision-making processes. Public entities in Lower Silesia (Poland) should continue to develop their network to reach the highest level of LOD advancement, especially in the context of integration with other data sets. Research limitations/implications: The research was limited by the fact that not all public institutions are familiar with the concept of linked open data, or do not use it to its full extent. Practical implications: In the context of public institutions, LOD can play a key role in improving transparency, efficiency, and data-driven decision-making. Users can freely access information that is crucial to them and use it for social, commercial, or individual projects. Social implications: The practical impact of implementing LOD depends largely on the type of data made available to users. Very often, the data concern administration, public transport, the budget management of smaller and larger communities, or health care, which can genuinely contribute to improving the quality of life.
Originality/value: For the first time, the level of advancement of the linked open data concept in Polish public institutions was evaluated, which may improve the results in institutions already using this idea and encourage them to develop their network of linked data resources.
2
Content available Optimizing AIS Data Format Based on HELCOM Datasets
EN
Automatic Identification System (AIS) data plays a vital role in a wide range of maritime research areas, including logistics optimization, navigational safety analysis, economic activity monitoring, and environmental impact assessment. The HELCOM (Helsinki Commission) organization collects and maintains extensive AIS data for the Baltic Sea region, offering researchers valuable insights into vessel movement and marine traffic patterns. However, raw AIS data (typically provided in plain-text CSV format) is often large and inefficient to store because of a) plain-text redundancy and b) high levels of duplication and repetitive information. For storage and transmission, AIS data is usually compressed as-is, using general-purpose compression tools (e.g. ZIP archives). In this study, we investigate techniques for optimizing the storage of HELCOM AIS data by manipulating the data format and structure. Our research reveals that after the undertaken steps, the size of the uncompressed dataset decreased by approx. 60%, and the compressed dataset size decreased by approx. 90% compared to the original, revealing the potential for substantial storage savings. To further improve data handling, we experimented with various structural optimizations of the CSV format, including arranging data by core attributes, optimizing column ordering, and dataset normalization involving the segregation of mutable and immutable parts. For example, vessel-specific attributes such as ship name, MMSI (Maritime Mobile Service Identity) code, IMO (International Maritime Organization) number, origin, and dimensions, which stay the same across records for a vessel, can be moved into a separate file during normalization, which significantly reduces the dataset size. The article compares several AIS data persistence strategies to identify the most memory-efficient approaches. Furthermore, we introduce a data generation tool that produces synthetic AIS datasets in customizable formats and patterns.
This tool enables reproducibility of the study and supports further experimentation with AIS data optimization approaches.
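The normalization step described above, moving immutable vessel-specific attributes into a separate lookup table keyed by MMSI, can be sketched as follows. This is a minimal illustration, not the authors' actual tool; the column names are assumed for the example:

```python
def normalize_ais(rows):
    """Split AIS records into a static vessel table (keyed by MMSI)
    and a dynamic position table, removing repeated vessel attributes."""
    static = {}   # mmsi -> immutable attributes (name, imo, length)
    dynamic = []  # per-message mutable fields only
    for r in rows:
        mmsi = r["mmsi"]
        if mmsi not in static:
            static[mmsi] = {"name": r["name"], "imo": r["imo"], "length": r["length"]}
        dynamic.append({"mmsi": mmsi, "ts": r["ts"], "lat": r["lat"], "lon": r["lon"]})
    return static, dynamic

# Two hypothetical AIS messages from the same vessel.
rows = [
    {"mmsi": "230123000", "name": "AURA", "imo": "9123456", "length": "90",
     "ts": "2021-01-01T00:00:00", "lat": "60.1", "lon": "24.9"},
    {"mmsi": "230123000", "name": "AURA", "imo": "9123456", "length": "90",
     "ts": "2021-01-01T00:01:00", "lat": "60.2", "lon": "25.0"},
]
static, dynamic = normalize_ais(rows)
```

The repeated name/IMO/length fields are stored once per vessel, so each dynamic record carries only the fields that actually change between messages.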
EN
Objectives: This study aims to develop an AI-based early warning system for maritime navigation by integrating machine learning techniques to predict weather conditions and assess navigation risks. The research focuses on improving forecasting accuracy for key meteorological and oceanographic variables to enhance navigational safety. Theoretical Framework: The study is grounded in predictive analytics and artificial intelligence applications in maritime risk assessment. It leverages machine learning models, including ARIMA, Random Forest, SVM, and Artificial Neural Networks, to enhance the accuracy of weather and sea condition forecasts, providing valuable insights for maritime operations. Method: The research employs a data-driven approach, utilizing historical meteorological and oceanographic data to train and evaluate machine learning models. Variables such as air temperature, wind speed, sea temperature, rainfall, and air pressure are analyzed using regression, time-series analysis, and statistical modeling techniques to develop an effective predictive system. Results and Discussion: The findings reveal that AI models, particularly ARIMA and regression analysis, demonstrate high predictive capability for air temperature variations. However, dataset limitations and model parameter tuning impact accuracy. The results highlight the importance of selecting appropriate variables and optimizing model structures to improve forecasting reliability. Research Implications: The study contributes to maritime safety by providing a framework for real-time weather forecasting and risk assessment. The findings can inform decision-making in vessel operations and policy development for maritime safety regulations. Originality/Value: This research integrates AI and predictive analytics to enhance maritime navigation safety, addressing gaps in real-time risk assessment and forecasting. 
The proposed framework provides a foundation for further advancements in AI-driven maritime decision support systems.
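As a hedged illustration of the time-series side of such a system (not the authors' ARIMA or neural-network models), a minimal first-order autoregressive forecast of air temperature can be fit with ordinary least squares:

```python
import numpy as np

def fit_ar1(series):
    """Fit x[t] = a * x[t-1] + b by least squares; return (a, b)."""
    x, y = series[:-1], series[1:]
    A = np.vstack([x, np.ones_like(x)]).T
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, b

def forecast(series, steps, a, b):
    """Iterate the fitted recurrence forward from the last observation."""
    preds, last = [], series[-1]
    for _ in range(steps):
        last = a * last + b
        preds.append(last)
    return preds

# Synthetic daily air temperatures (degrees C), for illustration only.
temps = np.array([10.0, 10.5, 11.2, 11.0, 11.8, 12.1, 12.5, 12.4])
a, b = fit_ar1(temps)
next_3 = forecast(temps, 3, a, b)
```

A production system would add differencing, moving-average terms, and seasonal components (as in ARIMA), but the fit/forecast split above is the same.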
EN
Artificial Intelligence (AI) can be simply approached as the (effective) simulation of human intelligence processes by computer systems. The issue of Maritime Autonomous Surface Ships (MASS), supported by numerous AI applications, paints a quite disruptive picture of how the shipping industry may be transformed in the future. After the necessary clarification of terms, a summary of certain important legal developments relating to the ongoing introduction of MASS-type vessels into full service is provided. The role of trustworthy AI applications that can reliably serve the associated decision-making tasks is also discussed. In the near future, the vast majority of maritime transport needs will continue to be served by vessels termed "conventional" (regularly manned ships); the shipping industry is well known for its risk-averse behaviour, and a slow pace of adaptation towards this new operating paradigm is the most probable path of adoption.
5
Content available Innovative AI tools in Renewable Energy Sources
EN
The article analyzes the role of artificial intelligence (AI) in the renewable energy sources (RES) sector, highlighting its importance in optimizing energy production, distribution, and storage processes. AI enables precise forecasting of energy production, minimizing the effects of weather instability and increasing the operational efficiency of renewable energy systems by up to 25%. AI-based tools also allow for dynamic adjustment of wind turbines and photovoltaic panels, which reduces energy losses and operating costs. An important application of AI is predictive maintenance, which reduces failures through early detection of faults. Smart grid management enables the optimal use of renewable energy sources by analyzing demand and supply and integrating different energy storage technologies. AI also supports the planning of renewable energy investments, helping to select optimal locations for wind and solar farms. However, the implementation of AI in the energy sector faces challenges, such as the need for access to large data sets, the cost of integration with existing systems, and cybersecurity issues. Despite these barriers, the future of AI in RES looks promising, especially in the context of its integration with IoT, big data and quantum technologies. With the right technological and regulatory support, AI can become a key element of the global energy transition, increasing the stability and profitability of renewables and supporting the fight against climate change.
EN
In data mining, one of the most studied problems is outlier detection, which involves identifying “unusual” data points within a dataset suspected to be generated by a different mechanism than the rest of the dataset. Outlier detection has applications in discovering novel information, detecting bank fraud, identifying system intrusions, and others. However, handling large volumes of data, known as big data, poses a challenge to outlier detection algorithms because the resources of a single computer may not be sufficient to achieve efficient performance. Furthermore, datasets are often stored in distributed environments. The goal of this work is to develop a new distributed outlier detection algorithm based on the solution of the support vector data description using the alternating direction method of multipliers. Mathematical optimization methods and Python language libraries are mainly used for the implementation. As a result, the design and distributed implementation of the proposed algorithm are achieved, which are validated using several test datasets, yielding satisfactory and competitive results compared to existing methods.
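The support vector data description idea, enclosing the normal data in a minimal hypersphere and flagging points that fall outside it, can be illustrated by a crude centroid-based sketch. This deliberately omits the kernel machinery and the distributed ADMM solver developed in the paper; it only shows the hypersphere intuition:

```python
import numpy as np

def sphere_outliers(X, quantile=0.95):
    """Naive stand-in for SVDD: take the data mean as the sphere center
    and a distance quantile as the radius; points beyond it are outliers."""
    center = X.mean(axis=0)
    dists = np.linalg.norm(X - center, axis=1)
    radius = np.quantile(dists, quantile)
    return dists > radius

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))
X[:3] += 8.0  # inject three obvious outliers far from the bulk
flags = sphere_outliers(X)
```

A real SVDD solves for the center and radius jointly as a constrained optimization, which is what makes the ADMM decomposition across distributed data partitions possible.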
EN
The evolutionary deep learning algorithm EvoDN2 is an emerging strategy for data-driven intelligent learning and many-objective optimisation capable of handling a large volume of noisy and non-linear data. This article provides the essential details of this algorithm and highlights a number of its recent applications.
EN
The publication presents potential points of deliberate or accidental interference with, or degradation of, data collected using a distributed system of sensors and/or other IoT devices. Points of possible interference with the measurement data are indicated from the very beginning of the measurement chain: from a single measurement element, through data conversion, transmission, processing, storage, and analysis, to interpretation. The threats indicated are not only technical but also economic in nature. Examples are presented to show how the flow of data can affect decision-making and how a lack of knowledge about the measurement context can affect its interpretation. Possible anomaly detection mechanisms are also indicated, taking into account new, developing techniques.
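One of the simpler anomaly-detection mechanisms alluded to above, flagging readings that deviate strongly from a recent window of the same sensor's output, can be sketched as a rolling z-score detector (a generic illustration, not a mechanism taken from the publication):

```python
from collections import deque
import math

def zscore_anomalies(stream, window=20, threshold=4.0):
    """Flag readings more than `threshold` standard deviations away
    from the mean of the preceding `window` readings."""
    buf = deque(maxlen=window)
    anomalies = []
    for i, x in enumerate(stream):
        if len(buf) == buf.maxlen:
            mean = sum(buf) / len(buf)
            var = sum((v - mean) ** 2 for v in buf) / len(buf)
            std = math.sqrt(var)
            if std > 0 and abs(x - mean) / std > threshold:
                anomalies.append(i)
        buf.append(x)
    return anomalies

# Steady synthetic sensor signal with one injected spike at index 30.
readings = [20.0 + 0.1 * (i % 5) for i in range(60)]
readings[30] = 45.0
```

Note the trade-off such a detector inherits from the measurement chain: a spike caused by real interference and a spike caused by a genuine physical event look identical without the measurement context discussed in the publication.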
EN
Today, traffic accidents remain a difficult and urgent problem for many countries around the world. Accidents on highways are often more serious than accidents on urban roads. Therefore, disseminating emergency information and establishing immediate contact with road users is key to rescuing passengers and reducing congestion. This study applies data fusion and data mining techniques to analyze travel time and valuable information about traffic accidents based on real-time data collected from On-Board Units installed in vehicles. The results show that this information forms a vital database for analyzing traffic conditions and safety factors, and thereby for developing a smart traffic information platform. This enables traffic managers to provide real-time traffic information or forecasts of congestion and traffic accidents to road users, which helps limit congestion and serious accidents on highways.
PL
Today, data is humanity's main resource. Every era has its resources; people once accumulated hides and cattle, later coal or steel. Today, the strength of any company, corporation, or state lies in its ability to rapidly analyze gigantic amounts of data. This data must be analyzed here and now, immediately, because in a few hours the information contained in the data sources will no longer have any value.
PL
The article discusses the applications of Big Data in monitoring energy consumption, grid management, the integration of renewable energy sources, and failure prediction. It presents technologies and tools such as analytics platforms, predictive algorithms, IoT, cloud computing, NoSQL databases, visualization tools, and artificial intelligence. It analyzes the challenges of implementing Big Data, such as data integration, security, IT investment, and the shortage of qualified staff. The benefits outweigh the barriers, leading to more efficient and sustainable energy management.
EN
In the context of Kazakhstan’s economic digitalisation, increasing economic efficiency is a top priority. Digitalisation enhances enterprises’ financial stability and decision-making speed. This is particularly vital for mining enterprises, a key focus of the “Digital Kazakhstan” state program. This study aims to develop strategies to boost economic efficiency by analysing its essence and evaluating mining enterprises in East Kazakhstan. The methods used in the research include statistical analysis, comparison, structural and logical analysis, and synthesis. The results include determining the essence of economic efficiency, evaluating the dynamics of industrial production indices, production volume, and structure, and assessing economic efficiency indicators of mining enterprises. Five key areas affecting economic efficiency were identified: technology, material resources, management, labour resources, and the general system. The introduction of Big Data digital technology is suggested for each area to significantly enhance efficiency.
EN
The world has been in the grip of a very hard challenge in recent years: the conservation of the environment. To reduce waste, actions and recycling methods are being initiated. Our study focuses on the automotive sector, which generates different types of waste, recyclable and non-recyclable. The article explores the innovative integration of product lifecycle management (PLM) from the beginning-of-life (BOL) to the end-of-life (EOL) stages, with the goal of creating a comprehensive recycling process. The automotive sector serves as a compelling case study to showcase the practical application of this holistic approach. The study illustrates how aligning BOL and EOL in PLM can lead to sustainable practices in the automotive industry. The results reveal a remarkable synergy between designing eco-friendly products, efficient manufacturing, and responsible disposal. The article emphasizes the significant environmental and economic benefits of optimizing the entire product lifecycle by connecting these stages. As a notable outcome, the article presents an automated model embedded within the PLM tool, reflecting the combined process. The automated model embodies a futuristic vision that seamlessly integrates sustainable practices into product development and management, highlighting the immense potential for industries to contribute to a greener and more sustainable future.
PL
Artificial intelligence, machine learning, and big data analytics will play a key role in the transformation of the energy sector. Thanks to these advanced technologies, it is possible to increase efficiency, promote sustainable development, and manage energy resources better.
EN
The paper describes the methods and process of creating a digital twin of a city, called UrbanGraphica, using Bratislava as a case study. The process is divided into three stages. The first stage includes the collection of data and the modelling of the digital model of the city. The second stage integrates a wide range of information layers from various sources into the model. These information layers include data on traffic, vegetation, noise, solar irradiation, shadows, key viewpoints, and temperature. In the third stage, these diverse datasets are overlaid to enable a comprehensive scoring system aimed at quantitatively assessing the quality of public spaces. Subsequent validation of this quantitative assessment is based on comparison with maps of public sentiment, which were obtained from city inhabitants through questionnaires available as open data. This comparative analysis may reveal correlations between the physical and social parameters of the city. Furthermore, these integrated datasets enable the development of advanced machine learning models capable of predicting the popularity of public spaces based on their measurable characteristics. These predictive models can be used to evaluate and refine the design of future public spaces during the planning stages, thereby improving decision-making processes. Additionally, the digital twin is also used for estimating the potential for solar and wind energy production and utilization, thus supporting the city's sustainable development goals. The digital twin has already been published as a physical model and as an online digital model. Collecting data from various sources into one platform provides a more comprehensive image of the city. Moreover, the use of data analytics and machine learning leads to more responsive and sustainable urban environments, contributing to the wellbeing of the city's inhabitants.
EN
The article considers an approach to implementing the architecture of a microservice system for processing large volumes of data, based on an event-oriented approach to managing the sequence in which individual microservices are used. This becomes especially important when processing large volumes of data from information sources with different performance levels, where the task is to minimize the total time for processing data streams. In this case, as a rule, the task is to minimize the number of requests to information sources needed to obtain a sufficient amount of data relevant to the request. The efficiency of the entire software system depends on how the microservices that provide extraction and primary processing of the received data are managed. To obtain the required amount of relevant data from diverse information sources, the software system must adapt to the request during its operation, so that the maximum number of requests are directed to the sources most likely to contain the data needed for the request. An approach is proposed that allows the choice of microservices during data collection to be managed adaptively, driven by emerging events, and thus shapes the choice of information sources based on an assessment of the efficiency of obtaining relevant information from those sources. Events are generated as a result of data extraction and primary processing from particular sources, assessing the availability of data relevant to the request in each of the sources considered within the selected search scenario. An event-oriented microservice architecture adapts the system's operation to the current loads on individual microservices and to overall performance by analyzing the relevant events. The use of an adaptive event-oriented microservice architecture can be especially effective in the development of information and analytical systems built around real-time data collection and designed scenarios of analytical activity.
The article considers the features of synchronous and asynchronous options in the implementation of an event-oriented architecture, which can be used in various software systems depending on their purpose. An analysis of the features of synchronous and asynchronous options in the implementation of an event-oriented architecture, their quantitative parameters, and the features of their use depending on the type of task is carried out.
17
Content available A study of big data in cloud computing
EN
Over the last two decades, the size and amount of data has increased enormously, which has changed traditional methods of data management and introduced two new technological terms: big data and cloud computing. Addressing big data, characterized by massive volume, high velocity and variety, is quite challenging as it requires large computational infrastructure to store, process and analyze it. A reliable technique to carry out sophisticated and enormous data processing has emerged in the form of cloud computing because it eliminates the need to manage advanced hardware and software, and offers various services to users. Presently, big data and cloud computing are gaining significant interest among academia as well as in industrial research. In this review, we introduce various characteristics, applications and challenges of big data and cloud computing. We provide a brief overview of different platforms that are available to handle big data, including their critical analysis based on different parameters. We also discuss the correlation between big data and cloud computing. We focus on the life cycle of big data and its vital analysis applications in various fields and domains. At the end, we present the open research issues that still need to be addressed and give some pointers to future scholars in the fields of big data and cloud computing.
EN
This paper analyzes the gastronomic service infrastructure of Warsaw using a proprietary algorithm based on big data, incorporating qualitative data such as user reviews and current price levels. It visualizes the quality of the service network, its gaps, and dysfunctions in the form of pixel maps at city scale, providing new insights for urban planning. Assuming the networked society as the baseline social structure, and the shared economy and platform capitalism as new economic models, the author proposes an analytical framework that divides the city into pixels with a side length of 1200 meters, representing a 15-minute walking distance. The proposed analytical structure, in the form of a pixel/node matrix, responds to the emergence of platform urbanism. Under these assumptions, the gastronomic services of Warsaw were analyzed using a proprietary algorithm written in Python, utilizing big data from the Google Maps API. The research parameters for the nodes include variation, quality extrapolated from user ratings, and price level (accessibility), referring to Rahman's analyses of digital power. The study was conducted for the keyword 'restaurant' in January 2024. The tool allows for the acquisition and visualization of data on the current state of the city's service infrastructure and for drawing conclusions by overlaying the results on a conventional map. Further studies have also been conducted as part of the author's PhD. The tool is transferable and scalable, allowing research on any city based on given keywords and drawing both quantitative and qualitative data, which is a distinctive feature of the study.
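The pixel division described above (1200 m squares approximating a 15-minute walk) can be sketched by bucketing point coordinates into grid cells. This is a simplified planar version under assumed local metric coordinates; the author's actual algorithm and its Google Maps API calls are not reproduced here:

```python
from collections import defaultdict

def to_pixels(points, cell=1200.0):
    """Bucket (x, y) coordinates in metres into cell x cell pixels;
    return pixel index -> list of points, e.g. for per-pixel scoring."""
    grid = defaultdict(list)
    for x, y in points:
        grid[(int(x // cell), int(y // cell))].append((x, y))
    return grid

# Hypothetical restaurant locations in a local metric coordinate system.
restaurants = [(100, 150), (900, 1100), (1300, 200), (2500, 2500)]
grid = to_pixels(restaurants)
```

Per-pixel metrics such as variation, mean rating, and price level can then be computed over each bucket and rendered as the pixel map.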
EN
The COVID-19 pandemic has changed the mobility patterns of city dwellers worldwide. These changes apply to the number of trips made, their durations and directions, as well as the transport modes chosen for travelling purposes. In general, although the number of trips decreased, the use of cars increased and that of public transport declined. These mobility changes were induced by the fear of travelling in crowded vehicles and the extent of restrictions introduced by governments. The effects of such changes are hard to assess, and their evaluation is a complex issue. Based on available data about the transportation system in Warsaw and analysis of Big Data (comprising SIM card movements acquired from mobile phone network operators), a research project has been carried out under the "IDUB against COVID-19" programme. Transportation models were built which enabled estimation of the number of trips made at each stage of the pandemic in spring 2020 and identification of differences through comparison with models developed for pre-pandemic conditions (the year 2019). The calculations enabled assessment of the social costs of the pandemic associated with the urban transportation system, brought about mostly by changes in the use of private and public transport modes. The cost efficiency of public transport decreased as a result of limits on the number of passengers per vehicle introduced by transport authorities.
PL
The COVID-19 pandemic caused many changes in the functioning of the economy and social life. This largely concerned transport, both individual and collective. The mobility patterns of city dwellers changed worldwide. These changes concerned the number of trips made, their durations and directions, as well as the transport modes chosen. The mobility changes were driven by the fear of travelling in crowded vehicles and by the extent of restrictions introduced by governments. The effects of such changes are hard to estimate, and their evaluation is a complex issue. Within the research project "Method of assessing the social impact of changes in personal mobility in an epidemic state together with tools to support transport management", part of the "IDUB against COVID" programme carried out at the Warsaw University of Technology, an attempt was made to build computational models reproducing transport behaviour before and during the pandemic. This in turn made it possible to estimate the magnitude of the changes in the social costs of the pandemic, using the standard methodology applied in social cost-benefit analyses (CBA), adapted to the nature and scope of the available data. Two periods of operation of the transport system were compared: the year 2019, treated as the pre-pandemic reference period, and the pandemic period of spring 2020 (the situation after restrictions were introduced). Because comparable situations had to be studied, multimodal (car traffic and public transport) travel models for Warsaw were developed for the morning peak hour in both periods. The models were built using available data on the transport network and traffic measurements, supplemented with data on SIM card movements obtained from a mobile network operator. The calculations made it possible to assess the social costs of the pandemic associated with the urban transport system, caused mainly by changes in the use of private and public transport.
The analyses confirmed that, as a result of the mobility restrictions and changes in travel behaviour, there was an overall decrease in the number of trips, which in turn reduced transport volumes measured in vehicle-km and vehicle-hours. These changes were calculated as differences between the pandemic phase and the reference period (the same period a year earlier). This made it possible to determine user benefits in the form of reduced costs: passenger time, vehicle operation, accidents, air pollution, climate change, and noise impact.
20
Content available Secure Big Data Model Based on Blockchain Technology
EN
Blockchain has grown rapidly in the cryptocurrency age and is one of the best information technologies for providing security and privacy for people's data in the crypto economy. In most cases, data tampering and data authentication problems tend to occur when data is shared and stored on centralized servers. With the assistance of blockchain technology, big data can be managed and saved in the cloud, and technologies that enhance security by keeping out malicious users can be applied. Therefore, this paper has two aims: to discover the advantages and disadvantages of existing secure big data models, and to develop a conceptual secure big data model based on blockchain technology. The design science method is used for the purposes of this study. The developed conceptual secure big data model consists of three main processes: dataset storage and encryption, verification and consensus, and an access control mechanism. The findings of this study show that the developed conceptual secure big data model offers a mix of both traditional and modern security measures, which helps domain practitioners understand the security concepts of blockchain along with big data.
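The tamper-evidence property that blockchain lends to stored big data, where each block commits to the hash of its predecessor so that altering any stored record breaks the chain, can be sketched minimally as follows (an illustration of the concept only, not the paper's model):

```python
import hashlib, json

def make_block(data, prev_hash):
    """Create a block whose hash covers both its data and the previous hash."""
    body = {"data": data, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def chain_valid(chain):
    """Recompute every hash and check each block points at its predecessor."""
    prev = "0" * 64
    for b in chain:
        body = {"data": b["data"], "prev": b["prev"]}
        if b["prev"] != prev:
            return False
        if b["hash"] != hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        prev = b["hash"]
    return True

chain, prev = [], "0" * 64
for record in ["dataset-part-1", "dataset-part-2"]:
    block = make_block(record, prev)
    chain.append(block)
    prev = block["hash"]
```

In the paper's model the consensus process plays the role of `chain_valid` across many nodes, and the access control mechanism governs who may append blocks at all.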