Results found: 162
Search results
Searched in keywords: ontology
EN
The article considers an approach to implementing the architecture of a microservice system for processing large volumes of data, based on an event-oriented approach to managing the sequence in which individual microservices are used. This becomes especially important when processing large volumes of data from information sources with different performance levels, when the task is to minimize the total time for processing data streams. In this case, as a rule, the task is to minimize the number of requests to information sources needed to obtain a sufficient amount of data relevant to the request. The efficiency of the entire software system depends on how the microservices that provide extraction and primary processing of the received data are managed. To obtain the required amount of relevant data from diverse information sources, the software system must adapt to the request during its operation so that the maximum number of requests is directed to the sources with the highest probability of containing the data necessary for the request. An approach is proposed that allows adaptively managing the choice of microservices during data collection, driven by emerging events, and thus forming a choice of information sources based on an assessment of the efficiency of obtaining relevant information from these sources. Events are generated as a result of data extraction and primary processing from particular sources, in terms of assessing the availability of data relevant to the request in each of the sources considered within the framework of the selected search scenario. The event-oriented microservice architecture adapts the system operation to the current loads on individual microservices and to the overall performance by analysing the relevant events. The use of an adaptive event-oriented microservice architecture can be especially effective in the development of various information and analytical systems built around real-time data collection and designed scenarios of analytical activity. The article considers the features of synchronous and asynchronous options in the implementation of event-oriented architecture, which can be used in various software systems depending on their purpose. An analysis of the features of synchronous and asynchronous options in the implementation of event-oriented architecture, their quantitative parameters, and the features of their use depending on the type of tasks is carried out.
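The abstract above describes the adaptive mechanism only in prose; as a hedged illustration (not the authors' implementation), the following Python sketch shows the core idea: extraction calls emit relevance events that update per-source scores, and each subsequent request is routed to the source currently most likely to return relevant data. All names (SourceStats, fetch_from_source, the source list) are hypothetical.

```python
# Minimal sketch (not from the article): event-driven, adaptive routing of
# requests to data sources based on observed relevance of returned data.
import asyncio
import random

class SourceStats:
    """Tracks how often a source returned data relevant to the request."""
    def __init__(self, name):
        self.name = name
        self.requests = 0
        self.relevant = 0

    @property
    def score(self):
        # Laplace-smoothed estimate of the probability of relevant data.
        return (self.relevant + 1) / (self.requests + 2)

async def fetch_from_source(source):
    # Placeholder for an extraction microservice call; returns a "relevance event".
    await asyncio.sleep(random.uniform(0.01, 0.05))
    return {"source": source.name, "relevant": random.random() < 0.5}

async def collect(source_names, total_requests=20):
    stats = {name: SourceStats(name) for name in source_names}
    for _ in range(total_requests):
        # Route the next request to the source with the best current score.
        best = max(stats.values(), key=lambda s: s.score)
        event = await fetch_from_source(best)
        best.requests += 1
        best.relevant += int(event["relevant"])
    return {s.name: round(s.score, 2) for s in stats.values()}

if __name__ == "__main__":
    print(asyncio.run(collect(["source_a", "source_b", "source_c"])))
```

In a real microservice system the fetch_from_source stub would be an asynchronous call to an extraction service, and the relevance events would typically arrive over a message broker rather than as return values.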
EN
This research article discusses a new paradigm in smart system development using the 4-layer framework and activity theory from the perspectives of ontology, epistemology, and axiology. The study aims to understand how this paradigm can influence the development of smart systems and provide insights into its theoretical and practical implications. The 4-layer modern system comprises instrumentation, information systems, business intelligence, and gamification, which are the core components of a smart system. Each layer plays a crucial role in data collection, information processing, business analysis, and gamification implementation at the top layer. The integration of these layers forms a solid foundation for the development of efficient and innovative smart systems. In addition, activity theory is utilized to analyze the interactions among users, technology, and the environment within the context of smart systems. From an ontology standpoint, this research views smart systems as complex socio-technical entities involving human, technological, and process aspects. In terms of epistemology, a multidisciplinary approach is employed to combine knowledge from areas such as computer science, information systems, and human-computer interaction. In the realm of axiology, this study recognizes the ethical values and social implications that must be considered in the development and implementation of smart systems. By integrating the new smart system paradigm using the 4-layer modern systems and activity theory, this research contributes to the understanding of the dynamics and development potential of smart systems. The results of this study can provide guidance for practitioners, researchers, and decision-makers in developing more effective, efficient, and user-oriented smart systems in various contexts.
3. Violence prediction in surveillance videos
EN
Forecasting violence has become a critical obstacle in the field of video monitoring to guarantee public safety. Lately, YOLO (You Only Look Once) has become a popular and effective method for detecting weapons. However, identifying and forecasting violence remains a challenging endeavor, and the classification results need to be enhanced with semantic information. This study suggests a method for forecasting violent incidents by utilizing Yolov9 and ontology. The authors employed Yolov9 to identify and categorize weapons and individuals carrying them. Ontology is utilized for semantic prediction to assist in predicting violence: semantic prediction happens through the application of a SPARQL query to the identified frame label. The authors developed a Threat Events Ontology (TEO) to gain semantic significance. The system was tested with a fresh dataset obtained from a variety of security cameras and websites. The VP Dataset comprises 8739 images categorized into 9 classes. The authors examined the outcomes of using Yolov9 in conjunction with ontology in comparison to using Yolov9 alone. The findings show that by combining Yolov9 with ontology, the violence prediction system's semantics and dependability are enhanced. The suggested system achieved a mean Average Precision (mAP) of 83.7%, a precision of 88%, and a recall of 76.4%, whereas Yolov9 without the TEO ontology achieved an mAP of 80.4%. This suggests that the method has considerable potential for enhancing public safety. The authors performed all training and testing on Google Colab's GPU, which reduced the average duration by approximately 90.9%. The result of this work is a new class of object detectors that utilize ontology to improve semantic significance for real-time end-to-end object detection.
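The article's SPARQL queries and the TEO schema are not reproduced in the abstract; the rdflib sketch below only illustrates the described mechanism of applying a SPARQL query to the label of an identified frame. The ontology terms and namespace are placeholders.

```python
# Illustrative only: query a tiny, hypothetical threat-events graph with the
# label produced by an object detector (the real TEO schema is not shown here).
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF

TEO = Namespace("http://example.org/teo#")

g = Graph()
g.add((TEO.Knife, RDF.type, TEO.Weapon))
g.add((TEO.Knife, TEO.threatLevel, Literal("high")))
g.add((TEO.Pistol, RDF.type, TEO.Weapon))
g.add((TEO.Pistol, TEO.threatLevel, Literal("high")))

detected_label = "Knife"  # e.g. the class name returned by the detector

query = """
PREFIX teo: <http://example.org/teo#>
SELECT ?level WHERE {
    ?weapon a teo:Weapon ;
            teo:threatLevel ?level .
}
"""
# Bind the detector label (mapped to an ontology term) into the query.
for row in g.query(query, initBindings={"weapon": TEO[detected_label]}):
    print(f"Detected {detected_label}: predicted threat level = {row.level}")
```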
4. Linked Labor Market Data: Towards a novel data housing strategy
EN
The labor market is a domain rich in diverse data structures, both quantitative and qualitative, and numerous applications. This leads to challenges in the domain of data warehouse architecture and linked data. In this context, only a few approaches exist to generate linked data sets. For example, the multilingual classification system of European Skills, Competences, Qualifications, and Occupations (ESCO) and the German Labor Market Ontology (GLMO) serve as prominent examples showcasing the pivotal role of ontologies.
5. An Ontology to Understand Programming Cocktails
EN
An ever-growing landscape of programming technologies (tools, languages, libraries and frameworks) has rapidly become the norm in many domains of computer programming, with Web Development being the most noticeable example. The concurrent use of many compartmentalised technologies has advantages: it allows for flexibility in implementation, while also improving reusability. On the other hand, this proliferation tends to create convoluted development workflows that must be (painstakingly) planned, managed and maintained. The combination of multiple languages, libraries, frameworks and tools (Ingredients) in a single project effectively forms a Programming Cocktail that can rapidly become cognitively and financially onerous. Aiming at understanding these complex situations, an ontology was created to provide a formal and structured analysis of these cocktails. It emerged from a survey of technologies that several companies are currently using to develop their systems, and aims to provide support for better understanding, classifying and characterising Programming Cocktails. This paper presents not only the ontology itself, but also the consequent knowledge that was constructed and structured through its development.
EN
The process of open innovation based on advanced materials involves the collaborative sharing of knowledge, ideas, and resources among different organizations such as academic institutions, businesses, and government agencies. To accelerate the development of new materials and technologies and to address complex material challenges, it is suggested that Business Process Modeling and Notations (BPMN) and Elementary Multiperspective Material Ontology (EMMO) be closely integrated. In this paper, we examine the integration of EMMO and BPMN through an initial investigation with the aim of streamlining workflows, enhancing communication, and improving the understanding of materials knowledge. We propose a four-step approach to integrate both ontologies which involves ontology alignment, mapping, integration, and validation. Our approach supports faster and more cost-effective research and development processes, leading to more effective and innovative solutions.
7. Design and application of an ontology to identify crop areas and improve land use
EN
Agricultural development in Colombia has traditionally been carried out in a local way, in which important basic aspects for the best performance of crops are not always considered. These characteristics are reflected in official government documentation, which is distinguished by its heterogeneity. Ontologies in the domain of agriculture allow information to be organized and structured to represent knowledge in such a way that homogeneity is achieved for agricultural data dispersed across different types of documents, such as manuals, weather reports, and official technical sheets. In accordance with the above, this work presents the development of an ontology in the agricultural domain, built with the Methontology methodology, to facilitate the identification of cultivation areas and improve land use by relating the basic concepts required for effective crop development according to the specifications and recommendations proposed in Colombian government documentation. This is achieved with the application of description logic that, based on rules, generates inferences to identify the cultivation options and cultivable areas with the highest performance. The interaction with and use of the ontological model and inference rules are done through a web application built with Python and Flask. The precision of the model is evaluated using historical data on crops produced, comparing the real data with the results obtained through the ontological model, achieving 80% reliability.
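Since the abstract explicitly mentions exposing the ontological model through a Python/Flask web application, a minimal sketch of that pattern is shown below; the ontology file name, namespace, and properties are assumptions, not the authors' actual model.

```python
# Minimal sketch of the pattern described above: a Flask endpoint that queries an
# agricultural ontology for crops suited to a given region. The ontology file,
# namespace and properties are hypothetical.
from flask import Flask, jsonify, request
from rdflib import Graph, Literal

app = Flask(__name__)
g = Graph()
g.parse("agro_ontology.ttl", format="turtle")  # assumed local ontology file

QUERY = """
PREFIX agro: <http://example.org/agro#>
SELECT ?crop WHERE {
    ?crop a agro:Crop ;
          agro:suitableForRegion ?region .
}
"""

@app.route("/crops")
def crops_for_region():
    # e.g. GET /crops?region=Cundinamarca
    region = request.args.get("region", "")
    rows = g.query(QUERY, initBindings={"region": Literal(region)})
    return jsonify([str(r.crop) for r in rows])

if __name__ == "__main__":
    app.run(debug=True)
```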
EN
Purpose: The aim of the publication was to visualize the process of creating knowledge on the example of the CP Factory production line. For this purpose, data contained in relational databases and data from the operating production line were used. These data were converted to the form required by the CogniPy environment to create a semantic ontology for product personalization. Design/methodology/approach: The available software based on ontologies and knowledge graphs has opened the possibility of joint human-computer reasoning, especially in production management. Findings: In the course of the work carried out, the conceptualization of the knowledge contained in the production management system was brought to an ontological form. At the moment, the information contained in the ontology does not differ much from the data existing in the relational database; however, further modelling of the ontology can be directed towards the creation of rules, logic and axioms of production processes. Practical implications: The operations and transformations performed demonstrate the use of CogniPy in the process of creating an ontology and materializing the graph and queries. The created ontology takes the form of a universal set of knowledge, which makes it open to wide integration with other systems. Originality/value: The publication shows the possibilities of using the CogniPy environment in the construction of an ontology and semantic product personalization.
EN
This article explores the use of ontology for semi-automatic marine vessel navigation and ship-to-ship communication to mitigate collision risk. Semi-automatic vessel communication is a step towards automatic communication for autonomous ships. Examples of how such communication can be used are discussed, based on a comprehensive analysis of selected marine collisions, with particular attention to the communication conducted on ships. The effectiveness of such communication was assessed and compared. The suggested solutions are based on a review of official reports from accident investigations. The novelties of this work include original ontologies and interfaces. Through this work, it could be possible to fully automate communication processes between ships. In future work, these research results will be used to create a system of automatic communications for manned and autonomous vessels.
EN
Design of distributed complex systems raises several important challenges, such as confidentiality, data authentication and integrity, semantic contextual knowledge sharing, as well as a common and intelligible understanding of the environment. Among the many challenges are the semantic heterogeneity that occurs during dynamic knowledge extraction and the authorization decisions which need to be taken when a resource is accessed in an open, dynamic environment. Blockchain offers the tools to protect sensitive personal data and solve reliability issues by providing a secure communication architecture. However, setting up blockchain-based applications comes with many challenges, including processing and fusing heterogeneous information from various sources. The ontology model explored in this paper relies on a unified knowledge representation method and thus is the backbone of a distributed system aiming to tackle semantic heterogeneity and to model decentralized management of access control authorizations. We intertwine the blockchain technology with an ontological model to enhance knowledge management processes for distributed systems. Therefore, rather than relying on the mediation of a third party, the approach enhances autonomous decision-making. The proposed approach collects data generated by sensors into higher-level abstractions using n-ary hierarchical structures to describe entities and actions. Moreover, the proposed semantic architecture relies on Hyperledger Fabric to ensure the checking and authentication of knowledge integrity while preserving privacy.
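To avoid guessing at Hyperledger Fabric APIs, the sketch below covers only the ontology side of the authorization decision described above: a SPARQL ASK query over a toy access-control graph built with rdflib. All classes and properties are invented for illustration.

```python
# Illustration only (no Hyperledger code): an access-control decision expressed as
# a SPARQL ASK query over a small, hypothetical authorization ontology.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

AC = Namespace("http://example.org/access#")

g = Graph()
g.add((AC.alice, RDF.type, AC.Nurse))
g.add((AC.Nurse, AC.mayRead, AC.VitalSignsRecord))
g.add((AC.record42, RDF.type, AC.VitalSignsRecord))

ASK = """
PREFIX ac: <http://example.org/access#>
ASK {
    ?subject a ?role .
    ?role ac:mayRead ?resourceType .
    ?resource a ?resourceType .
}
"""
result = g.query(ASK, initBindings={"subject": AC.alice, "resource": AC.record42})
print("access granted" if result.askAnswer else "access denied")
```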
EN
Glossary of Terms extraction from textual requirements is an important step in ontology engineering methodologies. Although initially it was intended to be performed manually, recent years have shown that some degree of automatization is possible. Based on these promising approaches, we introduce a novel, human-interpretable, rule-based method named ReqTagger, which can automatically extract candidates for ontology entities (classes or instances) and relations (data or object properties) from textual requirements. We compare ReqTagger to existing automatic methods on an evaluation benchmark consisting of over 550 requirements tagged with over 1700 entities and relations expected to be extracted. We discuss the quality of ReqTagger and provide details showing why it outperforms other methods. We also publish both the evaluation dataset and the implementation of ReqTagger.
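ReqTagger's actual rules are not reproduced here; the following sketch only conveys the flavour of rule-based candidate extraction from a requirement, using spaCy noun chunks as entity candidates and verbs as relation candidates. It is a simplification under stated assumptions, not the ReqTagger implementation.

```python
# Simplified, illustrative rule-based extraction (NOT the actual ReqTagger rules):
# noun chunks become entity candidates, verbs become relation candidates.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def extract_candidates(requirement: str):
    doc = nlp(requirement)
    entities = [chunk.text for chunk in doc.noun_chunks]
    relations = [tok.lemma_ for tok in doc if tok.pos_ == "VERB"]
    return entities, relations

entities, relations = extract_candidates(
    "The system shall store the user profile in the database."
)
print("entity candidates:  ", entities)   # e.g. ['The system', 'the user profile', 'the database']
print("relation candidates:", relations)  # e.g. ['store']
```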
12. Ontology-Based Semantic Checking of Data
EN
Semantic checking of railway infrastructure information support data is one of the ways to improve the consistency of information system data and, as a result, increase the safety of train traffic. Existing ontological developments have demonstrated the applicability of description logic for modelling railway transport, but have not paid enough attention to the structure of data resources and to railway regulatory support. In this work, the tabular presentation of data and the rules of railway transport regulations are formalized, using the example of a connection track passport and temporary speed restrictions, by means of ontological tools as well as data wrangling and extraction tools. Ontologies of data resources in various formats and of railway station infrastructure, as well as tools for converting and extracting data, have been developed. The semantic checking of the compliance of railway information system data with regulatory documents, in terms of the connection track passport, is carried out on the basis of a multi-level concretization model and the integration of ontologies. The mechanisms for implementing the constituent ontologies and their integration are demonstrated by an example. Further research includes ontological checking of natural-language normative documents of railway transport.
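As a hedged illustration of the kind of check described (not the authors' ontologies), the sketch below converts one row of a tabular railway data resource into RDF triples with rdflib and verifies a simple speed-restriction rule with a SPARQL ASK query; the vocabulary is hypothetical.

```python
# Illustration only: one row of a tabular railway data resource converted to RDF,
# then checked against a simple consistency rule. Vocabulary below is hypothetical.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, XSD

RAIL = Namespace("http://example.org/rail#")

row = {"track": "connection-7", "max_speed_kmh": 60, "restricted_speed_kmh": 80}

g = Graph()
track = RAIL[row["track"]]
g.add((track, RDF.type, RAIL.ConnectionTrack))
g.add((track, RAIL.maxSpeed, Literal(row["max_speed_kmh"], datatype=XSD.integer)))
g.add((track, RAIL.temporaryRestriction,
       Literal(row["restricted_speed_kmh"], datatype=XSD.integer)))

# Rule: a temporary restriction must not exceed the track's permitted maximum speed.
VIOLATION = """
PREFIX rail: <http://example.org/rail#>
ASK {
    ?t rail:maxSpeed ?max ;
       rail:temporaryRestriction ?restricted .
    FILTER (?restricted > ?max)
}
"""
print("inconsistent data" if g.query(VIOLATION).askAnswer else "data consistent")
```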
13. Usage of deep learning in recent applications
EN
Purpose: Deep learning is a predominant branch of machine learning, inspired by the operation of the human biological brain in processing information and capturing insights. Machine learning evolved into deep learning, which helps to reduce the involvement of an expert. In machine learning, performance depends on the features the expert extracts manually, whereas deep neural networks are capable of extracting features on their own. Design/methodology/approach: Deep learning performs better with large amounts of data than traditional machine learning algorithms, and deep neural networks can also give better results with different kinds of unstructured data. Findings: Deep learning is an inevitable approach in real-world applications such as computer vision, where information from the visual world is extracted; in the field of natural language processing, involving analyzing and understanding human languages in a meaningful way; in the medical area, for diagnosis and detection; in the forecasting of weather and other natural processes; in the field of cybersecurity, to keep computer systems and networks functioning continuously and protect them from attack or harm; in the field of navigation; and so on. Practical implications: Due to these advantages, deep learning algorithms are applied to a variety of complex tasks. With the help of deep learning, tasks that had been considered unachievable can be solved. Originality/value: This paper provides a brief study of real-world application problem domains with deep learning solutions.
14. Elaboration of financial fraud ontology
EN
Financial frauds have changed dynamically, and fraudsters are becoming more sophisticated. There has been an estimated global loss of 5.127 trillion each year due to various forms of financial fraud. Industries like banking, insurance, e-commerce and telecommunication are the main victims of financial frauds. Several techniques have been proposed and applied to understand and detect financial frauds. In this paper we propose an ontology to describe financial frauds and related knowledge. The aim of this ontology is to provide a semantic framework for the detection of financial frauds. A theoretical ontology has been elaborated by exploring various sources of information. After describing the research objectives, related works and research methodology, this paper presents details of the theoretical ontology, followed by its validation using real data sets. Discussion of the obtained results gives some perspectives for future work.
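The paper's ontology is not reproduced in the abstract; the rdflib sketch below merely illustrates the general shape of such a model, with a toy fraud class hierarchy and a query that flags transactions typed with any fraud subclass. All terms are invented.

```python
# Toy illustration (not the paper's ontology): a small fraud class hierarchy and a
# query that lists transactions typed with any subclass of Fraud.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS

FR = Namespace("http://example.org/fraud#")

g = Graph()
g.add((FR.IdentityTheft, RDFS.subClassOf, FR.Fraud))
g.add((FR.CardNotPresentFraud, RDFS.subClassOf, FR.Fraud))
g.add((FR.tx_1001, RDF.type, FR.CardNotPresentFraud))

QUERY = """
PREFIX fr: <http://example.org/fraud#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?tx ?kind WHERE {
    ?kind rdfs:subClassOf fr:Fraud .
    ?tx a ?kind .
}
"""
for row in g.query(QUERY):
    print(f"{row.tx} flagged as {row.kind}")
```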
EN
Phenomenological studies are of fundamental significance to the discipline of architecture and urban design. Gaining insight into the transformation of space by coming into contact with space “in and of itself” carries essential weight in the era of Heidegger's world picture. Such rarely encountered non-verbal analyses are presented by Andrzej Piotrowski in his book Architecture of Thought. The authors, while recognizing the weight and originality of Piotrowski's studies, point to the "axiological trap" of partially formulating evaluative hypotheses instead of epistemological analyses. We can therefore accuse them of presentism, an ahistorical perception of phenomena and mechanisms. In this context it is necessary to bring up the thought of Friedrich Nietzsche that only the present truly exists; the past and the future are illusions. His concept of time forces a phenomenological perception of reality, here and now, as well as an ontic reflection via the existential, individual experience of each and every one of us.
EN
Among studies on the transformation of space, there is a lack of analysis from a non-verbal, phenomenological perspective. The book Architecture of Thought by Andrzej Piotrowski, Associate Professor at the University of Minnesota, is one such extraordinary attempt. His study utilizes direct experience of space and all the instruments of architectural knowledge, enhancing our understanding of the heritage of culture and the history of monuments. It is focused on “the accumulation of thoughts”, the driving cause behind building, as a reversal of Theodor W. Adorno's “unthinkability”. This paper, while expressing appreciation and respect for the author's knowledge and research perspective, argues with his epistemological, and in essence axiological, theses. It proposes an outlook on the transformation of space through the prism of Martin Heidegger's fundamental ontology.
17. Human integration in an ontology-based IoT system
EN
IoT systems are a growing field of automation. In contrast to industrial applications, where the system is custom made for each customer or use case, home IoT systems can be composed and used in many, sometimes dangerous and unpredictable, ways. This paper presents a system that is based on a common ontology as a unified and universal method of representing the environment, including humans. Such an approach allows for easy integration of heterogeneous devices and a declarative definition of services, tasks, and rules ensuring human safety and/or comfort.
18. Improving Short Text Classification using Information from DBpedia Ontology
EN
With the emergence of social networks and micro-blogs, a huge number of short textual documents is generated on a daily basis, for which effective tools for organization and classification are needed. These short text documents have an extremely sparse representation, which is the main cause of poor classification performance. We propose a new approach in which we identify relevant concepts in short text documents with the use of the DBpedia Spotlight framework and enrich the text with information derived from the DBpedia ontology, which reduces the sparseness. We have developed six variants of text enrichment methods and tested them on four short text datasets using seven classification algorithms. The obtained results were compared to those of the baseline approach, among themselves, and also to some state-of-the-art text classification methods. Besides classification performance, the influence of the concept similarity threshold and the size of the training data were also evaluated. The results show that the proposed text enrichment approach significantly improves classification of short texts and is robust with respect to different input sources, domains, and sizes of available training data. The proposed text enrichment methods proved to be beneficial for classification of short text documents, especially when only a small number of documents is available for training.
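As an illustration of the concept-identification step, the sketch below calls the public DBpedia Spotlight demo endpoint; the enrichment variants evaluated in the paper are not reproduced, and the endpoint, parameters, and response fields follow the public demo API and may differ for a self-hosted Spotlight instance.

```python
# Sketch of concept identification with the public DBpedia Spotlight endpoint
# (the paper's six enrichment variants are not reproduced here).
import requests

def annotate(text, confidence=0.5):
    resp = requests.get(
        "https://api.dbpedia-spotlight.org/en/annotate",
        params={"text": text, "confidence": confidence},
        headers={"Accept": "application/json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("Resources", [])

for res in annotate("Berlin is the capital of Germany."):
    # Each identified concept carries a DBpedia URI that can be used to pull
    # ontology information (types, categories) for enriching the short text.
    print(res["@surfaceForm"], "->", res["@URI"])
```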
EN
The article proposes an approach to developing a specialized transdisciplinary system that ensures access to modern achievements in the fields of education, science, and technology. The approach involves designing an information-analytical system for supporting the educational and research activities of young students using the software platform “Trans-disciplinary Educational Dialogues of Applications' Ontology Systems” (TEDAOS). The TEDAOS software tools provide for the formation of ontological models in the form of a knowledge prism, which are proposed to be used to present the results of student activities, the results of scientific and technical research conducted in fundamental and applied research institutions, as well as curricula and educational and methodological materials. The information-analytical system being developed allows information resources and systems created in various formats according to different standards and technologies to be aggregated and integrated by using the ontological approach to knowledge representation to support youth educational and research activities.
EN
This paper deals with a methodology for the implementation of cloud manufacturing (CM) architecture. CM is a current paradigm in which dynamically scalable and virtualized resources are provided to users as services over the Internet. CM is based on the concept of cloud computing, which is essential in the Industry 4.0 trend. A CM architecture is employed to map users and providers of manufacturing resources. It reduces costs and development time during a product lifecycle. Some providers use different descriptions of their services, so we propose taking advantage of semantic web technologies such as ontologies to tackle this issue. Indeed, robust tools are proposed for mapping providers' descriptions and user requests to find the most appropriate service. The ontology defines the stages of the product lifecycle as services. It also takes into account the features of cloud computing (storage, computing capacity, etc.). The CM ontology will contribute to intelligent and automated service discovery. The proposed methodology is inspired by the ASDI framework (analysis-specification-design-implementation), which has already been used in the supply chain, healthcare and manufacturing domains. The aim of the new methodology is to propose an easy method of designing a library of components for a CM architecture. An example of the application of this methodology with a simulation model, based on the CloudSim software, is presented. The result can be used to help industrial decision-makers who want to design CM architectures.
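The paper's ontology and ASDI-based methodology are not reproduced here; the sketch below only illustrates, with plain Python and invented names, the basic service-matching idea of mapping a user request onto provider service descriptions that share a common vocabulary.

```python
# Toy sketch (not the paper's ontology or ASDI artefacts): matching a user request
# to provider service descriptions that share a common vocabulary of capabilities.
from dataclasses import dataclass, field

@dataclass
class Service:
    provider: str
    lifecycle_stage: str            # e.g. "design", "machining", "assembly"
    capabilities: set = field(default_factory=set)

catalog = [
    Service("provider_a", "machining", {"milling", "drilling"}),
    Service("provider_b", "machining", {"turning"}),
    Service("provider_c", "assembly", {"welding"}),
]

def match(stage, required):
    """Return providers offering the requested lifecycle stage and capabilities."""
    return [s.provider for s in catalog
            if s.lifecycle_stage == stage and required <= s.capabilities]

print(match("machining", {"milling"}))  # -> ['provider_a']
```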