Search results
Searched in keywords: explainable AI
Results found: 2
EN
The proliferation of computer-oriented and information-digitalisation technologies has become a hallmark across various sectors in today's rapidly evolving environment. Among these, agriculture emerges as a pivotal sector in need of seamless incorporation of high-performance information technologies to address the pressing needs of national economies worldwide. The aim of the present article is to substantiate scientific and applied approaches to improving the efficiency of computer-oriented agrotechnical monitoring systems by developing an intelligent software component that predicts the probability of corn diseases occurring during the full cultivation cycle. The object of the research is the non-stationary processes of intelligent transformation and predictive analytics of soil and climatic data, which are factors in the occurrence and development of corn diseases. The subject of the research is methods and explainable-AI models for the intelligent predictive analysis of measurement data on the soil and climatic conditions of agricultural enterprises specialising in growing corn. The main scientific and practical outcome of the research is the advancement of IoT technologies for agrotechnical monitoring through a computer-oriented model based on the ANFIS technique and the synthesis of structural and algorithmic provisions for identifying and predicting the probability of corn diseases during the full cultivation cycle.
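The abstract names the ANFIS technique but gives no implementation details. As a rough illustration only, the sketch below shows the forward pass of a first-order Sugeno fuzzy system (the layered structure ANFIS trains) in NumPy; the input features (soil temperature, air humidity), membership-function parameters, and consequent coefficients are all invented for this example and are not from the article.

```python
import numpy as np

def gauss(x, centre, width):
    """Gaussian membership function."""
    return np.exp(-((x - centre) ** 2) / (2.0 * width ** 2))

def anfis_forward(temp, hum, params):
    """Forward pass of a first-order Sugeno fuzzy system (ANFIS layers 1-5)
    for two hypothetical inputs: soil temperature and air humidity."""
    # Layer 1: fuzzify each input with 'low'/'high' Gaussian sets
    mf_t = [gauss(temp, c, s) for c, s in params["temp_mf"]]
    mf_h = [gauss(hum, c, s) for c, s in params["hum_mf"]]
    # Layer 2: rule firing strengths (product T-norm), one rule per MF pair
    w = np.array([mt * mh for mt in mf_t for mh in mf_h])
    # Layer 3: normalise firing strengths so they sum to 1
    wn = w / w.sum()
    # Layer 4: first-order consequents f_k = p*temp + q*hum + r
    f = np.array([p * temp + q * hum + r for p, q, r in params["conseq"]])
    # Layer 5: weighted sum -> disease-risk score, clipped to [0, 1]
    return float(np.clip(np.dot(wn, f), 0.0, 1.0))

# Illustrative parameters (invented): (centre, width) pairs and rule consequents
params = {
    "temp_mf": [(10.0, 5.0), (25.0, 5.0)],
    "hum_mf":  [(40.0, 15.0), (85.0, 15.0)],
    "conseq":  [(0.001, 0.002, 0.05), (0.002, 0.004, 0.10),
                (0.004, 0.006, 0.20), (0.008, 0.009, 0.30)],
}
risk = anfis_forward(22.0, 78.0, params)
```

In a trained ANFIS, the membership parameters and consequent coefficients above would be fitted to measurement data (here, the soil and climatic series the article describes) rather than fixed by hand.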
EN
Research on intrusion-detection systems (IDSs) has been increasing in recent years. In particular, this research widely utilizes machine-learning concepts, which have proven effective for IDSs; deep neural network-based models in particular have improved detection rates. At the same time, these models are becoming very complex, and users cannot trace the reasons behind the decisions they make; this indicates the need to identify the explanations behind those decisions to ensure the interpretability of the framed model. In this respect, this article presents a proposed model that can explain the predictions it produces. The proposed framework combines a conventional deep neural network-based IDS with interpretability of the model's predictions. It uses Shapley additive explanations (SHAP), which combine local and global explainability to enhance the interpretation of IDS decisions. The model was implemented on popular data sets (NSL-KDD and UNSW-NB15), and the performance of the framework was evaluated by accuracy, achieving 99.99% and 99.96%, respectively. The framework can identify the top-4 features using local explainability and the top-20 features using global explainability.
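The abstract does not show how SHAP attributions are computed. As a minimal sketch of the underlying idea, the code below computes exact Shapley values by enumerating feature coalitions, with "absent" features replaced by a baseline value (the convention SHAP approximates at scale). The linear model, feature vector, and baseline are invented for illustration; a real IDS would use a trained deep network and the `shap` library's approximations instead of exact enumeration.

```python
import numpy as np
from itertools import combinations
from math import factorial

def exact_shapley(f, x, baseline):
    """Exact Shapley values for prediction f(x): features absent from a
    coalition are replaced by the baseline value. O(2^n), so only viable
    for small n; SHAP approximates this for real models."""
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley kernel weight: |S|! (n-|S|-1)! / n!
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                z_with, z_without = baseline.copy(), baseline.copy()
                for j in S:
                    z_with[j] = x[j]
                    z_without[j] = x[j]
                z_with[i] = x[i]
                phi[i] += weight * (f(z_with) - f(z_without))
    return phi

# Toy linear "model" (invented): for f(z) = w.z the Shapley value of
# feature i is w_i * (x_i - baseline_i)
w = np.array([0.5, -1.0, 2.0])
f = lambda z: float(np.dot(w, z))
x = np.array([1.0, 2.0, 3.0])
base = np.zeros(3)
phi = exact_shapley(f, x, base)
# Ranking features by |phi| mirrors the "top-k features" selection in the article
top = np.argsort(-np.abs(phi))
```

The efficiency property holds by construction: the attributions sum to `f(x) - f(baseline)`, which is what makes per-prediction (local) rankings and averaged (global) rankings comparable.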