Research on intrusion-detection systems (IDSs) has increased in recent years. In particular, this research widely utilizes machine-learning concepts, which have proven effective for IDSs; deep neural network-based models in particular have improved detection rates. At the same time, these models are becoming very complex, and users cannot trace the reasoning behind the decisions that are made; this indicates the necessity of identifying the explanations behind those decisions to ensure the interpretability of the model. To this end, this article proposes a model that can explain its predictions. The proposed framework combines a conventional deep neural network-based IDS with interpretability of the model's predictions. It utilizes Shapley additive explanations (SHAP), which combine local and global explainability to enhance the interpretation of an IDS. The proposed model was implemented on popular data sets (NSL-KDD and UNSW-NB15), and the performance of the framework was evaluated in terms of accuracy, achieving 99.99% and 99.96%, respectively. The proposed framework can identify the top-4 features using local explainability and the top-20 features using global explainability.
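The local-versus-global explainability idea behind SHAP can be sketched with exact Shapley values on a toy model. This is only an illustrative sketch, not the paper's method: the paper uses a deep neural network on NSL-KDD/UNSW-NB15 features (typically via the `shap` library), whereas here a small linear model, the weights, the baseline, and the sample instances are all assumed purely for demonstration, since exact Shapley values are tractable for a handful of features.

```python
from itertools import combinations
from math import factorial

# Hypothetical linear "model": prediction = sum(w_i * x_i).
# Illustrative stand-in for the paper's deep neural network.
weights = [0.5, -1.0, 2.0, 0.25]

def predict(x):
    return sum(w * xi for w, xi in zip(weights, x))

def shapley_values(x, baseline):
    """Exact Shapley values for one prediction: the weighted average
    marginal contribution of each feature over all coalitions, where
    features outside the coalition are held at the baseline value."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in combinations(others, size):
                def value(S):
                    # Coalition S uses the instance's values; the rest
                    # stay at the baseline.
                    z = [x[j] if j in S else baseline[j] for j in range(n)]
                    return predict(z)
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += w * (value(set(subset) | {i}) - value(set(subset)))
    return phi

baseline = [0.0, 0.0, 0.0, 0.0]

# Local explainability: attribute one prediction, then rank features
# by the magnitude of their Shapley value (the paper's "top-4").
x = [1.0, 2.0, 3.0, 4.0]
phi = shapley_values(x, baseline)
local_ranking = sorted(range(len(phi)), key=lambda i: abs(phi[i]), reverse=True)

# Global explainability: average |Shapley value| across many instances
# to rank features model-wide (the paper's "top-20").
instances = [[1.0, 2.0, 3.0, 4.0], [0.5, 1.0, 0.0, 2.0], [2.0, 0.0, 1.0, 1.0]]
global_importance = [
    sum(abs(shapley_values(z, baseline)[i]) for z in instances) / len(instances)
    for i in range(4)
]
```

For a linear model the Shapley value of feature i reduces to `w_i * (x_i - baseline_i)`, and the values satisfy the efficiency property: they sum to the difference between the prediction at `x` and at the baseline. With a real DNN the same scheme applies in principle, but exact enumeration is infeasible over dozens of features, which is why SHAP relies on approximations.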