This study investigates the vulnerability of Network Intrusion Detection Systems (NIDS) to adversarial attacks. A prototype method was implemented to identify optimal perturbations in network traffic that evade NIDS detection. Results on the CTU-13 dataset demonstrated the effectiveness of these attacks, reducing detection accuracy from 99.99% to approximately 40% in the best case. These findings underscore the need to strengthen the resilience of machine-learning-based detection systems against such threats.
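The abstract does not specify which attack algorithm or feature representation the prototype uses, so the sketch below is only an illustration of the general idea: an FGSM-style, gradient-based perturbation of numeric flow features against a differentiable stand-in for an ML-based NIDS. The model, the feature count, and the `fgsm_evasion` helper are assumptions for illustration, not the thesis method.

```python
# Illustrative sketch only: the thesis does not specify the attack algorithm,
# model, or features. We assume a small differentiable classifier over numeric
# flow features and an FGSM-style perturbation to show how gradient-based
# evasion of an ML-based NIDS can work in principle.
import torch
import torch.nn as nn

torch.manual_seed(0)

N_FEATURES = 4  # hypothetical flow features, e.g. duration, bytes, packets, IAT

# Stand-in NIDS classifier (in practice it would be trained on CTU-13 flows).
model = nn.Sequential(nn.Linear(N_FEATURES, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()

def fgsm_evasion(x, true_label, epsilon=0.1):
    """Perturb a malicious flow so that the classifier's loss on the true
    (malicious) label increases, pushing the prediction toward benign."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), true_label)
    loss.backward()
    # Move every feature in the direction that increases the loss.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

flow = torch.randn(1, N_FEATURES)   # one synthetic flow-feature vector
label = torch.tensor([1])           # class 1 = malicious
adv_flow = fgsm_evasion(flow, label)
print(model(flow).argmax(dim=1), model(adv_flow).argmax(dim=1))
```

In a real evaluation the perturbation would additionally have to keep the flow features valid (non-negative byte and packet counts, protocol constraints), which is part of what makes evasion in the network-traffic domain harder than in the image domain.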
Recent studies show that deep neural networks (DNNs) are highly vulnerable to carefully crafted adversarial examples. Adversarial training, which uses adversarial examples as training data, has proven to be one of the most effective defenses against adversarial attacks. However, most existing adversarial training methods generate adversarial examples from first-order gradients, which perform poorly against second-order adversarial attacks and make it difficult to further improve model robustness. In contrast to first-order gradients, second-order gradients provide a more accurate approximation of the loss landscape around natural examples. Our work therefore focuses on constructing second-order adversarial examples and using them to train DNNs. However, second-order optimization involves computing the inverse of the Hessian, which is typically very time-consuming. To address this issue, we propose an approximation method that reformulates the problem as optimization within a Krylov subspace. Unlike working in the full Euclidean space, the Krylov subspace method does not require storing the entire matrix; it only stores vectors and intermediate results, avoiding explicit computation of the complete Hessian. We approximate the adversarial direction by a linear combination of Hessian-vector products in the Krylov subspace, which reduces the computational cost. Because the Hessian matrix is non-symmetric in this setting, we use the generalized minimal residual (GMRES) method to search for an approximate polynomial solution of the resulting linear system. Our method efficiently reduces computational complexity and accelerates the training process. Extensive experiments on the MNIST, CIFAR-10, and ImageNet-100 datasets demonstrate that adversarial training with our second-order adversarial examples outperforms first-order methods, leading to improved model robustness against various attacks.
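The following minimal sketch illustrates the core idea as described above, not the authors' implementation: a Newton-like adversarial direction d is obtained by solving H d = g with GMRES, where each Krylov iteration only needs a Hessian-vector product computed by double backpropagation, so the Hessian is never formed or inverted explicitly. The toy model, input size, and step size `epsilon` are illustrative assumptions.

```python
# Minimal sketch of the idea described above (not the authors' code): solve
# H d = g for a Newton-like adversarial direction d with GMRES, using only
# Hessian-vector products so that H is never formed or inverted explicitly.
# The toy model, input size, and step size are illustrative assumptions.
import numpy as np
import torch
from scipy.sparse.linalg import LinearOperator, gmres

torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Linear(10, 16), torch.nn.Tanh(),
                            torch.nn.Linear(16, 2))
loss_fn = torch.nn.CrossEntropyLoss()

x = torch.randn(1, 10, requires_grad=True)  # natural example (toy input)
y = torch.tensor([1])                       # its true label

# Input gradient, kept in the graph so Hessian-vector products are possible.
loss = loss_fn(model(x), y)
(grad_x,) = torch.autograd.grad(loss, x, create_graph=True)
g = grad_x.flatten()

def hvp(v):
    """Hessian-vector product H v via double backpropagation."""
    v_t = torch.as_tensor(v, dtype=torch.float32)
    (hv,) = torch.autograd.grad(g @ v_t, x, retain_graph=True)
    return hv.flatten().detach().numpy().astype(np.float64)

n = g.numel()
H_op = LinearOperator((n, n), matvec=hvp, dtype=np.float64)

# GMRES searches the Krylov subspace span{g, Hg, H^2 g, ...} for the best
# least-squares solution of H d = g, i.e. a polynomial in H applied to g.
d, info = gmres(H_op, g.detach().numpy().astype(np.float64),
                restart=10, maxiter=10, atol=1e-8)

# Take a small step along the (normalized) second-order direction.
epsilon = 0.1
step = torch.as_tensor(d / (np.linalg.norm(d) + 1e-12), dtype=torch.float32)
x_adv = (x.detach().flatten() + epsilon * step).reshape(x.shape)
```

Restricting GMRES to a few iterations corresponds to approximating the Hessian-inverse-vector product by a low-degree polynomial in H applied to g, which is where the savings over explicitly inverting the Hessian come from.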