Search keyword: Krylov subspace
Recent studies show that deep neural networks (DNNs) are extremely vulnerable to carefully crafted adversarial examples. Adversarial training, which uses adversarial examples as training data, has proven to be one of the most effective defenses against adversarial attacks. However, most existing adversarial training methods generate adversarial examples from first-order gradients, which perform poorly against second-order adversarial attacks and make it difficult to further improve model robustness. In contrast to first-order gradients, second-order gradients provide a more accurate approximation of the loss landscape around natural examples. Our work therefore focuses on constructing second-order adversarial examples and using them to train DNNs. Second-order optimization, however, involves computing the Hessian inverse, which is typically very time-consuming. To address this issue, we propose an approximation method that recasts the problem as optimization within a Krylov subspace. Unlike working in the full Euclidean space, the Krylov subspace method does not require storing the entire matrix; it only stores vectors and intermediate results, avoiding explicit computation of the complete Hessian. We approximate the adversarial direction by a linear combination of Hessian-vector products in the Krylov subspace to reduce the computational cost. Because the Hessian matrix is non-symmetric, we use the generalized minimal residual (GMRES) method to search for an approximate solution expressed as a matrix polynomial. Our method efficiently reduces computational complexity and accelerates the training process. Extensive experiments on the MNIST, CIFAR-10, and ImageNet-100 datasets demonstrate that adversarial training with our second-order adversarial examples outperforms first-order methods, leading to improved model robustness against various attacks.
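
A minimal sketch of the idea described in the abstract, not the authors' implementation: it assumes the second-order adversarial direction is obtained by approximately solving H d = g, where H and g are the Hessian and gradient of the loss with respect to the input, using GMRES driven only by Hessian-vector products. The function name, the sign step, and the Krylov dimension are illustrative assumptions.

import numpy as np
import torch
import torch.nn.functional as F
from scipy.sparse.linalg import LinearOperator, gmres

def second_order_adv_direction(model, x, y, krylov_dim=10):
    # x: a single input tensor, y: its integer class label (0-dim tensor).
    x = x.detach().requires_grad_(True)
    loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))

    # First-order gradient w.r.t. the input; create_graph=True keeps the graph
    # so Hessian-vector products can be taken by a second backward pass.
    grad, = torch.autograd.grad(loss, x, create_graph=True)
    n = grad.numel()

    def hvp(v):
        # Hessian-vector product H v = d/dx (grad . v); H is never formed explicitly.
        v_t = torch.as_tensor(v, dtype=x.dtype, device=x.device).reshape_as(x)
        hv, = torch.autograd.grad((grad * v_t).sum(), x, retain_graph=True)
        return hv.reshape(-1).detach().cpu().numpy().astype(np.float64)

    H = LinearOperator((n, n), matvec=hvp, dtype=np.float64)
    g = grad.reshape(-1).detach().cpu().numpy().astype(np.float64)

    # GMRES builds the Krylov subspace span{g, Hg, H^2 g, ...} and returns the
    # best approximation to H^{-1} g expressible as a polynomial in H applied to g.
    d, _ = gmres(H, g, restart=krylov_dim, maxiter=1)

    # Illustrative choice: take the sign for an L_inf-bounded perturbation step.
    return torch.as_tensor(np.sign(d), dtype=x.dtype).reshape_as(x)

Each GMRES iteration costs one extra backward pass (one Hessian-vector product), so a k-dimensional Krylov subspace costs roughly k backward passes rather than forming or inverting the full Hessian; GMRES is used here instead of conjugate gradients because it does not require the matrix to be symmetric, matching the abstract's remark about the non-symmetric Hessian.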