TY - GEN
T1 - Understanding Catastrophic Overfitting in Single-step Adversarial Training
AU - Kim, Hoki
AU - Lee, Woojin
AU - Lee, Jaewook
N1 - Publisher Copyright:
Copyright © 2021, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
PY - 2021
Y1 - 2021
AB - Although fast adversarial training has demonstrated both robustness and efficiency, the problem of “catastrophic overfitting” has been observed. This is a phenomenon in which, during single-step adversarial training, robust accuracy against projected gradient descent (PGD) suddenly drops to 0% after a few epochs, whereas robust accuracy against the fast gradient sign method (FGSM) rises to 100%. In this paper, we demonstrate that catastrophic overfitting is closely tied to a characteristic of single-step adversarial training: it uses only adversarial examples with the maximum perturbation, rather than all adversarial examples along the adversarial direction, which leads to decision boundary distortion and a highly curved loss surface. Based on this observation, we propose a simple method that not only prevents catastrophic overfitting, but also overturns the belief that it is difficult to prevent multi-step adversarial attacks with single-step adversarial training.
UR - http://www.scopus.com/inward/record.url?scp=85102057056&partnerID=8YFLogxK
DO - 10.1609/aaai.v35i9.16989
M3 - Conference contribution
AN - SCOPUS:85102057056
T3 - 35th AAAI Conference on Artificial Intelligence, AAAI 2021
SP - 8119
EP - 8127
BT - 35th AAAI Conference on Artificial Intelligence, AAAI 2021
PB - Association for the Advancement of Artificial Intelligence
T2 - 35th AAAI Conference on Artificial Intelligence, AAAI 2021
Y2 - 2 February 2021 through 9 February 2021
ER -