Bridged adversarial training

Hoki Kim, Woojin Lee, Sungyoon Lee, Jaewook Lee

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

Adversarial robustness is considered a required property of deep neural networks. In this study, we discover that adversarially trained models can have significantly different characteristics in terms of margin and smoothness, even though they show similar robustness. Inspired by this observation, we investigate the effect of different regularizers and discover the negative effect of the smoothness regularizer on maximizing the margin. Based on these analyses, we propose a new method called bridged adversarial training, which mitigates the negative effect by bridging the gap between clean and adversarial examples. We provide theoretical and empirical evidence that the proposed method achieves stable and improved robustness, especially for large perturbations.
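The abstract describes "bridging the gap between clean and adversarial examples." One natural reading is to place intermediate points on the line segment between a clean input and its adversarial counterpart and regularize how predictions change along that path. The sketch below illustrates this interpolation idea only; it is not the paper's exact formulation, and the model, the number of bridge points `m`, and the use of a consecutive-KL penalty are all illustrative assumptions.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over logits z.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kl(p, q):
    # KL divergence between two strictly positive distributions.
    return float(np.sum(p * np.log(p / q)))

def bridge_points(x, x_adv, m):
    # Linear interpolation: x_0 = x (clean), ..., x_m = x_adv (adversarial).
    return [x + (k / m) * (x_adv - x) for k in range(m + 1)]

def bridged_penalty(W, x, x_adv, m=4):
    # Illustrative bridge regularizer: sum of KL divergences between the
    # predictions at consecutive interpolation points, so the model's output
    # is encouraged to change smoothly from clean to adversarial input.
    # W is a toy linear classifier's weight matrix (hypothetical model).
    preds = [softmax(W @ xk) for xk in bridge_points(x, x_adv, m)]
    return sum(kl(preds[k], preds[k + 1]) for k in range(m))
```

With `m = 1` the penalty reduces to a single clean-vs-adversarial KL term, similar in spirit to smoothness regularizers; larger `m` splits that gap into smaller steps along the bridge.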

Original language: English
Pages (from-to): 266-282
Number of pages: 17
Journal: Neural Networks
Volume: 167
State: Published - Oct 2023

Keywords

  • Adversarial defense
  • Adversarial robustness
  • Adversarial training
  • Neural networks
