Comparative Study of Adversarial Defenses: Adversarial Training and Regularization in Vision Transformers and CNNs

Hiskias Dingeto, Juntae Kim

Research output: Contribution to journal › Article › peer-review

Abstract

Transformer-based models are currently driving a significant revolution in machine learning. Among these innovations, vision transformers (ViTs) stand out for applying transformer architectures to vision tasks. By demonstrating performance as good as, if not better than, that of traditional convolutional neural networks (CNNs), ViTs have captured considerable interest in the field. This study focuses on the resilience of ViTs and CNNs to adversarial attacks. Such attacks, which introduce carefully crafted noise into a model's input to produce incorrect outputs, pose significant challenges to the reliability of machine learning systems. Our analysis evaluated the adversarial robustness of CNNs and ViTs using regularization techniques and adversarial training methods. Adversarial training, in particular, is the traditional approach to strengthening defenses against these attacks. Despite its prominence, our findings reveal that regularization techniques enable vision transformers, and in most cases CNNs, to enhance adversarial defenses more effectively. Through testing on datasets such as CIFAR-10 and CIFAR-100, we show that vision transformers, especially when combined with effective regularization strategies, achieve adversarial robustness even without adversarial training. Two main inferences can be drawn from our findings. First, they highlight how effectively vision transformers can strengthen artificial intelligence defenses against adversarial attacks. Second, they show that regularization, which requires far fewer computational resources and covers a wide range of adversarial attacks, can serve as an effective adversarial defense. Understanding and improving a model’s resilience to adversarial attacks is crucial for developing secure, dependable systems that can handle the complexity of real-world applications as artificial intelligence and machine learning technologies advance.
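To illustrate the kind of input perturbation the abstract describes, the following is a minimal sketch of a gradient-sign attack (in the style of FGSM) against a toy logistic-regression model. The attack method, the model, and all weights and values are illustrative assumptions for exposition; they are not details from the paper, which studies CNNs and ViTs on CIFAR-10/100.

```python
import numpy as np

# Toy logistic-regression "model": fixed weights, sigmoid output.
# All names and values are illustrative assumptions, not from the paper.
w = np.array([2.0, -3.0])
b = 0.5

def predict(x):
    """Probability of class 1 for input x."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm_perturb(x, y, eps):
    """Gradient-sign attack: step the input in the direction of the
    sign of the loss gradient, increasing the loss within an L-inf
    budget of eps."""
    p = predict(x)
    # Gradient of the binary cross-entropy loss w.r.t. x is (p - y) * w.
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

x = np.array([1.0, 0.2])            # clean input, true label 1
x_adv = fgsm_perturb(x, y=1.0, eps=0.5)
print(predict(x), predict(x_adv))   # model confidence drops on x_adv
```

Adversarial training, the baseline defense the study compares against, amounts to generating such perturbed inputs during training and including them in the loss; the regularization-based defenses the paper favors avoid that extra per-batch attack computation.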

Original language: English
Article number: 2534
Journal: Electronics (Switzerland)
Volume: 13
Issue number: 13
DOIs
State: Published - Jul 2024

Keywords

  • adversarial attack
  • adversarial defense
  • adversarial robustness
  • convolutional neural networks
  • machine learning
  • regularization
  • security
  • vision transformers
