A defense method against backdoor attacks on neural networks

Sara Kaviani, Samaneh Shamshiri, Insoo Sohn

Research output: Contribution to journal › Article › peer-review

15 Scopus citations

Abstract

Due to the computational complexity of artificial neural networks (ANNs), there is an increasing demand for third parties and MLaaS (machine learning as a service) providers to take charge of the training procedure. Therefore, making ANNs robust against adversarial attacks has received a lot of attention. Backdoor attacks, which cause targeted misclassification while accuracy on clean data is unaffected, are among the most effective attacks. In this paper, we propose a method called link-pruning with scale-freeness (LPSF), in which dormant but threatening links from the neurons in the input layer to other neurons of a feed-forward neural network are eliminated according to information gained from a portion of clean input data, and the essential links are strengthened by changing the fully connected networks to scale-free structures. To the best of our knowledge, this is the first defense method that makes the network significantly robust against backdoor (BD) attacks before the network is attacked. LPSF is evaluated on feed-forward neural networks with poisoned MNIST, FMNIST, handwritten Chinese character, and HODA datasets. Through the LPSF strategy, we achieve sufficiently high and stable accuracy on clean data and a reduction in attack success rate ranging from 50% to 94%.
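The pruning half of the idea described above can be illustrated with a minimal sketch: score each input-layer link by its average contribution on a sample of clean data, then zero out the weakest links. All sizes, the scoring rule, and the 30% pruning quantile here are illustrative assumptions, not the paper's actual LPSF procedure, and the scale-free rewiring step is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy first layer of a feed-forward network (hypothetical sizes).
n_in, n_hidden = 16, 8
W = rng.normal(size=(n_in, n_hidden))

# Small batch standing in for the portion of clean data LPSF inspects.
clean_x = rng.random((32, n_in))

# Score each input->hidden link by its mean absolute contribution on
# clean data; links that stay near zero are treated as "dormant".
contrib = np.abs(clean_x[:, :, None] * W[None, :, :]).mean(axis=0)

# Prune the weakest 30% of links (the fraction is an assumption; the
# paper derives which links to cut from the clean-data statistics).
threshold = np.quantile(contrib, 0.3)
mask = contrib > threshold
W_pruned = W * mask

pruned_fraction = 1.0 - mask.mean()
print(f"pruned {pruned_fraction:.0%} of input-layer links")
```

Because a backdoor trigger typically relies on links that clean inputs rarely activate, removing such dormant links before deployment is what lets the defense act before any attack is observed.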

Original language: English
Article number: 118990
Journal: Expert Systems with Applications
Volume: 213
State: Published - 1 Mar 2023

Keywords

  • Backdoor attacks
  • Feed-forward neural networks
  • Scale-free networks
