Towards undetectable adversarial attack on time series classification

Research output: Contribution to journal › Article › peer-review

Abstract

Although deep learning models have shown superior performance for time series classification, recent studies have discovered that small perturbations can fool various time series models. This vulnerability poses a serious threat that can cause malfunctions in real-world systems, such as Internet-of-Things (IoT) devices and industrial control systems. To defend these systems against adversarial time series, recent studies have proposed detection methods that exploit time series characteristics. In this paper, however, we reveal that this detection-based defense can be easily circumvented. Through an extensive investigation of existing adversarial attacks and the adversarial time series they generate, we discover that these attacks tend to ignore trends in local areas and add excessive noise to the original examples. Based on these analyses, we propose a new adaptive attack, called the trend-adaptive interval attack (TIA), which generates hardly detectable adversarial time series by adopting a trend-adaptive loss and gradient-based interval selection. Our experiments demonstrate that the proposed method successfully preserves the important features of the original time series and deceives diverse time series models without being detected.
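The abstract names two ingredients, a trend-adaptive loss and gradient-based interval selection, without giving details. The NumPy sketch below illustrates one plausible reading of each idea: picking the perturbation window where the loss gradient is largest, and penalizing perturbations that distort the local trend (first differences) inside that window. The function names `select_interval` and `trend_penalty`, and the assumption that a loss gradient over the input has already been computed, are ours for illustration; this is not the paper's actual implementation.

```python
import numpy as np

def select_interval(grad, k):
    """Gradient-based interval selection (illustrative): return the
    start index of the length-k window with the largest total
    gradient magnitude, using an O(n) sliding-window sum."""
    mag = np.abs(grad)
    csum = np.concatenate(([0.0], np.cumsum(mag)))
    window_sums = csum[k:] - csum[:-k]
    return int(np.argmax(window_sums))

def trend_penalty(x, x_adv, start, k):
    """Trend-adaptive term (illustrative): penalize disagreement
    between the first differences (local trend) of the original and
    the perturbed series inside the chosen interval."""
    d_orig = np.diff(x[start:start + k])
    d_adv = np.diff(x_adv[start:start + k])
    return float(np.mean((d_orig - d_adv) ** 2))
```

In an attack loop, the penalty would be added to the classification loss so that perturbations confined to the selected interval also track the original local trend, which is what the abstract credits for evading trend-based detectors.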

Original language: English
Article number: 122216
Journal: Information Sciences
Volume: 715
DOIs
State: Published - Oct 2025

Keywords

  • Adversarial attack
  • Deep learning
  • Detection
  • Time series
