Backdoor Attacks on Time Series: A Generative Approach
@article{Jiang2022BackdoorAO,
  title={Backdoor Attacks on Time Series: A Generative Approach},
  author={Yujing Jiang and Xingjun Ma and Sarah Monazam Erfani and James Bailey},
  journal={ArXiv},
  year={2022},
  volume={abs/2211.07915}
}
Backdoor attacks have emerged as one of the major security threats to deep learning models as they can easily control the model’s test-time predictions by pre-injecting a backdoor trigger into the model at training time. While backdoor attacks have been extensively studied on images, few works have investigated the threat of backdoor attacks on time series data. To fill this gap, in this paper we present a novel generative approach for time series backdoor attacks against deep learning-based…
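For context on the threat model, the sketch below shows classical data poisoning with a fixed additive trigger on univariate time series; the paper itself generates triggers with a generative model, which the truncated abstract does not detail. Function names such as inject_fixed_trigger and the sinusoidal pattern are illustrative assumptions, not the paper's method.

```python
import numpy as np

def inject_fixed_trigger(x, trigger, start):
    """Additively embed a fixed trigger pattern into one time series (illustrative only)."""
    x = x.copy()
    x[start:start + len(trigger)] += trigger
    return x

def poison_dataset(X, y, target_label, poison_rate=0.1, seed=0):
    """Poison a fraction of the training set: add the trigger and relabel to the target class."""
    rng = np.random.default_rng(seed)
    # Fixed sinusoidal pattern; the paper instead *generates* triggers with a generative model.
    trigger = 0.5 * np.sin(np.linspace(0, 2 * np.pi, 20))
    X, y = X.copy(), y.copy()
    idx = rng.choice(len(X), size=int(poison_rate * len(X)), replace=False)
    for i in idx:
        X[i] = inject_fixed_trigger(X[i], trigger, start=10)
        y[i] = target_label
    return X, y

# Toy usage: 1000 univariate series of length 128, 5 classes.
X = np.random.randn(1000, 128).astype(np.float32)
y = np.random.randint(0, 5, size=1000)
X_poisoned, y_poisoned = poison_dataset(X, y, target_label=0)
```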
References
Showing 1-10 of 46 references
Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks
- ECCV, 2020
Proposes Refool, a new type of backdoor attack inspired by a natural phenomenon, reflection, which plants reflections as the backdoor in a victim model; Refool attacks state-of-the-art DNNs with a high success rate and is resistant to state-of-the-art backdoor defenses.
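As a rough illustration of reflection-style poisoning (Refool itself uses physically grounded reflection models with ghosting and defocus effects), a simplified alpha-blend of a blurred reflection image might look like the following; reflection_poison and all parameter values are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def reflection_poison(image, reflection, alpha=0.4, blur_sigma=2.0):
    """Blend a blurred 'reflection' image into a clean image.
    Simplified stand-in for reflection-based triggers; Refool models physical
    reflections (ghosting, defocus) more carefully than this sketch."""
    reflection = gaussian_filter(reflection, sigma=(blur_sigma, blur_sigma, 0))
    return np.clip(image + alpha * reflection, 0.0, 1.0)

# Toy usage with random float images in [0, 1], shape (H, W, C).
clean = np.random.rand(32, 32, 3)
refl = np.random.rand(32, 32, 3)
poisoned = reflection_poison(clean, refl)
```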
Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
- ArXiv, 2017
This work considers a new type of attack, called a backdoor attack, in which the attacker's goal is to create a backdoor in a learning-based authentication system so that the system can easily be circumvented by leveraging the backdoor.
Anti-Backdoor Learning: Training Clean Models on Poisoned Data
- NeurIPS, 2021
This paper introduces the concept of anti-backdoor learning, aiming to train clean models given backdoor-poisoned data, and proposes a general learning scheme, Anti-Backdoor Learning (ABL), to automatically prevent backdoor attacks during training.
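A hedged sketch of the loss-guided intuition behind ABL-style defenses follows: backdoored samples tend to be fitted early (unusually low loss), so the lowest-loss fraction is isolated and later unlearned by gradient ascent. The helper names, the isolation ratio, and the loader layout (yielding sample indices) are assumptions, not the exact ABL procedure.

```python
import torch
import torch.nn.functional as F

def isolate_low_loss(model, loader, isolation_ratio=0.01, device="cpu"):
    """Rank training samples by loss and flag the lowest-loss fraction as suspected poison.
    Illustrative; ABL itself uses a loss-guided training stage to sharpen this separation."""
    model.eval()
    losses, indices = [], []
    with torch.no_grad():
        for idx, x, y in loader:  # assumes the loader also yields sample indices
            logits = model(x.to(device))
            loss = F.cross_entropy(logits, y.to(device), reduction="none")
            losses.append(loss.cpu())
            indices.append(idx)
    losses, indices = torch.cat(losses), torch.cat(indices)
    k = max(1, int(isolation_ratio * len(losses)))
    suspected = indices[torch.argsort(losses)[:k]]
    return set(suspected.tolist())

def unlearn_step(model, optimizer, x, y, device="cpu"):
    """Gradient *ascent* on suspected-poisoned samples to weaken the backdoor association."""
    model.train()
    loss = -F.cross_entropy(model(x.to(device)), y.to(device))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```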
Invisible Backdoor Attack with Sample-Specific Triggers
- IEEE/CVF International Conference on Computer Vision (ICCV), 2021
Inspired by recent advances in DNN-based image steganography, this work generates sample-specific invisible additive noises as backdoor triggers by encoding an attacker-specified string into benign images through an encoder-decoder network.
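A toy PyTorch sketch of the encoder half of such a scheme is shown below: a small network maps an image and an attacker-specified bit-string to a bounded additive residual. In the actual attack the encoder is trained jointly with a decoder that recovers the string; that training loop is omitted, and TriggerEncoder and its hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

class TriggerEncoder(nn.Module):
    """Toy encoder mapping (image, secret bit-string) -> small additive residual.
    The real attack trains such an encoder jointly with a decoder that recovers
    the string from the poisoned image; only the encoder half is sketched here."""
    def __init__(self, msg_len=16, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels + msg_len, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, image, message, eps=0.05):
        b, _, h, w = image.shape
        # Tile the bit-string spatially and concatenate it as extra input channels.
        msg_map = message.view(b, -1, 1, 1).expand(b, message.shape[1], h, w)
        residual = eps * self.net(torch.cat([image, msg_map], dim=1))
        return torch.clamp(image + residual, 0.0, 1.0)

# Toy usage: embed a random 16-bit attacker string into a batch of images.
enc = TriggerEncoder()
imgs = torch.rand(4, 3, 32, 32)
secret = torch.randint(0, 2, (4, 16)).float()
poisoned = enc(imgs, secret)
```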
Clean-Label Backdoor Attacks on Video Recognition Models
- IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020
This paper proposes the use of a universal adversarial trigger as the backdoor trigger for attacking video recognition models, a setting in which backdoor attacks are challenged by several strict conditions.
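As a simplified, image-level sketch of learning a universal adversarial trigger (the cited work operates on video clips with additional constraints), one can optimize a single shared patch that drives arbitrary inputs toward the target class; stamp_patch, learn_universal_trigger, and the hyperparameters below are assumptions.

```python
import torch
import torch.nn.functional as F

def stamp_patch(frames, patch):
    """Overlay a learnable patch on the top-left corner of every frame (differentiable w.r.t. patch)."""
    h, w = frames.shape[-2:]
    ps = patch.shape[-1]
    full = F.pad(torch.clamp(patch, 0.0, 1.0), (0, w - ps, 0, h - ps))
    mask = F.pad(torch.ones_like(patch), (0, w - ps, 0, h - ps))
    return frames * (1.0 - mask) + full * mask

def learn_universal_trigger(model, loader, target_class, patch_size=8,
                            steps=100, lr=0.1, device="cpu"):
    """Optimize one shared patch that pushes arbitrary inputs toward the target class.
    Simplified frame-level version; the cited work handles video clips and extra constraints."""
    model.eval()
    patch = torch.zeros(3, patch_size, patch_size, device=device, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _, (x, _) in zip(range(steps), loader):
        x = stamp_patch(x.to(device), patch)
        target = torch.full((x.size(0),), target_class, device=device)
        loss = F.cross_entropy(model(x), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach()
```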
Subnet Replacement: Deployment-stage backdoor attack against deep neural networks in gray-box setting
- ArXiv, 2021
This work proposes the Subnet Replacement Attack (SRA), which embeds backdoors into DNNs by directly modifying a limited number of model parameters, abandoning the strong white-box assumption widely adopted in existing studies.
Backdoor Attacks on Crowd Counting
- ACM Multimedia, 2022
This paper proposes two novel Density Manipulation Backdoor Attacks (DMBA- and DMBA+) that cause the attacked model to produce arbitrarily large or small density estimations, and provides an in-depth analysis of the unique challenges of backdooring crowd counting models.
Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks
- ICLR, 2021
This paper proposes a novel defense framework Neural Attention Distillation (NAD), which utilizes a teacher network to guide the finetuning of the backdoored student network on a small clean subset of data such that the intermediate-layer attention of the student network aligns with that of the teacher network.
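A sketch of the attention-alignment idea, in the style of attention transfer, is given below under the assumption that intermediate feature maps from both networks are available; attention_map, nad_loss, and the beta weight are illustrative rather than the exact NAD formulation.

```python
import torch
import torch.nn.functional as F

def attention_map(feat, p=2):
    """Collapse a feature map (B, C, H, W) into a normalized spatial attention map (B, H*W)
    via power-p channel pooling, in the style of attention transfer."""
    am = feat.abs().pow(p).mean(dim=1).flatten(1)
    return F.normalize(am, p=2, dim=1)

def nad_loss(student_feats, teacher_feats, clean_logits, labels, beta=1000.0):
    """Cross-entropy on clean data plus attention alignment between the backdoored student
    and a clean-finetuned teacher at selected layers (a sketch of the NAD-style objective)."""
    ce = F.cross_entropy(clean_logits, labels)
    distill = sum(F.mse_loss(attention_map(s), attention_map(t))
                  for s, t in zip(student_feats, teacher_feats))
    return ce + beta * distill
```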
TrojanFlow: A Neural Backdoor Attack to Deep Learning-based Network Traffic Classifiers
- IEEE Conference on Computer Communications (INFOCOM), 2022
The results show that the TrojanFlow attack is stealthy, efficient, and highly robust against existing neural backdoor mitigation schemes.
Label-Consistent Backdoor Attacks
- ArXiv, 2019
This work leverages adversarial perturbations and generative models to execute efficient, yet label-consistent, backdoor attacks, based on injecting inputs that appear plausible, yet are hard to classify, hence causing the model to rely on the (easier-to-learn) backdoor trigger.
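A hedged sketch of this clean-label recipe: perturb target-class images with a PGD-style attack against their true labels (so they become hard to classify), then stamp a visible trigger while leaving the labels untouched. label_consistent_poison and its parameters are assumptions; the cited work also considers GAN-based latent interpolation instead of PGD.

```python
import torch
import torch.nn.functional as F

def label_consistent_poison(model, x, y, eps=8/255, alpha=2/255, pgd_steps=10, patch=3):
    """Make target-class images harder to classify via a PGD-style perturbation, then stamp a
    small corner trigger while keeping the original (correct) labels. A sketch of the
    clean-label recipe, not the exact procedure of the cited work."""
    x_adv = x.clone().detach()
    for _ in range(pgd_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)      # maximize loss w.r.t. the *true* labels
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0.0, 1.0)
    x_adv[..., -patch:, -patch:] = 1.0               # simple white square as the visible trigger
    return x_adv, y                                  # labels stay consistent with image content
```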