Versatile Weight Attack via Flipping Limited Bits

@article{Bai2022VersatileWA,
  title={Versatile Weight Attack via Flipping Limited Bits},
  author={Jiawang Bai and Baoyuan Wu and Zhifeng Li and Shutao Xia},
  journal={ArXiv},
  year={2022},
  volume={abs/2207.12405}
}
To explore the vulnerability of deep neural networks (DNNs), many attack paradigms have been well studied, such as the poisoning-based backdoor attack in the training stage and the adversarial attack in the inference stage. In this paper, we study a novel attack paradigm, which modifies model parameters in the deployment stage. Considering the effectiveness and stealthiness goals, we provide a general formulation to perform the bit-flip based weight attack, where the effectiveness term could be…
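To make the attack primitive concrete, below is a minimal sketch (my own illustration, not code from the paper) of what a single bit flip does to an 8-bit quantized weight stored in two's complement, the format typical of deployed int8 models.

import numpy as np

def flip_bit(weight: np.int8, bit_index: int) -> np.int8:
    """Flip one bit (0 = LSB, 7 = sign bit) of an int8 weight."""
    raw = np.uint8(weight)                    # reinterpret the stored bits
    flipped = raw ^ np.uint8(1 << bit_index)  # flip the chosen bit
    return flipped.astype(np.int8)            # read back as two's complement

w = np.int8(23)                               # 0b00010111
print(flip_bit(w, 0))                         # LSB flip:  23 -> 22
print(flip_bit(w, 7))                         # sign flip: 23 -> -105

High-order bits (especially the sign bit) cause the largest value change per flip, which is why such attacks can do severe damage with very few flips.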

References

Showing 1-10 of 73 references

Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits

A novel attack paradigm that modifies model parameters in the deployment stage for malicious purposes is studied; the resulting optimization problem can be solved effectively and efficiently using the alternating direction method of multipliers (ADMM).
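As a rough, generic illustration of why ADMM suits this kind of problem (a toy quadratic f(x) = ||Ax - b||^2 stands in for the network loss; the dimensions, penalty rho, and iteration count are illustrative assumptions, not the paper's settings):

import numpy as np

rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 10)), rng.standard_normal(20)
rho = 1.0
x, z, u = np.zeros(10), np.zeros(10), np.zeros(10)  # u is the scaled dual

for _ in range(100):
    # x-update: argmin ||Ax - b||^2 + (rho/2)||x - z + u||^2 (closed form)
    x = np.linalg.solve(2 * A.T @ A + rho * np.eye(10),
                        2 * A.T @ b + rho * (z - u))
    # z-update: project x + u elementwise onto the binary set {0, 1}
    z = np.round(np.clip(x + u, 0.0, 1.0))
    # dual update
    u += x - z

The binary projection in the z-update is what lets ADMM handle the discrete "which bits to flip" variables that plain gradient descent cannot.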

Defending and Harnessing the Bit-Flip Based Adversarial Weight Attack

The experiments show that, for BFA to achieve the identical prediction accuracy degradation (e.g., below 11% on CIFAR-10), it requires 19.3x and 480.1x more effective malicious bit flips on ResNet-20 and VGG-11, respectively, compared to defense-free counterparts.

Bit-Flip Attack: Crushing Neural Network With Progressive Bit Search

This work proposes a novel DNN weight attack methodology called the Bit-Flip Attack (BFA), which can crush a neural network by maliciously flipping an extremely small number of bits within its weight storage memory system (i.e., DRAM).
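A simplified sketch of the bit-search idea as I read it (not the authors' released code): score every candidate flip by a first-order estimate of the loss increase it would cause, grad * delta_w, and flip the highest-scoring bit. An exhaustive scan is shown for clarity; the actual progressive bit search is more efficient, combining in-layer gradient ranking with a cross-layer comparison.

import numpy as np

def most_damaging_flip(w_int8, grad, step):
    """w_int8: int8 weight tensor; grad: dLoss/dw (same shape);
    step: quantization step mapping integer codes to real values.
    Returns ((index, bit), score) for the flip with the largest
    estimated first-order loss increase."""
    best_score, best_flip = -np.inf, None
    for idx in np.ndindex(w_int8.shape):
        raw = np.uint8(w_int8[idx])
        for bit in range(8):
            new_val = (raw ^ np.uint8(1 << bit)).astype(np.int8)
            delta_w = (int(new_val) - int(w_int8[idx])) * step
            score = float(grad[idx]) * delta_w  # first-order loss change
            if score > best_score:
                best_score, best_flip = score, (idx, bit)
    return best_flip, best_score

# Toy usage with random weights and gradients
rng = np.random.default_rng(1)
w = rng.integers(-128, 128, size=(4, 4)).astype(np.int8)
g = rng.standard_normal((4, 4))
print(most_damaging_flip(w, g, step=0.01))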

T-BFA: Targeted Bit-Flip Adversarial Weight Attack

This paper proposes the first targeted BFA-based (T-BFA) adversarial weight attack on DNN models, which can intentionally mislead selected inputs to a target output class through a novel class-dependent weight-bit ranking algorithm.

TBT: Targeted Neural Network Attack With Bit Trojan

This work proposes a novel Targeted Bit Trojan method, which can insert a targeted neural Trojan into a DNN through a bit-flip attack, and demonstrates that flipping only a few vulnerable bits identified by the method can transform a fully functional DNN model into a Trojan-infected one.

Defending Bit-Flip Attack through DNN Weight Reconstruction

This work proposes a novel weight reconstruction method as a countermeasure to adversarial attacks on neural network weights: specifically, during inference, the weights are reconstructed such that the weight perturbation due to BFA is minimized or diffused to the neighboring weights.

Fault injection attack on deep neural network

This paper investigates the impact of fault injection attacks on DNNs, wherein attackers try to misclassify a specified input pattern into an adversarial class by modifying the parameters used in the DNN via fault injection.

RADAR: Run-time Adversarial Weight Attack Detection and Accuracy Recovery

This work proposes RADAR, a run-time adversarial weight attack detection and accuracy recovery scheme that protects DNN weights against the progressive bit-flip attack (PBFA); it can restore accuracy from below 1% (caused by 10 bit flips) to above 69%.
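A hedged sketch of the general detect-and-recover idea (the group size and the simple additive checksum here are my illustrative choices, not RADAR's exact signature design): store a small checksum per weight group offline, recompute it at run time, and zero out any group whose checksum no longer matches.

import numpy as np

GROUP = 64  # weights per protected group (illustrative; must divide the size)

def checksums(w_int8):
    """8-bit additive checksum per group of the raw weight bytes."""
    groups = w_int8.astype(np.uint8).reshape(-1, GROUP)
    return groups.sum(axis=1, dtype=np.uint32) & 0xFF

def detect_and_recover(w_int8, stored):
    """Compare run-time checksums against stored ones; zero bad groups."""
    w = w_int8.copy().reshape(-1, GROUP)
    bad = checksums(w_int8) != stored
    w[bad] = 0  # zeroing a small group typically costs far less accuracy
    return w.reshape(w_int8.shape), np.flatnonzero(bad)

# Toy usage: protect weights, flip one bit, then detect and recover
rng = np.random.default_rng(2)
w = rng.integers(-128, 128, size=(8, 64)).astype(np.int8)
sig = checksums(w)
w_attacked = w.copy()
w_attacked[3, 5] = (np.uint8(w_attacked[3, 5]) ^ np.uint8(0x80)).astype(np.int8)
recovered, bad_groups = detect_and_recover(w_attacked, sig)
print(bad_groups)  # -> [3]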

Hidden Trigger Backdoor Attacks

This work proposes a novel form of backdoor attack where poisoned data look natural with correct labels and, more importantly, the attacker hides the trigger in the poisoned data and keeps it secret until test time.

Backdoor Learning: A Survey

This article summarizes and categorizes existing backdoor attacks and defenses based on their characteristics, provides a unified framework for analyzing poisoning-based backdoor attacks, and summarizes widely adopted benchmark datasets.
...