Corpus ID: 202766883

Model Compression with Adversarial Robustness: A Unified Optimization Framework

@inproceedings{Gui2019ModelCW,
  title={Model Compression with Adversarial Robustness: A Unified Optimization Framework},
  author={Shupeng Gui and Haotao Wang and Haichuan Yang and Chen Yu and Zhangyang Wang and Ji Liu},
  booktitle={NeurIPS},
  year={2019}
}
Deep model compression has been extensively studied, and state-of-the-art methods can now achieve high compression ratios with minimal accuracy loss. This paper studies model compression through a different lens: could we compress models without hurting their robustness to adversarial attacks, in addition to maintaining accuracy? Previous literature suggested that the goals of robustness and compactness might sometimes conflict. We propose a novel Adversarially Trained Model Compression (ATMC… 
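The joint goal the abstract describes, compressing a model while preserving adversarial robustness, can be illustrated with a minimal sketch. This is not the ATMC algorithm itself, just a toy numpy example assuming a logistic-regression model; the helper names `fgsm_perturb` and `project_topk` are hypothetical: an adversarial perturbation step (inner maximization) paired with a magnitude-pruning projection (compression constraint).

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One-step FGSM on a logistic-regression log-loss: move x along
    the sign of the input gradient to increase the loss."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # sigmoid prediction
    grad_x = (p - y) * w                    # d(logloss)/dx
    return x + eps * np.sign(grad_x)

def project_topk(w, k):
    """Magnitude pruning: keep only the k largest-magnitude weights,
    zeroing the rest (the compression projection step)."""
    out = np.zeros_like(w)
    idx = np.argsort(np.abs(w))[-k:]
    out[idx] = w[idx]
    return out
```

In an adversarially trained compression loop, each iteration would craft perturbed inputs with `fgsm_perturb` (or a stronger attack), take a gradient step on the adversarial loss, and re-project the weights with `project_topk` to maintain sparsity.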

Citations

RoCo-NAS: Robust and Compact Neural Architecture Search
TLDR
This paper proposes using previously generated adversarial examples as an objective to evaluate model robustness, alongside the number of floating-point operations to assess model complexity (i.e., compactness), and evolves an architecture that is up to 7% more accurate on adversarial samples than its more complex architecture counterpart.
Improving the Robustness of Model Compression by On-Manifold Adversarial Training
TLDR
Experiments show that on-manifold adversarial training can be effective in building robust classifiers, especially when the model compression rate is high; the relationship between model size and prediction performance under noisy perturbations is also investigated.
GAN Slimming: All-in-One GAN Compression by A Unified Optimization Framework
TLDR
This work proposes the first unified optimization framework combining multiple compression means for GAN compression, dubbed GAN Slimming (GS), which seamlessly integrates three mainstream compression techniques: model distillation, channel pruning and quantization, together with the GAN minimax objective, into one unified optimization form that can be efficiently optimized from end to end.
Adversarial Robust Model Compression using In-Train Pruning
TLDR
This work combines adversarial training and model pruning in a joint formulation of the fundamental learning objective during training, yielding a classifier that is robust against attacks while enabling better compression of the model and reducing its computational effort.
Robust CNN Compression Framework for Security-Sensitive Embedded Systems
TLDR
This paper proposes a compression framework to produce compressed CNNs robust against adversarial examples and provides a solution algorithm based on the proximal gradient method, which is more memory-efficient than the popular ADMM-based compression approaches.
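The proximal gradient method mentioned above alternates a plain gradient step with a proximal (projection-like) step that enforces the compression structure. A generic L1-regularized sketch in numpy, not the paper's exact algorithm; `soft_threshold` is a hypothetical helper name:

```python
import numpy as np

def soft_threshold(w, tau):
    """Proximal operator of the L1 norm: shrink each weight toward
    zero by tau, zeroing those with magnitude below tau."""
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

def proximal_gradient_step(w, grad, lr, lam):
    """One proximal-gradient step on loss(w) + lam * ||w||_1:
    a gradient step on the smooth loss, then the L1 prox."""
    return soft_threshold(w - lr * grad, lr * lam)
```

Unlike ADMM, which keeps duplicate copies of the weights plus dual variables, this scheme stores only the weights and their gradient, which is the memory advantage the paper highlights.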
Blind Adversarial Pruning: Balance Accuracy, Efficiency and Robustness
TLDR
It is concluded that the robustness of the pruned model drastically varies with different pruning processes, especially in response to attacks with large strength, and an approach called blind adversarial pruning (BAP) is proposed, which introduces the idea of blind adversarial training into the gradual pruning process.
Once-for-All Adversarial Training: In-Situ Tradeoff between Robustness and Accuracy for Free
TLDR
This paper proposes a Once-for-all Adversarial Training framework, built on an innovative model-conditional training framework, with a controlling hyper-parameter as the input, that allows for the joint trade-off among accuracy, robustness and runtime efficiency.
Generalized Depthwise-Separable Convolutions for Adversarially Robust and Efficient Neural Networks
TLDR
The method of Generalized Depthwise-Separable (GDWS) convolution is proposed – an efficient, universal, post-training approximation of a standard 2D convolution that dramatically improves the throughput of a standard pre-trained network on real-life hardware while preserving its robustness.
Masking Adversarial Damage: Finding Adversarial Saliency for Robust and Sparse Network
TLDR
A novel adversarial pruning method, Masking Adversarial Damage (MAD) is proposed that employs second-order information of adversarial loss and can accurately estimate adversarial saliency for model parameters and determine which parameters can be pruned without weakening adversarial robustness.
Non-Uniform Adversarially Robust Pruning
TLDR
It is shown that employing non-uniform compression strategies makes it possible to significantly improve clean-data accuracy as well as adversarial robustness under high overall compression, and that these strategies can be used as a plug-in replacement for the uniform compression ratios of existing state-of-the-art approaches.
...

References

Showing 1-10 of 74 references
Adversarial Robustness vs. Model Compression, or Both?
TLDR
It is found that weight pruning is essential for reducing the network model size in the adversarial setting; training a small model from scratch, even with inherited initialization from the large model, can achieve neither adversarial robustness nor high standard accuracy.
Defensive Quantization: When Efficiency Meets Robustness
TLDR
A novel Defensive Quantization (DQ) method is proposed by controlling the Lipschitz constant of the network during quantization, such that the magnitude of the adversarial noise remains non-expansive during inference.
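The two ingredients of this defense, quantization and Lipschitz-constant control, can each be sketched in a few lines of numpy. This is an illustrative sketch, not the paper's DQ method: for a linear layer the Lipschitz constant is exactly the largest singular value of the weight matrix, which power iteration estimates cheaply; the helper names are hypothetical.

```python
import numpy as np

def quantize_uniform(w, bits):
    """Uniform symmetric quantization of a weight tensor to `bits` bits."""
    levels = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / levels
    return np.round(w / scale) * scale

def spectral_norm(W, iters=50):
    """Largest singular value of W via power iteration; for a linear
    map this equals its Lipschitz constant, so penalizing or clipping
    it keeps the layer non-expansive."""
    v = np.random.default_rng(0).standard_normal(W.shape[1])
    for _ in range(iters):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    return float(u @ W @ v)
```

A DQ-style training loop would add a regularizer pushing each layer's `spectral_norm` toward 1 before applying `quantize_uniform`, so quantization noise cannot be amplified layer by layer.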
Sparse DNNs with Improved Adversarial Robustness
TLDR
It is demonstrated that an appropriately higher model sparsity implies better robustness of nonlinear DNNs, whereas over-sparsified models can find it more difficult to resist adversarial examples.
To compress or not to compress: Understanding the Interactions between Adversarial Attacks and Neural Network Compression
TLDR
The extent to which adversarial samples are transferable between uncompressed and compressed DNNs is investigated, and it is found that adversarial samples remain transferable for both pruned and quantised models.
Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks
TLDR
Two feature squeezing methods are explored: reducing the color bit depth of each pixel and spatial smoothing, which are inexpensive and complementary to other defenses, and can be combined in a joint detection framework to achieve high detection rates against state-of-the-art attacks.
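Both squeezers named above are simple image transforms, and detection amounts to comparing the model's prediction before and after squeezing. A minimal numpy sketch (the function names and the L1 disagreement threshold are illustrative assumptions, not the paper's exact implementation):

```python
import numpy as np

def squeeze_bit_depth(x, bits):
    """Reduce color bit depth: snap [0, 1] pixels onto 2**bits levels."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def median_smooth(x, k=3):
    """k x k median smoothing, with reflection padding at the edges."""
    pad = k // 2
    xp = np.pad(x, pad, mode="reflect")
    win = np.lib.stride_tricks.sliding_window_view(xp, (k, k))
    return np.median(win, axis=(-2, -1))

def detect(model, x, threshold, bits=4):
    """Flag x as adversarial if predictions on the original and any
    squeezed input disagree by more than `threshold` (L1 distance)."""
    p0 = model(x)
    d = max(np.abs(p0 - model(squeeze_bit_depth(x, bits))).sum(),
            np.abs(p0 - model(median_smooth(x))).sum())
    return d > threshold
```

The intuition: squeezing removes the fine-grained perturbation an attack relies on, so an adversarial input's prediction shifts sharply after squeezing while a clean input's barely changes.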
Stochastic Activation Pruning for Robust Adversarial Defense
TLDR
Stochastic Activation Pruning (SAP) is proposed, a mixed strategy for adversarial defense that prunes a random subset of activations (preferentially pruning those with smaller magnitude) and scales up the survivors to compensate.
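The sampling-and-rescaling scheme described above is concrete enough to sketch: each unit survives with probability proportional to its magnitude across a number of draws, and survivors are divided by their keep probability so the layer output is unbiased in expectation. A numpy sketch under those assumptions (not the authors' code):

```python
import numpy as np

def sap(h, keep_frac, rng):
    """Stochastic Activation Pruning: preferentially keep
    large-magnitude activations and rescale the survivors so the
    layer's output is unchanged in expectation."""
    p = np.abs(h) / np.abs(h).sum()          # sampling distribution
    n_draws = max(1, int(keep_frac * h.size))
    # probability that unit i survives at least one of n_draws draws
    keep_p = 1.0 - (1.0 - p) ** n_draws
    mask = rng.random(h.shape) < keep_p
    return np.where(mask, h / np.maximum(keep_p, 1e-12), 0.0)
```

The randomness is the point of the defense: an attacker cannot compute a stable gradient through a layer whose sparsity pattern changes on every forward pass.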
Universal Adversarial Perturbations Against Semantic Image Segmentation
TLDR
This work presents an approach for generating (universal) adversarial perturbations that make the network yield a desired target segmentation as output and shows empirically that there exist barely perceptible universal noise patterns which result in nearly the same predicted segmentation for arbitrary inputs.
Synthesizing Robust Adversarial Examples
TLDR
The existence of robust 3D adversarial objects is demonstrated, and the first algorithm for synthesizing examples that are adversarial over a chosen distribution of transformations is presented, which synthesizes two-dimensional adversarial images that are robust to noise, distortion, and affine transformation.
On Compressing Deep Models by Low Rank and Sparse Decomposition
TLDR
A unified framework integrating the low-rank and sparse decomposition of weight matrices with the feature map reconstructions is proposed, which can significantly reduce the parameters for both convolutional and fully-connected layers.
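The core decomposition W ≈ L + S, with L low-rank and S sparse, can be sketched with a truncated SVD plus a hard threshold on the residual. This is a simplified illustration of the idea, not the paper's feature-map-reconstruction framework:

```python
import numpy as np

def low_rank_plus_sparse(W, rank, tau):
    """Approximate W as L + S: L is the best rank-`rank` approximation
    (truncated SVD), S keeps only residual entries larger than tau."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    R = W - L
    S = np.where(np.abs(R) > tau, R, 0.0)
    return L, S
```

Storing L as its two thin factors plus S in a sparse format is what yields the parameter reduction for both convolutional and fully-connected layers.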
Towards Deep Learning Models Resistant to Adversarial Attacks
TLDR
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
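The robust-optimization view above is usually instantiated with projected gradient descent (PGD) as the inner maximizer: random start in the epsilon-ball, signed-gradient ascent steps, projection back onto the ball. A generic L-infinity sketch in numpy (the `loss_grad` callback is an assumed interface, not a real library API):

```python
import numpy as np

def pgd_attack(loss_grad, x, eps, alpha, steps, rng):
    """L-infinity PGD: start at a random point in the eps-ball around
    x, take `steps` signed-gradient ascent steps of size alpha, and
    project back onto the ball after each step."""
    x_adv = x + rng.uniform(-eps, eps, size=x.shape)
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(loss_grad(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project onto ball
    return x_adv
```

Adversarial training in this framework simply trains on `pgd_attack` outputs instead of clean inputs, minimizing the worst-case loss the attack approximates.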
...