Corpus ID: 54441974

MixTrain: Scalable Training of Verifiably Robust Neural Networks

@inproceedings{Wang2018MixTrainST,
  title={MixTrain: Scalable Training of Verifiably Robust Neural Networks},
  author={Shiqi Wang and Yizheng Chen and Ahmed Abdou and Suman Sekhar Jana},
  year={2018}
}
Making neural networks robust against adversarial inputs has resulted in an arms race between new defenses and attacks that break them. The most promising defenses, adversarially robust training and verifiably robust training, have limitations that severely restrict their practical applications. Adversarially robust training only makes networks robust against a subclass of attackers (e.g., first-order gradient-based attacks), leaving them vulnerable to other attacks. In this paper, we…

Citations
SoK: Certified Robustness for Deep Neural Networks
TLDR
This paper provides a taxonomy of robustness verification and training approaches, along with an open-sourced unified platform to evaluate 20+ representative verification and corresponding robust training approaches on a wide range of DNNs.
Adversarially Robust Classifier with Covariate Shift Adaptation
TLDR
This paper shows that a simple adaptive batch normalization (BN) technique, which re-estimates the batch-normalization parameters during inference, can significantly improve the robustness of adversarially trained models to random perturbations, including Gaussian noise.
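The adaptive BN idea above amounts to re-estimating the BatchNorm running statistics on a batch of test-time inputs before predicting on them. Below is a minimal PyTorch sketch under that reading; `model` and `x_shifted` are placeholder names, and the cumulative-averaging choice is one of several reasonable options, not necessarily the paper's exact procedure.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def adapt_batchnorm(model: nn.Module, x_shifted: torch.Tensor) -> nn.Module:
    """Re-estimate BatchNorm running statistics from a batch of
    (possibly perturbed or covariate-shifted) test inputs."""
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.reset_running_stats()   # forget the training-time statistics
            m.momentum = None         # use a cumulative moving average instead
    model.train()       # BN layers only update running stats in train mode
    model(x_shifted)    # one forward pass re-estimates mean and variance
    model.eval()        # freeze the adapted statistics for inference
    return model
```

Only the normalization statistics change; the adversarially trained weights are left untouched, which is what makes this kind of adaptation cheap at inference time.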
Adversarial Training and Provable Robustness: A Tale of Two Objectives
TLDR
A principled framework is proposed that combines adversarial training and provable robustness verification for training certifiably robust neural networks, together with a novel gradient-descent technique that eliminates bias in stochastic multi-gradients.
Second-Order Provable Defenses against Adversarial Attacks
TLDR
This paper shows that if the eigenvalues of the Hessian of the network are bounded, a robustness certificate in the $l_2$ norm can be computed efficiently using convex optimization, and derives a computationally efficient, differentiable upper bound on the curvature of a deep network.
Defending against Machine Learning based Inference Attacks via Adversarial Examples: Opportunities and Challenges
  • Jinyuan Jia, N. Gong
  • Computer Science, Mathematics
  • Adaptive Autonomous Secure Cyber Systems
  • 2020
TLDR
This chapter takes defending against inference attacks in online social networks as an example to illustrate the opportunities and challenges of defending against ML-equipped inference attacks via adversarial examples.
A Survey on Deep Learning for Ultra-Reliable and Low-Latency Communications Challenges on 6G Wireless Systems
TLDR
Improvements to the multi-level architecture enabled by artificial intelligence (AI) in URLLC are highlighted as a new technique for designing wireless networks, facilitating the creation of data-driven AI systems, 6G networks for intelligent devices, and technologies based on effective learning capability.
Notes on Lipschitz Margin, Lipschitz Margin Training, and Lipschitz Margin p-Values for Deep Neural Network Classifiers
TLDR
A local class-purity theorem for Lipschitz continuous, half-rectified DNN classifiers is provided, and how to train to achieve a classification margin around training samples is discussed.
Notes on Margin Training and Margin p-Values for Deep Neural Network Classifiers.
TLDR
A new local class-purity theorem for Lipschitz continuous DNN classifiers is provided, and how to achieve a classification margin for training samples is discussed.
Towards Robustness against Unsuspicious Adversarial Examples
TLDR
This work proposes an approach for modeling suspiciousness by leveraging cognitive salience, and shows that adversarial training with dual-perturbation attacks yields classifiers that are more robust to such attacks than state-of-the-art robust learning approaches, while remaining comparable in robustness to conventional attacks.

References

Showing 1-10 of 70 references
Towards Deep Learning Models Resistant to Adversarial Attacks
TLDR
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
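The robust-optimization view in this entry frames training as a saddle-point problem: an inner maximization searches for a worst-case perturbation inside an $l_\infty$ ball, and the outer minimization updates the weights on those worst-case inputs. The following PyTorch sketch of that loop is illustrative only; the step sizes and iteration counts are not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Inner maximization: projected gradient ascent inside an l_inf ball of radius eps."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        # Signed ascent step, then project back onto the eps-ball and the valid pixel range.
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
        delta = ((x + delta).clamp(0, 1) - x).requires_grad_(True)
    return (x + delta).detach()

def robust_step(model, optimizer, x, y):
    """Outer minimization: one adversarial-training update on the worst-case inputs."""
    x_adv = pgd_linf(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```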
Certified Robustness to Adversarial Examples with Differential Privacy
TLDR
This paper presents the first certified defense that both scales to large networks and datasets and applies broadly to arbitrary model types, based on a novel connection between robustness against adversarial examples and differential privacy, a cryptographically-inspired privacy formalism.
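The connection to differential privacy in this entry is realized by randomizing the computation (noise added to the input or an early layer) and reasoning about the expected, smoothed output; the noise scale determines how large a perturbation can be certified. The sketch below shows only the prediction-averaging side with Gaussian noise on the input; `sigma` and `n_draws` are illustrative parameters and the certification math is omitted entirely.

```python
import torch

@torch.no_grad()
def noisy_expected_prediction(model, x, sigma=0.25, n_draws=100):
    """Average softmax outputs over Gaussian-perturbed copies of the input.
    The DP-based certificate reasons about this expected (smoothed) prediction."""
    probs = None
    for _ in range(n_draws):
        noisy = x + sigma * torch.randn_like(x)
        p = torch.softmax(model(noisy), dim=1)
        probs = p if probs is None else probs + p
    return probs / n_draws
```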
Ensemble Adversarial Training: Attacks and Defenses
TLDR
This work finds that adversarial training remains vulnerable to black-box attacks, in which perturbations computed on undefended models are transferred, as well as to a powerful novel single-step attack that escapes the non-smooth vicinity of the input data via a small random step.
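The "small random step" attack mentioned in this entry first jumps to a random nearby point and only then takes a single gradient-sign step; the random step is what moves it out of the sharply curved, gradient-masked neighborhood of the data point. A short PyTorch sketch follows; `eps` and `alpha` are illustrative values.

```python
import torch
import torch.nn.functional as F

def rand_fgsm(model, x, y, eps=8/255, alpha=4/255):
    """Single-step attack: random step of size alpha, then one sign-gradient step of size eps - alpha."""
    # Random step away from the (possibly non-smooth) vicinity of x.
    x_rand = (x + alpha * torch.sign(torch.randn_like(x))).clamp(0, 1)
    x_rand.requires_grad_(True)
    loss = F.cross_entropy(model(x_rand), y)
    grad, = torch.autograd.grad(loss, x_rand)
    # Spend the remaining perturbation budget on a single gradient-sign step.
    x_adv = x_rand + (eps - alpha) * grad.sign()
    return x_adv.clamp(0, 1).detach()
```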
Towards Evaluating the Robustness of Neural Networks
TLDR
It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that succeed on both distilled and undistilled neural networks with 100% probability.
Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks
TLDR
The study shows that defensive distillation can reduce the effectiveness of adversarial sample creation from 95% to less than 0.5% on a studied DNN, and analytically investigates the generalizability and robustness properties granted by the use of defensive distillation when training DNNs.
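Defensive distillation, proposed in this entry and shown ineffective by the attack in the preceding one, trains a second "distilled" network on the softened, high-temperature probability outputs of the first. A compact sketch of the distillation loss is given below; the temperature `T` is an illustrative value, and the surrounding training loop is omitted.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=20.0):
    """Cross-entropy of the student against the teacher's softened probabilities.
    Both networks are evaluated at temperature T during training."""
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    log_probs = F.log_softmax(student_logits / T, dim=1)
    return -(soft_targets * log_probs).sum(dim=1).mean()
```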
Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach
TLDR
This paper provides a theoretical justification for converting robustness analysis into a local Lipschitz constant estimation problem, and proposes to use extreme value theory for efficient evaluation, yielding a novel robustness metric called CLEVER (Cross Lipschitz Extreme Value for nEtwork Robustness).
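CLEVER estimates a local Lipschitz constant of the class-margin function around an input by sampling gradient norms and fitting an extreme-value (reverse Weibull) distribution to batch maxima. The toy sketch below keeps only the sampling idea and replaces the extreme-value fit with a raw maximum, so it is not the actual CLEVER estimator; it assumes a single input with batch dimension 1, and all names and the sampling scheme are illustrative.

```python
import torch

def rough_clever_score(model, x, target_class, other_class,
                       radius=0.5, n_samples=256, p=2):
    """Toy robustness estimate: class margin divided by the largest sampled
    gradient norm of the margin function inside an l_p ball around x.
    The real CLEVER metric fits a reverse Weibull distribution to batch
    maxima instead of taking this raw maximum."""
    q = float('inf') if p == 1 else p / (p - 1)   # dual norm for the gradient
    max_grad_norm = 0.0
    for _ in range(n_samples):
        # Draw a rough sample inside the l_p ball of the given radius around x.
        noise = torch.randn_like(x)
        noise = radius * torch.rand(()) * noise / noise.norm(p=p)
        xs = (x + noise).requires_grad_(True)
        out = model(xs)
        margin = out[0, target_class] - out[0, other_class]
        grad, = torch.autograd.grad(margin, xs)
        max_grad_norm = max(max_grad_norm, grad.norm(p=q).item())
    with torch.no_grad():
        out = model(x)
        margin = (out[0, target_class] - out[0, other_class]).item()
    return margin / max_grad_norm   # rough lower bound on the distortion needed
```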
Efficient Defenses Against Adversarial Attacks
TLDR
This work proposes a new defense method, based on practical observations, that is easy to integrate into models and performs better than state-of-the-art defenses against adversarial attacks on deep neural networks.
Certifying Some Distributional Robustness with Principled Adversarial Training
TLDR
This work provides a training procedure that augments model parameter updates with worst-case perturbations of training data, and efficiently certifies robustness for the population loss by considering a Lagrangian penalty formulation of perturbing the underlying data distribution in a Wasserstein ball.
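The Lagrangian penalty formulation in this entry replaces the hard Wasserstein-ball constraint with a penalty term: the inner problem looks for a perturbed input z that raises the loss minus gamma times its squared distance from the original point, and the outer step trains on that z. Below is a minimal gradient-ascent sketch of the inner problem only; `gamma`, the learning rate, and the iteration count are illustrative.

```python
import torch
import torch.nn.functional as F

def wrm_inner_max(model, x, y, gamma=1.3, lr=0.1, steps=15):
    """Approximate argmax_z  loss(model(z), y) - gamma * ||z - x||^2,
    the Lagrangian relaxation of the Wasserstein-ball worst case."""
    z = x.clone().detach().requires_grad_(True)
    opt = torch.optim.SGD([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        penalty = gamma * (z - x).pow(2).flatten(1).sum(dim=1).mean()
        objective = F.cross_entropy(model(z), y) - penalty
        (-objective).backward()   # SGD minimizes, so negate to ascend
        opt.step()
    return z.detach()

# The outer training step then minimizes the ordinary loss on wrm_inner_max(model, x, y).
```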
Certified Defenses against Adversarial Examples
TLDR
This work proposes a method based on a semidefinite relaxation that outputs a certificate that, for a given network and test input, no attack can force the error to exceed a certain value, and provides an adaptive regularizer that encourages robustness against all attacks.
Training verified learners with learned verifiers
TLDR
Experiments show that the predictor-verifier architecture, which can train networks to state-of-the-art verified robustness to adversarial examples with much shorter training times, can be scaled to produce the first known verifiably robust networks for CIFAR-10.