Corpus ID: 195346465

A Learning Approach to Secure Learning

@article{Nguyen2017ALA,
  title={A Learning Approach to Secure Learning},
  author={Linh Nguyen and Arunesh Sinha},
  journal={ArXiv},
  year={2017},
  volume={abs/1709.04447}
}
Deep Neural Networks (DNNs) have been shown to be vulnerable to adversarial examples, which are data points cleverly constructed to fool the classifier. Such attacks can be devastating in practice, especially as DNNs are applied to increasingly critical tasks such as image recognition in autonomous driving. In this paper, we introduce a new perspective on the problem. We do so by first defining robustness of a classifier to adversarial exploitation. Next, we show that the problem of…
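
The paper's formal definition is cut off above; for orientation only, a standard pointwise notion of adversarial robustness (not necessarily the exact definition used in the paper) can be written as:

```latex
% A classifier f is \epsilon-robust at a correctly classified point (x, y) if no
% perturbation inside an \ell_p ball of radius \epsilon changes its prediction:
f(x) = y \quad\text{and}\quad \forall\, \delta:\ \|\delta\|_p \le \epsilon \;\Rightarrow\; f(x + \delta) = y
```
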
Citations

Machine vs Machine: Defending Classifiers Against Learning-based Adversarial Attacks
TLDR: A game framework is proposed to formulate the interaction of attacks and defenses, together with the natural notion of the best worst-case defense and attack, and simple algorithms, motivated by sensitivity penalization, for numerically finding those solutions.
Machine vs Machine: Minimax-Optimal Defense Against Adversarial Examples
TLDR: It is demonstrated that the defense found by numerical minimax optimization is indeed more robust than non-minimax defenses, and directions for improving the result toward robustness against multiple classes of attacks are discussed.

References

Showing 1-10 of 29 references
Towards Deep Neural Network Architectures Robust to Adversarial Examples
TLDR: Deep Contractive Network is proposed, a model with a new end-to-end training procedure that includes a smoothness penalty inspired by the contractive autoencoder (CAE), to increase the network's robustness to adversarial examples without a significant performance penalty.
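
The Deep Contractive Network penalizes layer-wise Jacobians; the sketch below is a simplified stand-in that regularizes only the gradient of the loss with respect to the input. All names (`model`, `x`, `y`, `lam`) are placeholders, not the authors' code.

```python
# Minimal sketch of a contractive-style smoothness penalty (simplified:
# input-gradient regularization rather than the paper's layer-wise Jacobians).
import torch
import torch.nn.functional as F

def loss_with_smoothness_penalty(model, x, y, lam=0.1):
    x = x.clone().requires_grad_(True)
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)
    # Gradient of the loss w.r.t. the input, kept in the graph so the penalty
    # is itself differentiable w.r.t. the model parameters.
    (grad_x,) = torch.autograd.grad(task_loss, x, create_graph=True)
    penalty = grad_x.pow(2).sum(dim=tuple(range(1, grad_x.dim()))).mean()
    return task_loss + lam * penalty
```
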
Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples
TLDR: This work introduces the first practical demonstration that the cross-model transfer phenomenon enables attackers to control a remotely hosted DNN with no access to the model, its parameters, or its training data: a substitute model is fitted to the remote model's input-output pairs, and adversarial examples are then crafted against this auxiliary model.
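
A hedged sketch of the substitute-model workflow described above. `query_remote`, `substitute`, and `x_pool` are hypothetical placeholders (not the paper's API), and the paper's Jacobian-based dataset augmentation is omitted for brevity.

```python
# Train a local substitute on the remote model's labels, then craft FGSM
# perturbations on the substitute and rely on cross-model transfer.
import torch
import torch.nn.functional as F

def train_substitute(substitute, query_remote, x_pool, epochs=5, lr=1e-3):
    opt = torch.optim.Adam(substitute.parameters(), lr=lr)
    y_remote = query_remote(x_pool)      # only the remote model's labels are observable
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(substitute(x_pool), y_remote)
        loss.backward()
        opt.step()
    return substitute

def fgsm_on_substitute(substitute, x, y, eps=0.1):
    x = x.clone().requires_grad_(True)
    F.cross_entropy(substitute(x), y).backward()
    # Assumes pixel values in [0, 1]; the crafted examples often transfer
    # to the remotely hosted DNN.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```
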
Adversarial Transformation Networks: Learning to Generate Adversarial Examples
TLDR: This work efficiently trains feed-forward neural networks in a self-supervised manner to generate adversarial examples against a target network or set of networks, and calls such a network an Adversarial Transformation Network (ATN).
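
A hedged sketch of the ATN idea under stated assumptions: `generator` and `target` are placeholder modules, the target network is assumed frozen, the optimizer holds only the generator's parameters, and the choice of the class to fool the target into is purely illustrative (the paper's reranking target function is not reproduced).

```python
import torch
import torch.nn.functional as F

def atn_step(generator, target, x, y, opt, beta=0.1):
    """One training step for a feed-forward network that emits adversarial images."""
    opt.zero_grad()
    x_adv = generator(x).clamp(0, 1)       # assumes inputs in [0, 1]
    logits = target(x_adv)                 # target is fixed, not trained
    wrong = (y + 1) % logits.size(1)       # illustrative adversarial class
    fool_loss = F.cross_entropy(logits, wrong)
    stay_close = F.mse_loss(x_adv, x)      # keep x_adv near the original input
    loss = fool_loss + beta * stay_close
    loss.backward()
    opt.step()
    return loss.item()
```
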
Universal Adversarial Perturbations
TLDR: The surprising existence of universal perturbations reveals important geometric correlations among the high-dimensional decision boundaries of classifiers and outlines a potential security breach: single directions in the input space that adversaries can exploit to break a classifier on most natural images.
Towards Evaluating the Robustness of Neural Networks
TLDR: It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that succeed on both distilled and undistilled neural networks with 100% probability.
The Limitations of Deep Learning in Adversarial Settings
TLDR: This work formalizes the space of adversaries against deep neural networks (DNNs) and introduces a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs.
Evaluation of Defensive Methods for DNNs against Multiple Adversarial Evasion Models
Due to deep cascades of nonlinear units, deep neural networks (DNNs) can automatically learn non-local generalization priors from data and have achieved high performance in various applications.
Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
TLDR: It is concluded that adversarial examples are significantly harder to detect than previously appreciated, and that the properties believed to be intrinsic to adversarial examples are in fact not.
A General Retraining Framework for Scalable Adversarial Classification
TLDR: It is shown that, under natural conditions, the retraining framework minimizes an upper bound on optimal adversarial risk, and how to extend this result to account for approximations of evasion attacks.
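
A hedged sketch of the retraining loop implied by the summary; `fit` and `attack` are placeholder callables (any training routine and any evasion attack), not the framework's actual interface.

```python
import torch

def adversarial_retrain(model, fit, attack, x_train, y_train, rounds=3):
    # Iteratively augment the training set with adversarial examples found
    # against the current model, then refit on the augmented data.
    x_aug, y_aug = x_train, y_train
    for _ in range(rounds):
        fit(model, x_aug, y_aug)                 # standard training step(s)
        x_adv = attack(model, x_train, y_train)  # evade the current model
        x_aug = torch.cat([x_aug, x_adv])
        y_aug = torch.cat([y_aug, y_train])      # adversarial points keep their labels
    return model
```
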
Explaining and Harnessing Adversarial Examples
TLDR: It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, a view supported by new quantitative results and by the first explanation of the most intriguing fact about adversarial examples: their generalization across architectures and training sets.
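
The linearity argument leads directly to the fast gradient sign method introduced in that paper, which perturbs the input along the sign of the loss gradient:

```latex
x_{\text{adv}} = x + \epsilon \cdot \operatorname{sign}\!\left(\nabla_{x} J(\theta, x, y)\right)
```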