
Cleverhans V0.1: an Adversarial Machine Learning Library

@article{Goodfellow2016CleverhansVA,
  title={Cleverhans V0.1: an Adversarial Machine Learning Library},
  author={Ian J. Goodfellow and Nicolas Papernot and Patrick McDaniel},
  journal={ArXiv},
  year={2016},
  volume={abs/1610.00768}
}
cleverhans is a software library that provides standardized reference implementations of adversarial example construction techniques and adversarial training. The library may be used to develop more robust machine learning models and to provide standardized benchmarks of models' performance in the adversarial setting. Benchmarks constructed without a standardized implementation of adversarial example construction are not comparable to each other, because a good result may indicate a robust model or it may merely indicate a weak implementation of the adversarial example construction procedure.
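To make concrete what "adversarial example construction" means here, below is a minimal sketch of the Fast Gradient Sign Method (FGSM) of Goodfellow et al., one of the attacks the library implements, written against a toy NumPy logistic-regression model rather than against the cleverhans API. The toy weights, the loss_grad_wrt_input and fgsm helpers, and the chosen eps are illustrative assumptions for this sketch, not the library's actual interface.

    # Illustrative sketch (not the cleverhans API): FGSM on a toy logistic-regression model.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy binary classifier: p = sigmoid(w.x + b); pretend w and b are trained weights.
    w = rng.normal(size=20)
    b = 0.1

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def loss_grad_wrt_input(x, y):
        """Gradient of the sigmoid cross-entropy loss with respect to the input x."""
        p = sigmoid(w @ x + b)
        return (p - y) * w  # d(loss)/dx for sigmoid + cross-entropy

    def fgsm(x, y, eps=0.1):
        """Perturb x in the direction of the sign of the loss gradient (FGSM)."""
        return x + eps * np.sign(loss_grad_wrt_input(x, y))

    # A benign input and its adversarial counterpart; the perturbation raises the loss,
    # pushing the prediction away from the true label y = 1.
    x = rng.normal(size=20)
    y = 1.0
    x_adv = fgsm(x, y, eps=0.25)

    print("clean prediction:      ", sigmoid(w @ x + b))
    print("adversarial prediction:", sigmoid(w @ x_adv + b))

Adversarial training, which the library also supports, amounts to mixing examples crafted this way into each training batch so the model learns to resist them.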
Citations

Adversarial Minimax Training for Robustness Against Adversarial Examples
Simulation results show that the proposed method is more robust than conventional adversarial training against adversarial examples generated by several black-box attack methods.
Security Matters: A Survey on Adversarial Machine Learning
This paper gives a comprehensive introduction to a range of aspects of adversarial deep learning, including its foundations, typical attack and defense strategies, and several extended studies.
MULDEF: Multi-model-based Defense Against Adversarial Examples for Neural Networks
The evaluation results show that MulDef (with only up to 5 models in the family) can substantially improve the target model's accuracy on adversarial examples by 22-74% in a white-box attack scenario, while maintaining similar accuracy on legitimate examples.
Machine Learning as an Adversarial Service: Learning Black-Box Adversarial Examples
Introduces a direct attack against black-box neural networks that uses another attacker neural network to learn to craft adversarial examples which transfer to different machine learning models such as random forests, SVMs, and k-nearest neighbors.
Learning Transferable Adversarial Examples via Ghost Networks
Ghost Networks are proposed to improve the transferability of adversarial examples; evaluated by reproducing the NeurIPS 2017 adversarial competition, the method outperforms the No. 1 attack submission by a large margin, demonstrating its effectiveness and efficiency.
On the Robustness of Domain Constraints
This paper develops techniques to learn domain constraints from data, shows how the learned constraints can be integrated into the adversarial crafting process, and evaluates the efficacy of the approach on network intrusion and phishing datasets.
Distributionally Robust Deep Learning as a Generalization of Adversarial Training
Machine learning models are vulnerable to adversarial attacks at test time: a correctly classified test example can be slightly perturbed to cause a misclassification. Training models that are robust …
Ensemble Methods as a Defense to Adversarial Perturbations Against Deep Neural Networks
It is empirically shown that ensemble methods not only improve the accuracy of neural networks on test data but also increase their robustness against adversarial perturbations.
The Space of Transferable Adversarial Examples
It is found that adversarial examples span a contiguous subspace of large (~25) dimensionality, which indicates that it may be possible to design defenses against transfer-based attacks, even for models that are vulnerable to direct attacks.
Adversarial Examples: Attacks and Defenses for Deep Learning
The methods for generating adversarial examples for DNNs are summarized, a taxonomy of these methods is proposed, and three major challenges in adversarial examples and their potential solutions are discussed.

References

Explaining and Harnessing Adversarial Examples
It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature; this view is supported by new quantitative results and provides the first explanation of the most intriguing fact about adversarial examples: their generalization across architectures and training sets.
The Limitations of Deep Learning in Adversarial Settings
This work formalizes the space of adversaries against deep neural networks (DNNs) and introduces a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs.
Evasion Attacks against Machine Learning at Test Time
This work presents a simple but effective gradient-based approach that can be exploited to systematically assess the security of several widely used classification algorithms against evasion attacks.
Intriguing properties of neural networks
It is found that there is no distinction between individual high-level units and random linear combinations of high-level units according to various methods of unit analysis, and it is suggested that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks.
Theano: A Python framework for fast computation of mathematical expressions
The performance of Theano is compared against Torch7 and TensorFlow on several machine learning models, and recently introduced functionalities and improvements are discussed.
TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems
The TensorFlow interface, and an implementation of that interface built at Google, are described; the system has been used for conducting research and for deploying machine learning systems into production across more than a dozen areas of computer science and other fields.
F. Chollet. Keras. GitHub repository: https://github.com/fchollet/keras, 2015.
I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.