Corpus ID: 4302773

Attacking the Madry Defense Model with L1-based Adversarial Examples

@article{Sharma2018AttackingTM,
  title={Attacking the Madry Defense Model with L1-based Adversarial Examples},
  author={Yash Sharma and Pin-Yu Chen},
  journal={ArXiv},
  year={2018},
  volume={abs/1710.10733}
}
The Madry Lab recently hosted a competition designed to test the robustness of their adversarially trained MNIST model. [...] These results call into question the use of $L_\infty$ as a sole measure for visual distortion, and further demonstrate the power of EAD at generating robust adversarial examples.
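For context, EAD is the elastic-net attack listed under References below. The display below is a brief restatement of the attack objective as formulated in that reference, using its notation ($x_0$ is the original input, $t$ the target class, and $c$, $\beta$, $\kappa$ the attack hyperparameters); it is included only to clarify why the resulting examples are characterized as $L_1$-based.

$$
\min_{x}\; c \cdot f(x, t) \;+\; \beta \,\lVert x - x_0 \rVert_1 \;+\; \lVert x - x_0 \rVert_2^2
\qquad \text{subject to } x \in [0, 1]^p,
$$

where $f(x, t) = \max\bigl\{\max_{j \neq t} [\mathrm{Logit}(x)]_j - [\mathrm{Logit}(x)]_t,\, -\kappa\bigr\}$ is the targeted attack loss. Larger $\beta$ places more weight on $L_1$ distortion (encouraging sparse perturbations), while $\beta = 0$ reduces to an $L_2$-style attack.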
Citations

Adversarial Training and Robustness for Multiple Perturbations
Curriculum Adversarial Training
On the Limitation of MagNet Defense Against L1-Based Adversarial Examples
Towards Deep Learning Models Resistant to Large Perturbations
Explore the Transformation Space for Adversarial Images
Adversarial Learning Guarantees for Linear Hypotheses and Neural Networks
Adversarial Examples Are a Natural Consequence of Test Error in Noise
On the Utility of Conditional Generation Based Mutual Information for Characterizing Adversarial Subspaces

References

Showing 1-10 of 16 references
EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples
Towards Deep Learning Models Resistant to Adversarial Attacks
Ensemble Adversarial Training: Attacks and Defenses
Towards Evaluating the Robustness of Neural Networks
Delving into Transferable Adversarial Examples and Black-box Attacks
Adversarial Machine Learning at Scale
Intriguing properties of neural networks
Adam: A Method for Stochastic Optimization