Corpus ID: 146121275

Better the Devil you Know: An Analysis of Evasion Attacks using Out-of-Distribution Adversarial Examples

@article{Sehwag2019BetterTD,
  title={Better the Devil you Know: An Analysis of Evasion Attacks using Out-of-Distribution Adversarial Examples},
  author={Vikash Sehwag and Arjun Nitin Bhagoji and Liwei Song and Chawin Sitawarin and Daniel Cullina and Mung Chiang and Prateek Mittal},
  journal={arXiv preprint arXiv:1905.01726},
  year={2019}
}
A large body of recent work has investigated the phenomenon of evasion attacks using adversarial examples for deep learning systems, where the addition of norm-bounded perturbations to test inputs leads to incorrect output classification. Previous work has investigated this phenomenon in closed-world systems, where training and test inputs follow a pre-specified distribution. However, real-world implementations of deep learning applications, such as autonomous driving and content…
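The abstract above describes evasion attacks as the addition of norm-bounded perturbations to test inputs. A minimal sketch of that idea, using a fast-gradient-sign step (in the style of Goodfellow et al.) against a toy logistic-regression classifier rather than any model from the paper; all weights and inputs are synthetic, chosen purely for illustration:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sign(g):
    # Sign of a scalar gradient component: -1, 0, or +1.
    return (g > 0) - (g < 0)

def fgsm_perturb(x, w, b, y_true, epsilon):
    """Return x plus an L-infinity-bounded step (|delta_i| <= epsilon)
    that increases the cross-entropy loss of p = sigmoid(w.x + b)."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    # Gradient of the cross-entropy loss with respect to the input x:
    grad_x = [(p - y_true) * wi for wi in w]
    return [xi + epsilon * sign(gi) for xi, gi in zip(x, grad_x)]

def loss(x, w, b, y_true):
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return -(y_true * math.log(p) + (1 - y_true) * math.log(1 - p))

random.seed(0)
w = [random.gauss(0, 1) for _ in range(8)]  # hypothetical trained weights
b = 0.0
x = [random.gauss(0, 1) for _ in range(8)]  # a clean test input
y = 1.0                                     # assume its true label is class 1

x_adv = fgsm_perturb(x, w, b, y, epsilon=0.5)
print(loss(x, w, b, y), "->", loss(x_adv, w, b, y))  # loss strictly increases
```

For a linear model the sign step provably raises the loss, which is why even this tiny sketch reliably degrades the classifier's confidence in the true class.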
