Corpus ID: 204743873

Instance adaptive adversarial training: Improved accuracy tradeoffs in neural nets

@article{Balaji2019InstanceAA,
  title={Instance adaptive adversarial training: Improved accuracy tradeoffs in neural nets},
  author={Yogesh Balaji and Tom Goldstein and Judy Hoffman},
  journal={ArXiv},
  year={2019},
  volume={abs/1910.08051}
}
  • Computer Science, Mathematics
  • Adversarial training is by far the most successful strategy for improving the robustness of neural networks to adversarial attacks. Despite its success as a defense mechanism, adversarial training fails to generalize well to the unperturbed test set. We hypothesize that this poor generalization is a consequence of adversarial training with a uniform perturbation radius around every training sample. Samples close to the decision boundary can be morphed into a different class under a small perturbation budget…
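The abstract's core idea — adapting each training sample's perturbation radius instead of using one uniform budget — can be sketched as a simple per-sample rule: grow a sample's radius if the model still classifies it correctly under an attack at a slightly larger radius, shrink it if the attack succeeds at the current radius. The toy below illustrates that rule with a linear logistic model and a one-step sign attack in NumPy; it is an illustrative sketch, not the authors' implementation, and the function names, the `delta` step, and the clipping bounds are all assumptions.

```python
import numpy as np

def sign_attack(w, b, x, y, eps):
    """One-step L-inf sign attack on a linear logistic model f(x) = w@x + b.
    For label y in {-1, +1}, the logistic-loss gradient w.r.t. x is
    -y * w * sigmoid(-y * f(x))."""
    margin = y * (w @ x + b)
    grad = -y * w / (1.0 + np.exp(margin))
    return x + eps * np.sign(grad)

def adapt_epsilons(w, b, X, Y, eps, delta=0.05, eps_min=0.0, eps_max=1.0):
    """Per-sample radius update (sketch of the instance-adaptive idea):
    grow eps[i] if the model survives an attack at eps[i] + delta,
    shrink it if the model already fails at eps[i]."""
    new_eps = eps.copy()
    for i, (x, y) in enumerate(zip(X, Y)):
        x_hard = sign_attack(w, b, x, y, eps[i] + delta)
        if y * (w @ x_hard + b) > 0:
            # Still correctly classified at the larger radius: grow the budget.
            new_eps[i] = min(eps[i] + delta, eps_max)
        else:
            x_cur = sign_attack(w, b, x, y, eps[i])
            if y * (w @ x_cur + b) <= 0:
                # Misclassified even at the current radius: shrink the budget.
                new_eps[i] = max(eps[i] - delta, eps_min)
    return new_eps

# A point far from the boundary earns a larger radius; a point near it gets
# a smaller one -- the asymmetry the abstract argues a uniform radius misses.
w, b = np.array([1.0, 0.0]), 0.0
X = np.array([[2.0, 0.0], [0.05, 0.0]])
Y = np.array([1.0, 1.0])
eps = adapt_epsilons(w, b, X, Y, np.array([0.1, 0.1]))
```

In full instance-adaptive adversarial training this update would run inside the training loop, with the inner attack being multi-step PGD rather than a single sign step.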

    Citations

    Publications citing this paper (11 citations in total; selection below):

    Adversarial Training against Location-Optimized Adversarial Patches

    Intriguing Properties of Adversarial Training at Scale

    Smoothed Inference for Adversarially-Trained Models

    Colored Noise Injection for Training Adversarially Robust Neural Networks

    CAT: Customized Adversarial Training for Improved Robustness

    Improving Adversarial Robustness Through Progressive Hardening

    References

    Publications referenced by this paper (22 references in total; selection below):

    Feature Denoising for Improving Adversarial Robustness

    Adversarial vulnerability for any classifier

    Towards Evaluating the Robustness of Neural Networks

    Spatially Transformed Adversarial Examples
