Corpus ID: 173990929

Learning Perceptually-Aligned Representations via Adversarial Robustness

@article{Engstrom2019LearningPR,
  title={Learning Perceptually-Aligned Representations via Adversarial Robustness},
  author={Logan Engstrom and Andrew Ilyas and Shibani Santurkar and Dimitris Tsipras and Brandon Tran and Aleksander Madry},
  journal={ArXiv},
  year={2019},
  volume={abs/1906.00945}
}
  • Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Brandon Tran, Aleksander Madry
  • Published 2019
  • Computer Science, Mathematics
  • ArXiv
  • Many applications of machine learning require models that are human-aligned, i.e., that make decisions based on human-meaningful information about the input. We identify the pervasive brittleness of deep networks' learned representations as a fundamental barrier to attaining this goal. We then re-cast robust optimization as a tool for enforcing human priors on the features learned by deep neural networks. The resulting robust feature representations turn out to be significantly more aligned…
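
The "robust optimization" the abstract re-casts as a feature prior is the standard min-max adversarial training objective: minimize over model parameters the expected worst-case loss max_{||delta|| <= eps} L(f_theta(x + delta), y). Below is a minimal PyTorch sketch of that objective with the inner maximum approximated by projected gradient descent (PGD); the L-infinity threat model, the hyperparameters (eps, step size, iteration count), and the stub classifier are illustrative assumptions, not the paper's exact training setup.

import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, step=2/255, iters=7):
    """Approximate the inner maximization with L-infinity PGD."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(iters):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        # Gradient ascent on the loss, then project back into the eps-ball.
        delta = (delta + step * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    # Keep the perturbed input in the valid image range [0, 1].
    return (x + delta).clamp(0, 1).detach()

def robust_training_step(model, optimizer, x, y):
    """One outer-minimization step: train on adversarially perturbed inputs."""
    model.eval()  # fix BN/dropout statistics while crafting the attack
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage on random data; any image classifier can replace the linear stub.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
print(robust_training_step(model, opt, x, y))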
