Corpus ID: 214612493

Robust Out-of-distribution Detection in Neural Networks

@article{Chen2020RobustOD,
  title={Robust Out-of-distribution Detection in Neural Networks},
  author={J. Chen and Yixuan Li and X. Wu and Yingyu Liang and S. Jha},
  journal={ArXiv},
  year={2020},
  volume={abs/2003.09711}
}
  • J. Chen, Yixuan Li, X. Wu, Yingyu Liang, S. Jha
  • Published 2020
  • Computer Science, Mathematics
  • ArXiv
  • Detecting anomalous inputs is critical for safely deploying deep learning models in the real world. Existing approaches for detecting out-of-distribution (OOD) examples work well when evaluated on natural samples drawn from a sufficiently different distribution than the training data distribution. However, in this paper, we show that existing detection mechanisms can be extremely brittle when evaluated on inputs with minimal adversarial perturbations that do not change their semantics…
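
The excerpt above contrasts OOD detection on natural samples with detection on adversarially perturbed inputs. The following is a minimal sketch of that kind of evaluation, assuming a PyTorch image classifier and the maximum-softmax-probability (MSP) baseline score as the detector; the toy model, the random stand-in OOD batch, and the PGD-style attack budget (eps, steps, step_size) are illustrative assumptions, not the paper's own method or settings.

# Minimal sketch (not the paper's method): score OOD inputs with the MSP
# baseline, then craft a small bounded perturbation that raises the score so
# the same inputs look in-distribution to the detector.
import torch
import torch.nn as nn
import torch.nn.functional as F


def msp_score(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """OOD score: maximum softmax probability; low values suggest OOD."""
    logits = model(x)
    return F.softmax(logits, dim=1).max(dim=1).values


def perturb_ood_input(model: nn.Module, x: torch.Tensor,
                      eps: float = 8 / 255, steps: int = 10,
                      step_size: float = 2 / 255) -> torch.Tensor:
    """PGD-style perturbation that increases the MSP score of an OOD input.

    The perturbation stays inside an L-infinity ball of radius eps, so the
    input's semantics are essentially unchanged while its detection score rises.
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        score = msp_score(model, x_adv).sum()
        grad = torch.autograd.grad(score, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()           # ascend the score
            x_adv = x.clone() + (x_adv - x).clamp(-eps, eps)  # project back to the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                     # keep a valid image range
    return x_adv.detach()


if __name__ == "__main__":
    # Toy stand-ins (assumptions for this sketch): a small CNN and random "OOD" images.
    model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
    model.eval()
    ood_batch = torch.rand(4, 3, 32, 32)

    clean_scores = msp_score(model, ood_batch)
    adv_scores = msp_score(model, perturb_ood_input(model, ood_batch))
    print("clean OOD scores:    ", clean_scores.tolist())
    print("perturbed OOD scores:", adv_scores.tolist())

With a trained classifier, thresholding the MSP score can separate many natural OOD inputs from in-distribution data, but a bounded perturbation crafted as above can push OOD inputs past the threshold; this is the kind of brittleness the abstract describes.
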
    7 Citations

    CADE: Detecting and Explaining Concept Drift Samples for Security Applications
    Generative Classifiers as a Basis for Trustworthy Computer Vision (1 citation)
    Anomalous Example Detection in Deep Learning: A Survey (3 citations)

    References

    Showing 1-10 of 52 references
    A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks (283 citations; highly influential)
    Analyzing the Robustness of Open-World Machine Learning (12 citations)
    Self-Supervised Learning for Generalizable Out-of-Distribution Detection (12 citations)
    The Limitations of Deep Learning in Adversarial Settings (1,914 citations)
    Likelihood Ratios for Out-of-Distribution Detection (105 citations)
    Towards Deep Learning Models Resistant to Adversarial Attacks (2,931 citations; highly influential)
    Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples (280 citations)
    Deep Anomaly Detection with Outlier Exposure (246 citations; highly influential)
    Adversarial Robustness of Flow-Based Generative Models (5 citations)
    Explaining and Harnessing Adversarial Examples (6,521 citations; highly influential)