Corpus ID: 211031984

Understanding the Decision Boundary of Deep Neural Networks: An Empirical Study

@article{Mickisch2020UnderstandingTD,
  title={Understanding the Decision Boundary of Deep Neural Networks: An Empirical Study},
  author={David Mickisch and Felix Assion and Florens Gre{\ss}ner and Wiebke G{\"u}nther and Mariele Katherine Faria Motta},
  journal={ArXiv},
  year={2020},
  volume={abs/2002.01810}
}
  • Published in ArXiv 2020
  • Computer Science, Mathematics
  • Despite achieving remarkable performance on many image classification tasks, state-of-the-art machine learning (ML) classifiers remain vulnerable to small input perturbations. In particular, the existence of adversarial examples raises concerns about the deployment of ML models in safety- and security-critical environments, such as autonomous driving and disease detection. Over the last few years, numerous defense methods have been published with the goal of improving adversarial as well as…
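The abstract refers to small input perturbations that cross a classifier's decision boundary. The following is a minimal, hedged sketch (not code from the paper) of one common way to estimate an upper bound on an input's distance to the decision boundary: step along the input-loss gradient and binary-search for the smallest step that flips the prediction. The toy model, data, and search range are placeholders chosen for illustration.

```python
# Illustrative sketch only: estimate an upper bound on the L2 distance from an
# input to the decision boundary by searching along the loss-gradient direction
# for the smallest perturbation that changes the predicted class.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy classifier standing in for a trained deep network (hypothetical).
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
model.eval()

x = torch.randn(1, 20)                      # a single input sample
label = model(x).argmax(dim=1)              # the model's current prediction

# Gradient of the loss w.r.t. the input gives a direction that tends to
# change the prediction (as in FGSM/DeepFool-style perturbations).
x_adv = x.clone().requires_grad_(True)
loss = nn.functional.cross_entropy(model(x_adv), label)
loss.backward()
direction = x_adv.grad / x_adv.grad.norm()  # unit-length perturbation direction

# Binary search for the smallest step along `direction` that crosses the
# boundary; the resulting step size is an upper bound on the true (minimal)
# distance, since the search is restricted to a single direction.
lo, hi = 0.0, 10.0                          # assumes hi is large enough to flip
with torch.no_grad():
    for _ in range(30):
        mid = 0.5 * (lo + hi)
        flipped = model(x + mid * direction).argmax(dim=1) != label
        hi, lo = (mid, lo) if flipped else (hi, mid)

print(f"estimated distance to decision boundary (upper bound): {hi:.4f}")
```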

