Learning Interpretable Features via Adversarially Robust Optimization

@article{Khakzar2019LearningIF,
  title={Learning Interpretable Features via Adversarially Robust Optimization},
  author={Ashkan Khakzar and Shadi Albarqouni and Nassir Navab},
  journal={ArXiv},
  year={2019},
  volume={abs/1905.03767}
}
  • Ashkan Khakzar, Shadi Albarqouni, Nassir Navab
  • Published 2019
  • Computer Science
  • ArXiv
  • Neural networks have proven remarkably successful for classification and diagnosis in medical applications. However, the ambiguity of the decision-making process and the interpretability of the learned features are a matter of concern. In this work, we propose a method for improving the feature interpretability of neural network classifiers. Initially, we propose a baseline convolutional neural network with state-of-the-art performance in terms of accuracy and weakly supervised localization…
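As a rough sketch of the adversarially robust optimization the title refers to, the snippet below shows generic single-step (FGSM-style) adversarial training in PyTorch, following the attack introduced in "Explaining and Harnessing Adversarial Examples". The function names, the epsilon value, and the loop structure are illustrative assumptions, not the authors' implementation; the paper's exact objective and hyperparameters are in the full text.

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, images, labels, epsilon):
        # Single signed-gradient step on the input (FGSM); names are illustrative.
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        grad = torch.autograd.grad(loss, images)[0]
        # Clamp to keep the perturbed input in a valid [0, 1] image range.
        return (images + epsilon * grad.sign()).clamp(0.0, 1.0).detach()

    def adversarial_training_step(model, optimizer, images, labels, epsilon=0.01):
        # One robust-optimization step: fit the classifier on perturbed inputs.
        model.train()
        adv_images = fgsm_perturb(model, images, labels, epsilon)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(adv_images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()

A classifier trained with such a step can then be inspected with the saliency and weakly supervised localization tools the abstract mentions; whether the learned features become more interpretable is the question the paper evaluates.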
