Thomas Tanay

Deep neural networks have been shown to suffer from a surprising weakness: their classification outputs can be changed by small, non-random perturbations of their inputs. This adversarial example phenomenon has been explained as originating from deep networks being "too linear" (Goodfellow et al., 2014). We show here that the linear explanation of …
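The "too linear" argument referenced above can be sketched numerically. The following toy example (not from the paper; the dimensionality, budget, and random weights are illustrative assumptions) shows why, for a linear score s(x) = w·x, the sign perturbation of Goodfellow et al. (2014) produces a large score shift from componentwise-tiny changes:

```python
import numpy as np

# Toy sketch of the linear explanation of adversarial examples:
# for a linear score s(x) = w . x, the perturbation eta = eps * sign(w)
# shifts the score by exactly eps * ||w||_1, which grows with input
# dimensionality even though every component of eta has magnitude eps.
rng = np.random.default_rng(0)
d = 1000                       # input dimensionality (hypothetical)
eps = 0.01                     # per-component perturbation budget
w = rng.normal(size=d)         # weights of a toy linear classifier
x = rng.normal(size=d)         # a random input

eta = eps * np.sign(w)         # adversarial perturbation
shift = w @ (x + eta) - w @ x  # change in the classifier's score

# The shift equals eps * sum(|w_i|): small eps, large cumulative effect.
assert np.isclose(shift, eps * np.abs(w).sum())
print(f"score shift: {shift:.3f} from per-component budget {eps}")
```

With d = 1000 standard-normal weights, the expected shift is roughly eps · d · √(2/π) ≈ 8, i.e. hundreds of times the per-component budget; this accumulation-with-dimension effect is the intuition the paper goes on to question.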