Thomas Tanay

Deep neural networks have been shown to suffer from a surprising weakness: their classification outputs can be changed by small, non-random perturbations of their inputs. This adversarial example phenomenon has been explained as originating from deep networks being “too linear” (Goodfellow et al., 2014). We show here that the linear explanation of …
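The linearity explanation referenced above can be illustrated with a minimal numpy sketch (all names and values here are illustrative, not taken from the paper): for a linear score w·x, the fast-gradient-sign-style perturbation ε·sign(w) shifts the score by ε·‖w‖₁, which grows with input dimensionality even when each per-feature change is tiny.

```python
import numpy as np

# Linearity argument behind adversarial examples (Goodfellow et al., 2014):
# for a linear score f(x) = w.x, the perturbation eps*sign(w) increases the
# score by exactly eps * ||w||_1, which scales with the input dimension.

rng = np.random.default_rng(0)
dim = 1000
w = rng.standard_normal(dim)      # weights of a linear classifier
x = rng.standard_normal(dim)      # a "clean" input
eps = 0.05                        # tiny per-feature perturbation budget

x_adv = x + eps * np.sign(w)      # fast-gradient-sign-style perturbation

shift = w @ x_adv - w @ x         # change in the classifier's score
print(f"score shift: {shift:.2f} from per-feature changes of at most {eps}")
```

Even with ε = 0.05 per feature, the score moves by roughly ε·‖w‖₁ ≈ 40 here, large enough to flip a decision, which is the core of the "too linear" argument.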
We evaluate transfer representation-learning for anomaly detection using convolutional neural networks by: (i) transfer learning from pretrained networks, and (ii) transfer learning from an auxiliary task by defining sub-categories of the normal class. We empirically show that both approaches offer viable representations for the task of anomaly detection, …
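The transfer-representation pipeline described above can be sketched as: extract features with a frozen network, memorise the normal class in feature space, and score test points by distance to their nearest normal feature. In this sketch a random projection with a ReLU stands in for a pretrained network's feature extractor; it is an assumption for illustration, not the paper's model.

```python
import numpy as np

# Sketch of anomaly scoring on transferred representations. A random
# projection + ReLU stands in for a frozen pretrained feature extractor;
# anomalies are scored by distance to the nearest normal-class feature.

rng = np.random.default_rng(1)

def features(x, proj):
    """Stand-in feature extractor mimicking a frozen network layer."""
    return np.maximum(x @ proj, 0.0)

proj = rng.standard_normal((64, 16))
normal_train = rng.normal(0.0, 1.0, size=(200, 64))  # "normal" class samples
normal_test  = rng.normal(0.0, 1.0, size=(20, 64))
anomalies    = rng.normal(3.0, 1.0, size=(20, 64))   # shifted distribution

bank = features(normal_train, proj)                  # memorised normal features

def score(x):
    """Anomaly score: distance to the nearest normal feature vector."""
    return np.linalg.norm(bank - features(x, proj), axis=1).min()

normal_scores = [score(x) for x in normal_test]
anomaly_scores = [score(x) for x in anomalies]
print(np.mean(normal_scores), np.mean(anomaly_scores))
```

With any reasonably discriminative representation, out-of-distribution samples land far from the memorised normal features, so their nearest-neighbour distances separate from those of held-out normal samples.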