Adversarial Reprogramming of Neural Networks

Gamaleldin F. Elsayed, Ian J. Goodfellow, Jascha Sohl-Dickstein
Deep neural networks are susceptible to adversarial attacks. In computer vision, well-crafted perturbations to images can cause neural networks to make mistakes such as identifying a panda as a gibbon or confusing a cat with a computer. Previous adversarial examples have been designed to degrade the performance of models or to cause machine learning models to…
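The "well-crafted perturbations" described above are typically found by following the gradient of the loss with respect to the input. A minimal gradient-sign sketch in that spirit is below; the tiny logistic model, weights, and data are illustrative assumptions, not the paper's setup or models.

```python
import numpy as np

# Hypothetical stand-in "model": a fixed logistic classifier p = sigmoid(w @ x).
rng = np.random.default_rng(0)
w = rng.normal(size=16)   # fixed model weights (assumed, for illustration)
x = rng.normal(size=16)   # clean input
y = 1.0                   # true label

def loss(x):
    """Binary cross-entropy of the logistic model on input x."""
    p = 1.0 / (1.0 + np.exp(-w @ x))
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

# Gradient of the loss w.r.t. the *input* (not the weights):
# for this logistic model, dL/dx = (p - y) * w.
p = 1.0 / (1.0 + np.exp(-w @ x))
grad_x = (p - y) * w

# Nudge the input a small step in the worst-case direction (sign of the
# gradient), the core move behind many adversarial perturbations.
eps = 0.1
x_adv = x + eps * np.sign(grad_x)

print(loss(x), loss(x_adv))  # the perturbed input incurs a higher loss
```

The perturbation is small per coordinate (bounded by `eps`) yet moves the input in the direction that most increases the loss, which is why such inputs can flip a classifier's prediction while looking nearly unchanged.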