Adversarial Reprogramming of Neural Networks

@article{Elsayed2018AdversarialRO,
  title={Adversarial Reprogramming of Neural Networks},
  author={Gamaleldin F. Elsayed and Ian J. Goodfellow and Jascha Sohl-Dickstein},
  journal={CoRR},
  year={2018},
  volume={abs/1806.11146}
}
Deep neural networks are susceptible to adversarial attacks. In computer vision, well-crafted perturbations to images can cause neural networks to make mistakes such as identifying a panda as a gibbon or confusing a cat with a computer. Previous adversarial examples have been designed to degrade performance of models or cause machine learning models to…
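The "well-crafted perturbations" the abstract refers to are typically found by following the gradient of the model's loss with respect to its input. The following is a minimal sketch of that idea in the style of the classic fast gradient sign method (a different, earlier attack than this paper's reprogramming technique), using a toy logistic classifier so it is self-contained; all weights and names are illustrative, not from the paper.

```python
import numpy as np

# Toy setup: a fixed "trained" linear classifier and one clean input.
rng = np.random.default_rng(0)
w = rng.normal(size=8)   # frozen model weights (illustrative)
x = rng.normal(size=8)   # clean input
y = 1.0                  # true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x):
    # Binary cross-entropy of the classifier's prediction on x.
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Gradient of the loss w.r.t. the INPUT (not the weights): (p - y) * w.
grad_x = (sigmoid(w @ x) - y) * w

# Fast-gradient-sign-style perturbation: a small step in the
# direction that increases the loss, i.e. degrades the model.
eps = 0.5
x_adv = x + eps * np.sign(grad_x)

assert loss(x_adv) > loss(x)  # the perturbed input is mishandled more
```

Attacks in this family degrade or redirect a model's predictions; adversarial reprogramming, the subject of this paper, instead learns a single perturbation that repurposes the model for an attacker-chosen task.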