Improving Back-Propagation by Adding an Adversarial Gradient

Arild Nokland
The back-propagation algorithm is widely used for learning in artificial neural networks. A central challenge in machine learning is to create models that generalize to data samples not seen during training. Recently, a common flaw in several machine learning algorithms was discovered: small perturbations added to the input data lead to consistent misclassification. Samples that easily mislead the model are called adversarial examples. Training a "maxout" network on adversarial…
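The misclassification mechanism the abstract describes can be illustrated with a minimal sketch, assuming a fixed linear classifier and a fast-gradient-sign-style perturbation (the weights, sample, and epsilon below are hypothetical, not taken from the paper):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical trained weights for a two-class linear model.
w = [1.0, -1.5]
b = 0.1

x = [0.6, 0.2]   # a sample the model assigns to class 1
y = 1.0          # true label

def predict(xs):
    # Model confidence for class 1.
    return sigmoid(sum(wi * xi for wi, xi in zip(w, xs)) + b)

p = predict(x)

# For cross-entropy loss, the gradient w.r.t. the input is (p - y) * w.
grad_x = [(p - y) * wi for wi in w]

# Step a small epsilon in the sign of the gradient, i.e. the direction
# that increases the loss; this is the kind of small input perturbation
# the abstract refers to.
eps = 0.3
sign = lambda v: (v > 0) - (v < 0)
x_adv = [xi + eps * sign(gi) for xi, gi in zip(x, grad_x)]

p_adv = predict(x_adv)
print(p > 0.5, p_adv > 0.5)  # the perturbation flips the prediction
```

Despite the perturbation being small in each coordinate, the model's confidence crosses the decision boundary, which is what makes such samples useful as adversarial training signal.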
This paper has 29 citations.


