Corpus ID: 53221172

Adversarial Gain

@article{Henderson2018AdversarialG,
  title={Adversarial Gain},
  author={Peter Henderson and Koustuv Sinha and N. Ke and Joelle Pineau},
  journal={ArXiv},
  year={2018},
  volume={abs/1811.01302}
}
  • Peter Henderson, Koustuv Sinha, N. Ke, Joelle Pineau
  • Published 2018
  • Computer Science, Mathematics
  • ArXiv
Adversarial examples can be defined as inputs to a model that induce a mistake, where the model's output differs from that of an oracle, perhaps in surprising or malicious ways. Adversarial attacks were originally studied primarily in the context of classification and computer vision tasks. While several attacks have been proposed in natural language processing (NLP) settings, they often vary in how they define the parameters of an attack and what a successful attack would look like. The…
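The abstract's working definition, a model output that differs from an oracle's, is easy to state concretely in the classification setting. The sketch below is a minimal illustration under assumptions of my own, not the paper's method: it applies a fast-gradient-sign-style step (in the spirit of FGSM from "Explaining and Harnessing Adversarial Examples") to a toy linear classifier in NumPy. All names here (w, b, predict, fgsm_linear) are hypothetical.

```python
import numpy as np

# Hypothetical linear binary classifier standing in for "the model":
# predict class 1 iff w.x + b > 0.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(np.dot(w, x) + b > 0)

def fgsm_linear(x, y, eps=0.5):
    """One FGSM-style step for a linear model under logistic loss.

    With margin z = w.x + b and signed label t = 2y - 1, the input
    gradient of the loss is -t * sigmoid(-t*z) * w, whose elementwise
    sign is -t * sign(w). Stepping eps in that direction shrinks the
    margin toward the wrong class.
    """
    t = 2 * y - 1
    return x + eps * (-t) * np.sign(w)

x = np.array([2.0, 0.5, 1.0])   # clean input
y = predict(x)                  # use the model's own clean-input label as a stand-in oracle
x_adv = fgsm_linear(x, y)
print(y, predict(x_adv))        # prints "1 0": the perturbation flips the prediction
```

Here eps plays the role of the attack budget: the perturbation is bounded in the L-infinity norm, which is exactly the kind of attack parameter the abstract notes is defined inconsistently across NLP settings.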
