Adversarial Gain
@article{Henderson2018AdversarialG,
  title   = {Adversarial Gain},
  author  = {Peter Henderson and Koustuv Sinha and N. Ke and Joelle Pineau},
  journal = {ArXiv},
  year    = {2018},
  volume  = {abs/1811.01302}
}
Adversarial examples can be defined as inputs to a model that induce a mistake, i.e., the model's output differs from that of an oracle, perhaps in surprising or malicious ways. Adversarial attacks were originally studied primarily in the context of classification and computer vision tasks. While several attacks have been proposed in natural language processing (NLP) settings, they often vary in how they define the parameters of an attack and what a successful attack would look like. The…
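To make the definition above concrete, here is a minimal sketch (not the paper's method; the function names and the toy decision rules are hypothetical) that flags a perturbed input as adversarial when the model's output diverges from an oracle's on that input, even though the two agreed on the clean input:

```python
def model(x):
    # Hypothetical learned classifier: a linear boundary that is
    # slightly offset from the true decision rule below.
    return int(x[0] - x[1] > -0.05)

def oracle(x):
    # Hypothetical ground-truth labeler (the "oracle"): the true rule.
    return int(x[0] > x[1])

def is_adversarial(model, oracle, x, x_adv):
    # x_adv is adversarial if the model agreed with the oracle on the
    # clean input x but its output differs from the oracle's on x_adv.
    return model(x) == oracle(x) and model(x_adv) != oracle(x_adv)

x = (0.6, 0.4)        # clean input: model and oracle agree (both label 1)
x_adv = (0.49, 0.51)  # small perturbation crossing only the model's boundary
print(is_adversarial(model, oracle, x, x_adv))  # True
```

Here the oracle stands in for ground truth; in NLP settings the analogous check is harder to pin down because, as the abstract notes, proposed attacks disagree on what counts as a valid perturbation and what a successful attack looks like.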