Adversarial Attack and Defense of Structured Prediction Models

@article{Han2020AdversarialAA,
  title={Adversarial Attack and Defense of Structured Prediction Models},
  author={Wenjuan Han and Liwen Zhang and Yong Jiang and K. Tu},
  journal={arXiv preprint arXiv:2010.01610},
  year={2020}
}
Building effective adversarial attackers and elaborating countermeasures against adversarial attacks in natural language processing (NLP) have attracted much research in recent years. However, most existing approaches focus on classification problems. In this paper, we investigate attacks and defenses for structured prediction tasks in NLP. Besides the difficulty of perturbing discrete words and the sentence-fluency problem faced by attackers in any NLP task, there is a specific…
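The abstract's point about perturbing discrete words can be illustrated with a minimal sketch of a greedy synonym-substitution attack, which restricts edits to a synonym list to keep the sentence fluent. All names here (`SYNONYMS`, `toy_parser_score`) are hypothetical stand-ins for illustration only, not the authors' actual method or model.

```python
# Hypothetical synonym table; a real attack would use embeddings or a
# thesaurus constrained by a language model to preserve fluency.
SYNONYMS = {
    "quick": ["fast", "rapid"],
    "happy": ["glad", "joyful"],
}

def toy_parser_score(tokens):
    # Stand-in for a structured predictor's confidence in its own
    # output; here just a deterministic toy function of token lengths.
    return sum(len(t) for t in tokens) % 7

def greedy_substitution_attack(tokens):
    """Greedily replace words with synonyms to lower the model's score,
    accepting a substitution only when it strictly decreases the score."""
    best = list(tokens)
    best_score = toy_parser_score(best)
    for i, tok in enumerate(tokens):
        for alt in SYNONYMS.get(tok, []):
            cand = best[:i] + [alt] + best[i + 1:]
            s = toy_parser_score(cand)
            if s < best_score:
                best, best_score = cand, s
    return best, best_score

adv, score = greedy_substitution_attack(["the", "quick", "happy", "dog"])
```

Because the search space is discrete, the attacker cannot follow gradients directly in input space; greedy local search over a restricted substitution set is one common workaround.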
2 Citations
