Adversarial Attack and Defense of Structured Prediction Models
@article{Han2020AdversarialAA,
  title={Adversarial Attack and Defense of Structured Prediction Models},
  author={Wenjuan Han and Liwen Zhang and Yong Jiang and K. Tu},
  journal={ArXiv},
  year={2020},
  volume={abs/2010.01610}
}
Building effective adversarial attackers and elaborating countermeasures against adversarial attacks in natural language processing (NLP) have attracted a lot of research in recent years. However, most existing approaches focus on classification problems. In this paper, we investigate attacks and defenses for structured prediction tasks in NLP. Besides the difficulty of perturbing discrete words and the sentence-fluency problem faced by attackers in any NLP task, there is a specific…
2 Citations

TextFirewall: Omni-Defending Against Adversarial Texts in Sentiment Classification. IEEE Access, 2021.
References (showing 1-10 of 39)
Generating Fluent Adversarial Examples for Natural Languages. ACL, 2019. Cited 48 times; highly influential.
Improving Neural Language Modeling via Adversarial Training. ICML, 2019. Cited 35 times.
On Evaluation of Adversarial Perturbations for Sequence-to-Sequence Models. NAACL-HLT, 2019. Cited 55 times.
Robust Multilingual Part-of-Speech Tagging via Adversarial Training. NAACL-HLT, 2018. Cited 52 times.
Crafting Adversarial Input Sequences for Recurrent Neural Networks. MILCOM 2016 - IEEE Military Communications Conference, 2016. Cited 227 times.