Corpus ID: 229923152

Generating Adversarial Examples in Chinese Texts Using Sentence-Pieces

@article{Li2020GeneratingAE,
  title={Generating Adversarial Examples in Chinese Texts Using Sentence-Pieces},
  author={Linyang Li and Yunfan Shao and Demin Song and Xipeng Qiu and Xuanjing Huang},
  journal={ArXiv},
  year={2020},
  volume={abs/2012.14769}
}
Adversarial attacks on text are mostly substitution-based methods that replace words or characters in the original texts to achieve successful attacks. Recent methods use pretrained language models as the substitute generator. In Chinese, however, such methods are not directly applicable, since Chinese words must first be segmented. In this paper, we propose a pretrained language model that uses sentence-pieces as the substitute generator to craft adversarial examples in Chinese. The substitutions in the…
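
To illustrate the candidate-proposal step the abstract describes, here is a minimal sketch that masks one position and asks a pretrained masked language model for substitutes. It uses the Hugging Face `transformers` API with `bert-base-chinese` as a stand-in; the paper's own sentence-piece-based generator is not reproduced here, so treat everything below as illustrative rather than the authors' implementation.

```python
# Generic masked-LM substitution sketch; bert-base-chinese is a stand-in
# for the paper's sentence-piece-based generator.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModelForMaskedLM.from_pretrained("bert-base-chinese")
model.eval()

def substitute_candidates(text: str, position: int, top_k: int = 5):
    """Mask the token at `position` and return the model's top-k fillers."""
    tokens = tokenizer.tokenize(text)
    tokens[position] = tokenizer.mask_token
    inputs = tokenizer(tokenizer.convert_tokens_to_string(tokens),
                       return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Locate the [MASK] slot and read off the highest-scoring vocab entries.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id) \
        .nonzero(as_tuple=True)[0].item()
    top_ids = logits[0, mask_pos].topk(top_k).indices.tolist()
    return tokenizer.convert_ids_to_tokens(top_ids)

print(substitute_candidates("这部电影非常好看", position=4))
```

An attack would then score each returned candidate against the victim classifier and keep the one that most reduces the true-label probability.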

Citations

Pre-Trained Models: Past, Present and Future
  • Xu Han, Zhengyan Zhang, +19 authors Jun Zhu
  • Computer Science
  • ArXiv
  • 2021
TLDR
A deep look is taken into the history of pre-training, especially its close relationship with transfer learning and self-supervised learning, revealing the crucial position of PTMs in the AI development spectrum.

References

Showing 1-10 of 29 references
Generating Natural Language Adversarial Examples through Probability Weighted Word Saliency
TLDR
A new word replacement order determined by both the word saliency and the classification probability is introduced, and a greedy algorithm called probability weighted word saliency (PWWS) is proposed for text adversarial attacks.
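
The TLDR above describes the scoring rule concretely enough to sketch: word saliency (the probability drop when a word is blanked out) is softmax-weighted and multiplied by the probability drop of the best substitute. In this minimal sketch, `classify_prob` is a toy stand-in for the victim model and `candidates` is a hypothetical synonym table; neither comes from the paper.

```python
# Sketch of PWWS-style replacement ordering with a toy classifier.
import math

POSITIVE = {"good", "great", "excellent"}

def classify_prob(words):
    # Toy stand-in for a black-box classifier's P(true label | text).
    hits = sum(w in POSITIVE for w in words)
    return hits / (hits + 1)

def pwws_order(words, candidates):
    """Rank positions by softmax(word saliency) * best-substitute gain."""
    base = classify_prob(words)
    # Word saliency: probability drop when word i is replaced by <unk>.
    saliency = [base - classify_prob(words[:i] + ["<unk>"] + words[i + 1:])
                for i in range(len(words))]
    z = sum(math.exp(s) for s in saliency)
    scored = []
    for i in range(len(words)):
        # Probability drop achieved by the best substitute at position i.
        best_gain = max((base - classify_prob(words[:i] + [c] + words[i + 1:])
                         for c in candidates.get(i, [])), default=0.0)
        scored.append((math.exp(saliency[i]) / z * best_gain, i))
    return sorted(scored, reverse=True)

words = "the movie is great and the cast is excellent".split()
print(pwws_order(words, {3: ["fine"], 8: ["adequate"]}))
```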
Is BERT Really Robust? Natural Language Attack on Text Classification and Entailment
TLDR
TextFooler, a general attack framework, is proposed to generate natural adversarial texts; it outperforms state-of-the-art attacks in success rate and perturbation rate while remaining utility-preserving, efficient, and effective.
HotFlip: White-Box Adversarial Examples for Text Classification
TLDR
An efficient method is proposed to generate white-box adversarial examples that trick a character-level neural classifier, based on an atomic flip operation that swaps one token for another according to the gradients of the one-hot input vectors.
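
HotFlip's atomic flip can be sketched as a first-order Taylor estimate: the loss change from flipping position i to vocabulary entry v is approximately the gradient at that position dotted with the difference of embedding rows. The toy PyTorch model below exists only to make the scoring runnable; nothing about it comes from the paper.

```python
# First-order HotFlip-style flip scoring on a toy classifier.
import torch

torch.manual_seed(0)
vocab_size, dim, num_classes = 100, 16, 2
emb = torch.nn.Embedding(vocab_size, dim)
head = torch.nn.Linear(dim, num_classes)

tokens = torch.tensor([5, 17, 42, 8])
label = torch.tensor([1])

# Forward from a leaf copy of the embeddings so d(loss)/d(embedding)
# lands in vecs.grad.
vecs = emb(tokens).detach().requires_grad_(True)
logits = head(vecs.mean(dim=0, keepdim=True))
loss = torch.nn.functional.cross_entropy(logits, label)
loss.backward()

# Estimated loss increase from flipping position i to vocabulary entry v:
# grad_i . (E[v] - E[tokens[i]]).
with torch.no_grad():
    scores = vecs.grad @ emb.weight.T                # (seq_len, vocab)
    scores -= scores.gather(1, tokens.unsqueeze(1))  # subtract current token
    pos = scores.max(dim=1).values.argmax().item()
    new_token = scores[pos].argmax().item()
print(f"best flip: position {pos} -> token {new_token}")
```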
Generating Natural Language Adversarial Examples
TLDR
A black-box population-based optimization algorithm is used to generate semantically and syntactically similar adversarial examples that fool well-trained sentiment analysis and textual entailment models with success rates of 97% and 70%, respectively.
Adversarial Examples for Evaluating Reading Comprehension Systems
TLDR
This work proposes an adversarial evaluation scheme for the Stanford Question Answering Dataset that tests whether systems can answer questions about paragraphs that contain adversarially inserted sentences without changing the correct answer or misleading humans.
Pre-Training with Whole Word Masking for Chinese BERT
TLDR
This technical report adapts whole word masking to Chinese text, masking whole words instead of individual Chinese characters, which yields a more challenging Masked Language Model (MLM) pre-training task.
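
The distinction is easy to show directly. In the sketch below the sentence is pre-segmented by hand; Chinese-BERT-wwm relies on an actual word segmenter, so the hard-coded word list is an assumption for illustration.

```python
# Character-level masking vs. whole word masking for Chinese MLM.
import random

random.seed(0)
# Pre-segmented sentence; real pipelines use a word segmenter.
words = ["使用", "语言", "模型", "来", "预测"]
chars = [c for w in words for c in w]

# Character-level masking: each character is an independent candidate.
char_masked = [c if random.random() > 0.15 else "[MASK]" for c in chars]

# Whole word masking: pick words, then mask all of their characters.
wwm_masked = []
for w in words:
    if random.random() <= 0.15:
        wwm_masked.extend(["[MASK]"] * len(w))  # mask the full word
    else:
        wwm_masked.extend(list(w))

print("".join(char_masked))
print("".join(wwm_masked))
```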
SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing
TLDR
SentencePiece, a language-independent subword tokenizer and detokenizer designed for neural text processing, is presented, and it is found that comparable accuracy can be achieved by training subword models directly from raw sentences.
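
For reference, this is how the actual `sentencepiece` Python package is used to train a unigram model on raw text and encode without any pre-segmentation; the file names are placeholders.

```python
# Train and use a SentencePiece model on raw, unsegmented text.
# pip install sentencepiece; corpus.txt is a placeholder file name.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="corpus.txt",         # raw text, one sentence per line
    model_prefix="zh_unigram",  # writes zh_unigram.model / zh_unigram.vocab
    vocab_size=8000,
    model_type="unigram",
)

sp = spm.SentencePieceProcessor(model_file="zh_unigram.model")
pieces = sp.encode("这部电影非常好看", out_type=str)
print(pieces)             # subword pieces, no word segmenter needed
print(sp.decode(pieces))  # lossless round-trip back to the text
```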
Explaining and Harnessing Adversarial Examples
TLDR
It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets.
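
This is the paper that introduced the fast gradient sign method (FGSM), which follows directly from the linearity argument: perturb the input along the sign of the loss gradient. A minimal PyTorch sketch on a toy linear classifier:

```python
# FGSM: x_adv = x + epsilon * sign(grad_x loss(f(x), y))
import torch

model = torch.nn.Linear(4, 2)  # toy classifier
x = torch.randn(1, 4, requires_grad=True)
y = torch.tensor([1])

loss = torch.nn.functional.cross_entropy(model(x), y)
loss.backward()

epsilon = 0.1
x_adv = x + epsilon * x.grad.sign()  # one-step adversarial perturbation
print(x_adv)
```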
Subword Regularization: Improving Neural Network Translation Models with Multiple Subword Candidates
TLDR
A simple regularization method, subword regularization, is presented, which trains the model with multiple subword segmentations probabilistically sampled during training, and a new subword segmentation algorithm based on a unigram language model is proposed.
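
Sampled segmentation is exposed in the `sentencepiece` package through the `enable_sampling` flag; this snippet reuses the placeholder model trained in the earlier SentencePiece sketch.

```python
# Sample multiple subword segmentations of the same sentence from a
# trained unigram model (subword regularization).
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="zh_unigram.model")
for _ in range(3):
    # alpha smooths the sampling distribution; nbest_size=-1 samples
    # from the full lattice of candidate segmentations.
    print(sp.encode("这部电影非常好看", out_type=str,
                    enable_sampling=True, alpha=0.1, nbest_size=-1))
```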
Crafting adversarial input sequences for recurrent neural networks
TLDR
This paper investigates adversarial input sequences for recurrent neural networks processing sequential data and shows that the classes of algorithms introduced previously to craft adversarial samples misclassified by feed-forward neural networks can be adapted to recurrent neural networks.