On Robustness of Neural Semantic Parsers

@article{Huang2021OnRO,
  title={On Robustness of Neural Semantic Parsers},
  author={Shuo Huang and Zhuang Li and Lizhen Qu and Lei Pan},
  journal={ArXiv},
  year={2021},
  volume={abs/2102.01563}
}
Semantic parsing maps natural language (NL) utterances into logical forms (LFs), which underpins many advanced NLP problems. Semantic parsers gain performance boosts from deep neural networks, but also inherit their vulnerability to adversarial examples. In this paper, we provide the first empirical study on the robustness of semantic parsers in the presence of adversarial attacks. Formally, adversaries of semantic parsing are considered to be the perturbed utterance-LF pairs, whose utterances have…
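To make the attack setting concrete, here is a minimal sketch of what a perturbed utterance-LF pair could look like; the GeoQuery-style utterance, logical form, and word-substitution perturbation are illustrative assumptions, not examples taken from the paper.

```python
# Illustrative sketch (not from the paper): a GeoQuery-style utterance-LF pair
# and a simple word-substitution perturbation that leaves the gold LF unchanged.
original = {
    "utterance": "what is the largest city in texas",
    "lf": "answer(largest(city(loc_2(stateid('texas')))))",
}

# Hypothetical adversarial variant: near-synonym substitutions change the
# surface form, but a robust parser should still predict the same LF.
perturbed = {
    "utterance": "what is the biggest town in texas",
    "lf": original["lf"],  # gold logical form is preserved
}

def is_robust(parse, pair, adv_pair):
    """A parser is robust on this pair if both utterances map to the gold LF."""
    return (parse(pair["utterance"]) == pair["lf"]
            and parse(adv_pair["utterance"]) == adv_pair["lf"])
```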
1 Citation


Adversarial Training Methods for Cross-Domain Question Answering
  • 2021
Although recent work has demonstrated considerable advantages on many NLP tasks from pre-training on a large corpus of text and then fine-tuning on a specific task, this approach can hardly scale as it…

References

Showing 1-10 of 40 references
Coarse-to-Fine Decoding for Neural Semantic Parsing
This work proposes a structure-aware neural architecture which decomposes the semantic parsing process into two stages, and shows that this approach consistently improves performance, achieving competitive results despite the use of relatively simple decoders.
Question Generation from SQL Queries Improves Neural Semantic Parsing
This paper conducts a study on WikiSQL, the largest hand-annotated semantic parsing dataset to date, and demonstrates that question generation is an effective method that enables learning a state-of-the-art neural-network-based semantic parser with thirty percent of the supervised training data.
Data Recombination for Neural Semantic Parsing
Data recombination improves the accuracy of the RNN model on three semantic parsing datasets, leading to new state-of-the-art performance on the standard GeoQuery dataset for models with comparable supervision.
Language to Logical Form with Neural Attention
This paper presents a general method based on an attention-enhanced encoder-decoder model that encodes input utterances into vector representations and generates their logical forms by conditioning the output sequences or trees on the encoding vectors.
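As a rough illustration of this kind of architecture, the following is a minimal sketch (an assumption, not the authors' code) of an attention-based encoder-decoder that scores logical-form tokens conditioned on the encoded utterance; all class and parameter names are hypothetical.

```python
# Minimal sketch of an attention-enhanced encoder-decoder for semantic parsing.
import torch
import torch.nn as nn

class Seq2SeqParser(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, dim=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True)
        self.decoder = nn.LSTM(dim, dim, batch_first=True)
        self.attn = nn.Linear(dim, dim)
        self.out = nn.Linear(2 * dim, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        enc, state = self.encoder(self.src_emb(src_ids))         # encode the utterance
        dec, _ = self.decoder(self.tgt_emb(tgt_ids), state)      # condition on the encoding
        scores = torch.bmm(self.attn(dec), enc.transpose(1, 2))  # attention scores over source
        context = torch.bmm(torch.softmax(scores, dim=-1), enc)  # weighted encoder states
        return self.out(torch.cat([dec, context], dim=-1))       # logits over LF tokens
```

In practice such a model would be trained with token-level cross-entropy against the gold LF sequence and decoded greedily or with beam search.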
Adversarial Examples for Evaluating Reading Comprehension Systems
This work proposes an adversarial evaluation scheme for the Stanford Question Answering Dataset that tests whether systems can answer questions about paragraphs containing adversarially inserted sentences, which do not change the correct answer or mislead humans.
Neural Syntactic Preordering for Controlled Paraphrase Generation
This work uses syntactic transformations to softly “reorder” the source sentence and guide the neural paraphrasing model, which retains the quality of the baseline approaches while giving a substantial increase in the diversity of the generated paraphrases.
A Survey on Semantic Parsing
This survey examines the various components of a semantic parsing system and discusses prominent work ranging from the initial rule-based methods to the current neural approaches to program synthesis.
Sequence-to-Action: End-to-End Semantic Graph Generation for Semantic Parsing
An RNN model is proposed which can effectively map sentences to action sequences for semantic graph generation; it achieves state-of-the-art performance on the Overnight dataset and competitive performance on the Geo and ATIS datasets.
Corpora Generation for Grammatical Error Correction
It is demonstrated that neural GEC models trained using either type of corpus give similar performance, and a systematic analysis is presented that compares the two approaches to data generation and highlights the effectiveness of ensembling.
HotFlip: White-Box Adversarial Examples for NLP
This work proposes an efficient method for generating white-box adversarial examples that fool character-level and word-level neural models; it relies on an atomic flip operation, which swaps one token for another based on the gradients of the one-hot input vectors.
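For intuition, below is a hedged sketch of the gradient-based flip scoring idea described here; the function name, tensor shapes, and first-order approximation details are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch of a HotFlip-style flip score: estimate the loss change of
# swapping the token at each position for each vocabulary item via a
# first-order Taylor expansion around the one-hot inputs.
import torch

def best_flip(one_hot, loss_fn, embedding_matrix):
    """one_hot: (seq_len, vocab_size) float tensor with requires_grad=True."""
    loss = loss_fn(one_hot @ embedding_matrix)    # embed tokens, run model, get scalar loss
    grad = torch.autograd.grad(loss, one_hot)[0]  # d(loss) / d(one-hot inputs)
    current = (grad * one_hot).sum(dim=-1, keepdim=True)    # gradient at the current token
    gain = grad - current                                   # first-order loss change per swap
    gain = gain.masked_fill(one_hot.bool(), float("-inf"))  # rule out keeping the same token
    flat = gain.argmax().item()
    return flat // gain.size(-1), flat % gain.size(-1)      # (position, new token id)
```

The first-order estimate only ranks candidate swaps cheaply; an actual attack would re-check the top-ranked flips with forward passes before committing to one.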