Corpus ID: 254247336

Unveiling the Black Box of PLMs with Semantic Anchors: Towards Interpretable Neural Semantic Parsing

@inproceedings{Nie2022UnveilingTB,
  title={Unveiling the Black Box of PLMs with Semantic Anchors: Towards Interpretable Neural Semantic Parsing},
  author={Lun Yiu Nie and Jiu Sun and Yanlin Wang and Lun Du and Lei Hou and Juanzi Li and Shi Han and Dongmei Zhang and Jidong Zhai},
  year={2022}
}
The recent prevalence of pretrained language models (PLMs) has dramatically shifted the paradigm of semantic parsing, where the mapping from natural language utterances to structured logical forms is now formulated as a Seq2Seq task. Despite the promising performance, previous PLM-based approaches often suffer from hallucination problems because they neglect the structural information contained in the sentence, which essentially constitutes the key semantics of the logical forms…
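
As a rough illustration of this Seq2Seq formulation (not the paper's own implementation), the sketch below feeds an utterance to an off-the-shelf encoder-decoder PLM and decodes a logical form token by token; the model name, the toy utterance, and the untrained setup are all illustrative assumptions.

```python
# Minimal sketch: semantic parsing cast as Seq2Seq generation with a PLM.
# "t5-small" and the toy utterance are illustrative assumptions; a real
# parser would be fine-tuned on (utterance, logical form) pairs first.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

utterance = "which city has the largest population"
inputs = tokenizer(utterance, return_tensors="pt")

# The decoder emits the logical form token by token; without explicit
# structural supervision it may hallucinate entities or malformed clauses.
output_ids = model.generate(**inputs, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```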

References

Showing 1-10 of 62 references

Coarse-to-Fine Decoding for Neural Semantic Parsing

This work proposes a structure-aware neural architecture which decomposes the semantic parsing process into two stages, and shows that this approach consistently improves performance, achieving competitive results despite the use of relatively simple decoders.
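
A toy sketch of the two-stage idea described above, assuming hypothetical sketch and detail decoders: a structure-only sketch is predicted first, then the full logical form is filled in conditioned on it. The slot names and stand-in decoders are invented for illustration, not taken from the paper.

```python
# Toy sketch of coarse-to-fine decoding; the slot names and the stand-in
# decoders below are hypothetical, not from the paper.
def coarse_to_fine_parse(utterance, sketch_decoder, detail_decoder):
    sketch = sketch_decoder(utterance)        # stage 1: structure-only sketch
    return detail_decoder(utterance, sketch)  # stage 2: fill in the details

toy_sketch = lambda u: "SELECT @col WHERE @col @op @val"
toy_detail = lambda u, s: (s.replace("@col", "city", 1)
                            .replace("@col @op @val", "population = MAX(population)"))
print(coarse_to_fine_parse("which city has the largest population",
                           toy_sketch, toy_detail))
```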

An Investigation of Language Model Interpretability via Sentence Editing

A sentence-editing dataset is re-purposed as a new testbed for interpretability, where faithful, high-quality human rationales can be automatically extracted and compared with extracted model rationales, enabling a systematic investigation of PLMs' interpretability.

Cross-domain Semantic Parsing via Paraphrasing

By converting logical forms into canonical utterances in natural language, semantic parsing is reduced to paraphrasing, and an attentive sequence-to-sequence paraphrase model is developed that is general and flexible to adapt to different domains.

On The Ingredients of an Effective Zero-shot Semantic Parser

This paper analyzes zero-shot parsers through the lenses of the language and logical gaps (Herzig and Berant, 2019), which quantify the discrepancy of language and programmatic patterns between the canonical examples and real-world user-issued ones.

RetroNLU: Retrieval Augmented Task-Oriented Semantic Parsing

The technique, RetroNLU, extends a sequence-to-sequence model architecture with a retrieval component that retrieves similar existing samples and presents them as additional context to the model, outperforming the baseline method by 1.5% absolute macro-F1.
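
A minimal sketch of the retrieval-augmented setup under stated assumptions: a lexical similarity measure stands in for the real retriever, and the exemplar prompt format and intent/slot labels are invented for illustration.

```python
# Sketch of retrieval augmentation: fetch similar (utterance, parse) pairs
# and prepend them as extra context before decoding. SequenceMatcher is a
# stand-in for a learned retriever; the prompt format is an assumption.
from difflib import SequenceMatcher

def retrieve_similar(query, memory, k=2):
    return sorted(memory,
                  key=lambda ex: SequenceMatcher(None, query, ex[0]).ratio(),
                  reverse=True)[:k]

def build_augmented_input(query, memory):
    exemplars = retrieve_similar(query, memory)
    context = " ".join(f"utterance: {u} parse: {p}" for u, p in exemplars)
    return f"{context} utterance: {query} parse:"

memory = [
    ("set an alarm for 7 am", "[IN:CREATE_ALARM [SL:DATE_TIME 7 am]]"),
    ("play some jazz", "[IN:PLAY_MUSIC [SL:MUSIC_GENRE jazz]]"),
]
print(build_augmented_input("set an alarm for 6 pm", memory))
```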

Translate & Fill: Improving Zero-Shot Multilingual Semantic Parsing with Synthetic Data

Experimental results on three multilingual semantic parsing datasets show that data augmentation with TaF reaches accuracies competitive with similar systems which rely on traditional alignment techniques.

Core Semantic First: A Top-down Approach for AMR Parsing

A novel scheme for parsing a piece of text into its Abstract Meaning Representation (AMR), Graph Spanning based Parsing (GSP), constructs the graph top-down starting from the core semantics and achieves state-of-the-art performance among approaches that adopt no heuristic graph re-categorization.

Learning Contextual Representations for Semantic Parsing with Generation-Augmented Pre-Training

This work presents a model pretraining framework, Generation-Augmented Pre-training (GAP), that jointly learns representations of natural language utterances and table schemas by leveraging generation models to generate high-quality pre-training data.

TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data

TaBERT is a pretrained LM that jointly learns representations for NL sentences and (semi-)structured tables that achieves new best results on the challenging weakly-supervised semantic parsing benchmark WikiTableQuestions, while performing competitively on the text-to-SQL dataset Spider.
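
As a toy illustration of jointly encoding an utterance with (semi-)structured table content, the snippet below linearizes a table alongside the question; the serialization format and marker tokens are assumptions and differ from TaBERT's actual content-snapshot encoding.

```python
# Toy linearization of an utterance plus a table into one input string.
# The [HEADER]/[ROWS] markers are invented; TaBERT's real encoding differs.
def linearize(utterance, table):
    header = " | ".join(table["columns"])
    rows = " ; ".join(" | ".join(str(c) for c in row) for row in table["rows"])
    return f"{utterance} [HEADER] {header} [ROWS] {rows}"

table = {"columns": ["city", "population"],
         "rows": [["Tokyo", 37400068], ["Delhi", 28514000]]}
print(linearize("which city has the largest population", table))
```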

GraphQ IR: Unifying Semantic Parsing of Graph Query Language with Intermediate Representation

With a natural-language-like representation that bridges the semantic gap and a formally defined syntax that maintains the graph structure, a neural semantic parser can more effectively convert user queries into GraphQ IR, which can later be automatically compiled into different downstream graph query languages.
...