Don’t Parse, Generate! A Sequence to Sequence Architecture for Task-Oriented Semantic Parsing

@article{Rongali2020DontPG,
  title={Don’t Parse, Generate! A Sequence to Sequence Architecture for Task-Oriented Semantic Parsing},
  author={Subendhu Rongali and Luca Soldaini and Emilio Monti and Wael Hamza},
  journal={Proceedings of The Web Conference 2020},
  year={2020}
}
Virtual assistants such as Amazon Alexa, Apple Siri, and Google Assistant often rely on a semantic parsing component to understand which action(s) to execute for an utterance spoken by their users. Traditionally, rule-based or statistical slot-filling systems have been used to parse “simple” queries; that is, queries that contain a single action and can be decomposed into a set of non-overlapping entities. More recently, shift-reduce parsers have been proposed to process more complex utterances…
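
As a rough illustration of what generating a parse (rather than assembling it with a grammar) looks like, below is a minimal Python sketch, not the authors' code: a TOP-style target frame in which utterance words are referenced by pointer tokens, so a sequence-to-sequence decoder can copy them from the source instead of producing them from a large vocabulary. The utterance, intent label, slot label, and @ptr_i token format are illustrative assumptions.

# TOP-style target frame with source words replaced by pointer tokens (illustrative).
utterance = "what s the weather in boston".split()
target = ["[IN:GET_WEATHER", "[SL:LOCATION", "@ptr_5", "]", "]"]

def detokenize(target_tokens, source_tokens):
    """Resolve @ptr_i tokens back to the source words they point to."""
    out = []
    for tok in target_tokens:
        if tok.startswith("@ptr_"):
            out.append(source_tokens[int(tok.split("_")[1])])
        else:
            out.append(tok)
    return " ".join(out)

print(detokenize(target, utterance))
# -> [IN:GET_WEATHER [SL:LOCATION boston ] ]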


Don’t Parse, Insert: Multilingual Semantic Parsing with Insertion Based Decoding

A non-autoregressive parser based on the Insertion Transformer that speeds up decoding by 3x while outperforming the autoregressive model, and significantly improves cross-lingual transfer in the low-resource setting by 37% over the autoregressive baseline.

Shift-Reduce Task-Oriented Semantic Parsing with Stack-Transformers

The in-order transition system is shown to be the most accurate alternative, outperforming all existing shift-reduce parsers that are not enhanced with re-ranking and surpassing all top-performing sequence-to-sequence baselines.

Lexicon-injected Semantic Parsing for Task-Oriented Dialog

A novel span-splitting representation for span-based parsers that outperforms existing methods, and a novel lexicon-injected semantic parser that collects the slot labels of the tree representation as a lexicon and injects lexical features into the parser’s span representation.

Generating Synthetic Data for Task-Oriented Semantic Parsing with Hierarchical Representations

This work explores the possibility of generating synthetic data for neural semantic parsing using a pretrained denoising sequence-to-sequence model (i.e., BART) and uses an auxiliary parser (AP) to filter the generated utterances.
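
To make the generate-then-filter loop concrete, here is a minimal Python sketch (not the paper's code); generate_utterances stands in for sampling from a fine-tuned BART model and auxiliary_parse stands in for the auxiliary parser (AP), both hypothetical callables supplied by the caller.

def generate_synthetic(frames, generate_utterances, auxiliary_parse, n_candidates=10):
    """Keep only synthetic utterances whose parse matches the frame they were generated from."""
    kept = []
    for frame in frames:
        # Hypothetical: sample candidate utterances for this target frame.
        for utterance in generate_utterances(frame, num_samples=n_candidates):
            # Hypothetical: filter with an independently trained parser.
            if auxiliary_parse(utterance) == frame:
                kept.append((utterance, frame))
    return kept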

Conversational Semantic Parsing

A semantic representation for task-oriented conversational systems is proposed that can capture concepts such as co-reference and context carryover, enabling comprehensive understanding of queries in a session.

Retrieve-and-Fill for Scenario-based Task-Oriented Semantic Parsing

This work introduces scenario-based semantic parsing: a variant of the original task which first requires disambiguating an utterance’s “scenario” before generating its frame, complete with ontology and utterance tokens, and achieves strong results in high-resource, low-resource, and multilingual settings.

Semantic Parsing in Task-Oriented Dialog with Recursive Insertion-based Encoder

A Recursive INsertion-based Encoder (RINE), a novel approach for semantic parsing in task-oriented dialog that achieves state-of-the-art exact match accuracy on low- and high-resource versions of the conversational semantic parsing benchmark TOP, outperforming strong sequence-to-sequence models and transition-based parsers.

RetroNLU: Retrieval Augmented Task-Oriented Semantic Parsing

The technique, RetroNLU, extends a sequence-to-sequence model architecture with a retrieval component that retrieves existing similar samples and presents them as additional context to the model, outperforming the baseline method by 1.5% absolute macro-F1.
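
A minimal sketch of the retrieval step, assuming a simple token-overlap retriever (the retriever, separator tokens, and index contents are illustrative assumptions, not the paper's implementation):

def jaccard(a, b):
    a, b = set(a.split()), set(b.split())
    return len(a & b) / max(len(a | b), 1)

def build_input(query, index):
    """index: list of (utterance, parse) pairs from the training set."""
    exemplar_utt, exemplar_parse = max(index, key=lambda ex: jaccard(query, ex[0]))
    # Present the retrieved sample as additional context, then the query itself.
    return f"{exemplar_utt} | {exemplar_parse} [SEP] {query}"

index = [("play some jazz", "[IN:PLAY_MUSIC [SL:GENRE jazz ] ]"),
         ("call my mom", "[IN:CREATE_CALL [SL:CONTACT mom ] ]")]
print(build_input("play some rock music", index))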

Training Naturalized Semantic Parsers with Very Little Data

This paper introduces an automated methodology that delivers significant additional improvements by utilizing modest amounts of unannotated data, which is typically easy to obtain, and reports new SOTA few-shot performance on the Overnight dataset, particularly in very low-resource settings.

X2Parser: Cross-Lingual and Cross-Domain Framework for Task-Oriented Compositional Semantic Parsing

This paper presents X2Parser, a transferable cross-lingual and cross-domain parser for TCSP with a fertility-based slot predictor that first learns to detect the number of labels for each token and then predicts the slot types, and shows that the model can reduce latency by up to 66% compared to the generation-based model.
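
A minimal PyTorch sketch of the fertility idea, with illustrative dimensions and heads (not the X2Parser implementation): each token first gets a predicted label count, and its representation is then copied that many times and classified into slot types.

import torch
import torch.nn as nn

class FertilitySlotPredictor(nn.Module):
    def __init__(self, hidden=256, max_fertility=4, num_slot_types=32):
        super().__init__()
        self.fertility_head = nn.Linear(hidden, max_fertility + 1)  # 0..max labels per token
        self.slot_head = nn.Linear(hidden, num_slot_types)

    def forward(self, token_reprs):  # (seq_len, hidden) encoder output
        fertilities = self.fertility_head(token_reprs).argmax(-1)     # label count per token
        expanded = token_reprs.repeat_interleave(fertilities, dim=0)  # one row per predicted label
        slot_types = self.slot_head(expanded).argmax(-1)              # slot type per row
        return fertilities, slot_types

reprs = torch.randn(6, 256)  # stand-in for a 6-token encoder output
fert, slots = FertilitySlotPredictor()(reprs)
print(fert.tolist(), slots.tolist())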
...

References

Showing 1-10 of 36 references

Exploring Neural Methods for Parsing Discourse Representation Structures

This work presents a sequence-to-sequence neural semantic parser that is able to produce Discourse Representation Structures (DRSs) for English sentences with high accuracy, outperforming traditional DRS parsers.

Improving Semantic Parsing for Task Oriented Dialog

Three different improvements to the semantic parsing model are presented: contextualized embeddings, ensembling, and pairwise re-ranking based on a language model, which gives a new state-of-the-art result on the Task Oriented Parsing (TOP) dataset.

Language Models are Unsupervised Multitask Learners

It is demonstrated that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText, suggesting a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.

Data Recombination for Neural Semantic Parsing

Data recombination improves the accuracy of the RNN model on three semantic parsing datasets, leading to new state-of-the-art performance on the standard GeoQuery dataset for models with comparable supervision.

Learning to Map Sentences to Logical Form: Structured Classification with Probabilistic Categorial Grammars

A learning algorithm is described that takes as input a training set of sentences labeled with expressions in the lambda calculus and induces a grammar for the problem, along with a log-linear model that represents a distribution over syntactic and semantic analyses conditioned on the input sentence.

Multi-Domain Joint Semantic Frame Parsing Using Bi-Directional RNN-LSTM

Experimental results show the advantage of a holistic multi-domain, multi-task modeling approach that estimates complete semantic frames for all user utterances addressed to a conversational system over alternative methods based on single-domain/single-task deep learning.

Distributed Representations of Words and Phrases and their Compositionality

This paper presents a simple method for finding phrases in text, and shows that learning good vector representations for millions of phrases is possible and describes a simple alternative to the hierarchical softmax called negative sampling.
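
A small numpy sketch of the skip-gram negative-sampling objective described here: the true (center, context) pair is scored against k randomly drawn negative context words instead of normalizing over the whole vocabulary; the vectors and dimensions are illustrative.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neg_sampling_loss(center_vec, context_vec, negative_vecs):
    positive = np.log(sigmoid(center_vec @ context_vec))              # pull the true pair together
    negative = np.sum(np.log(sigmoid(-negative_vecs @ center_vec)))   # push negatives apart
    return -(positive + negative)

rng = np.random.default_rng(0)
d, k = 50, 5  # embedding size and number of negative samples
print(neg_sampling_loss(rng.normal(size=d), rng.normal(size=d), rng.normal(size=(k, d))))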

Pointer Networks

A new neural architecture, called Ptr-Nets, that learns the conditional probability of an output sequence whose elements are discrete tokens corresponding to positions in an input sequence, using a recently proposed mechanism of neural attention; it not only improves over sequence-to-sequence with input attention but also generalizes to variable-size output dictionaries.
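
A compact numpy sketch of the pointing mechanism (shapes and weight matrices are illustrative): additive attention scores over the encoder states are normalized and used directly as the output distribution, so the "vocabulary" at each decoding step is the set of input positions.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def pointer_step(decoder_state, encoder_states, W1, W2, v):
    # u_i = v^T tanh(W1 e_i + W2 d); softmax(u) points at an input position.
    scores = np.tanh(encoder_states @ W1 + decoder_state @ W2) @ v
    return softmax(scores)

rng = np.random.default_rng(0)
hidden, seq_len = 16, 7
dist = pointer_step(rng.normal(size=hidden), rng.normal(size=(seq_len, hidden)),
                    rng.normal(size=(hidden, hidden)), rng.normal(size=(hidden, hidden)),
                    rng.normal(size=hidden))
print(dist.argmax(), dist.round(3))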

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

A new language representation model, BERT, designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers, which can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.
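
A minimal fine-tuning sketch of the "one additional output layer" idea, assuming the Hugging Face transformers library as tooling (an assumption about tooling, not part of the paper): a classification head is placed on top of the pretrained bidirectional encoder and the whole model is trained end to end.

import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

inputs = tokenizer("set an alarm for 7 am", return_tensors="pt")
labels = torch.tensor([1])  # illustrative intent label

outputs = model(**inputs, labels=labels)
outputs.loss.backward()  # gradients flow through the new head and the encoder alike
print(outputs.logits)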

Get To The Point: Summarization with Pointer-Generator Networks

A novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways: a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information while retaining the ability to produce novel words through the generator, and a coverage mechanism that keeps track of what has been summarized to discourage repetition.
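
A tiny numpy sketch of the pointer-generator mixture (vocabulary size, attention weights, and source ids are illustrative): the final word distribution interpolates, weighted by p_gen, between the decoder's vocabulary distribution and the attention-based copy distribution projected onto the vocabulary.

import numpy as np

def final_distribution(p_gen, vocab_dist, attention, source_ids, vocab_size):
    copy_dist = np.zeros(vocab_size)
    np.add.at(copy_dist, source_ids, attention)  # sum attention mass per source word id
    return p_gen * vocab_dist + (1.0 - p_gen) * copy_dist

vocab_size = 10
vocab_dist = np.full(vocab_size, 1.0 / vocab_size)   # decoder's generation distribution
attention = np.array([0.7, 0.2, 0.1])                # attention over a 3-token source
source_ids = np.array([4, 4, 7])                     # source tokens mapped to vocab ids
print(final_distribution(0.6, vocab_dist, attention, source_ids, vocab_size))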