Don’t Parse, Generate! A Sequence to Sequence Architecture for Task-Oriented Semantic Parsing

@article{Rongali2020DontPG,
  title={Don’t Parse, Generate! A Sequence to Sequence Architecture for Task-Oriented Semantic Parsing},
  author={Subendhu Rongali and Luca Soldaini and Emilio Monti and Wael Hamza},
  journal={Proceedings of The Web Conference 2020},
  year={2020}
}
Virtual assistants such as Amazon Alexa, Apple Siri, and Google Assistant often rely on a semantic parsing component to understand which action(s) to execute for an utterance spoken by their users. Traditionally, rule-based or statistical slot-filling systems have been used to parse “simple” queries; that is, queries that contain a single action and can be decomposed into a set of non-overlapping entities. More recently, shift-reduce parsers have been proposed to process more complex utterances…
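
The paper's core idea is to cast parsing as generation: a pretrained encoder reads the utterance, and a transformer decoder with a pointer-generator head emits either ontology symbols (intent and slot labels, brackets) from a small fixed vocabulary or pointer tokens that copy source words by position. Below is a minimal Python sketch of that target serialization, assuming the paper's @ptr_i pointer notation; the utterance and label names are illustrative, and this is not the authors' code.

# Serialize a linearized TOP-style parse into a seq2seq target in which
# every source word is replaced by a pointer to its utterance position.
def serialize_target(utterance_tokens, parse_tokens):
    positions = {tok: i for i, tok in enumerate(utterance_tokens)}  # first occurrence
    target = []
    for tok in parse_tokens:
        if tok in positions:
            target.append(f"@ptr_{positions[tok]}")  # copy by position
        else:
            target.append(tok)  # ontology symbol: intent/slot label or bracket
    return target

utterance = "directions to the eagles game".split()
parse = "[IN:GET_DIRECTIONS directions to [SL:DESTINATION the eagles game ] ]".split()
print(serialize_target(utterance, parse))
# ['[IN:GET_DIRECTIONS', '@ptr_0', '@ptr_1', '[SL:DESTINATION',
#  '@ptr_2', '@ptr_3', '@ptr_4', ']', ']']

A real implementation would resolve repeated source words by alignment rather than first occurrence; the sketch only shows the shape of the target sequence.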
Don’t Parse, Insert: Multilingual Semantic Parsing with Insertion Based Decoding
TLDR
A non-autoregressive parser based on the insertion transformer is proposed to overcome these issues; it speeds up decoding by 3x while outperforming the autoregressive model, and significantly improves cross-lingual transfer in the low-resource setting by 37% compared to the autoregressive baseline.
Generating Synthetic Data for Task-Oriented Semantic Parsing with Hierarchical Representations
TLDR
This work explores the possibility of generating synthetic data for neural semantic parsing using a pretrained denoising sequence-to-sequence model (i.e., BART) and uses an auxiliary parser (AP) to filter the generated utterances.
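
One plausible rendering of that generate-then-filter loop, sketched in Python (the function names and the exact-match filtering criterion are assumptions for illustration, not the paper's code):

# Generate utterances from frames with a seq2seq model, then keep only
# those the auxiliary parser maps back to the original frame.
def synthesize(frames, generate, parse):
    kept = []
    for frame in frames:
        utterance = generate(frame)      # e.g. a fine-tuned BART generator
        if parse(utterance) == frame:    # auxiliary parser (AP) as filter
            kept.append((utterance, frame))
    return kept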
Conversational Semantic Parsing
TLDR
A semantic representation for task-oriented conversational systems is proposed that can represent concepts such as co-reference and context carryover, enabling comprehensive understanding of queries in a session.
Semantic Parsing in Task-Oriented Dialog with Recursive Insertion-based Encoder
TLDR
Recursive INsertion-based Encoder (RINE), a novel approach for semantic parsing in task-oriented dialog, achieves state-of-the-art exact match accuracy on low- and high-resource versions of the conversational semantic parsing benchmark TOP (Gupta et al. 2018).
RETRONLU: Retrieval Augmented Task-Oriented Semantic Parsing
TLDR
This paper extends a sequence-to-sequence model architecture with a retrieval component for multi-domain task-oriented semantic parsing in conversational assistants, analyzes the quality of the nearest-neighbor retrieval component and the model's sensitivity, and breaks down performance for semantic parses of different utterance complexity.
X2Parser: Cross-Lingual and Cross-Domain Framework for Task-Oriented Compositional Semantic Parsing
TLDR
This paper presents X2Parser, a transferable Cross-lingual and Cross-domain Parser for TCSP that uses a fertility-based slot predictor which first learns to detect the number of labels for each token and then predicts the slot types, and shows that the model can reduce latency by up to 66% compared to the generation-based model.
Low-Resource Domain Adaptation for Compositional Task-Oriented Semantic Parsing
TLDR
This work identifies two fundamental factors for low-resource domain adaptation: better representation learning and better training techniques, and proposes a novel method that outperforms a supervised neural model at a 10-fold data reduction.
Continual Learning for Neural Semantic Parsing
TLDR
It is demonstrated that a simple approach with a specific fine-tuning procedure for the old model can reduce the computational costs by ~90% compared to the training of a new model.
Span Pointer Networks for Non-Autoregressive Task-Oriented Semantic Parsing
TLDR
This work proposes span pointer networks, non-autoregressive parsers which shift the decoding task from text generation to span prediction; that is, when imputing utterance spans into frame slots, the model produces endpoints as opposed to text, reducing the variability of gold frames and improving length prediction and, ultimately, exact match.
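
The shift from text generation to span prediction can be pictured with a toy frame, sketched below in Python (the label names and the (start, end) endpoint convention are illustrative, not the paper's code):

utterance = ["set", "an", "alarm", "for", "8", "am"]

# Generation-style frame: the decoder must reproduce the slot text itself.
gen_frame = {"intent": "IN:CREATE_ALARM", "SL:DATE_TIME": ["8", "am"]}

# Span-style frame: the decoder only predicts two endpoints per slot,
# which removes surface-form variability from the gold targets.
span_frame = {"intent": "IN:CREATE_ALARM", "SL:DATE_TIME": (4, 6)}  # tokens [4:6)

start, end = span_frame["SL:DATE_TIME"]
assert utterance[start:end] == gen_frame["SL:DATE_TIME"]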
Translate & Fill: Improving Zero-Shot Multilingual Semantic Parsing with Synthetic Data
TLDR
Experimental results on three multilingual semantic parsing datasets show that data augmentation with TaF reaches accuracies competitive with similar systems which rely on traditional alignment techniques.

References

Showing 1-10 of 43 references
Semantic Parsing for Task Oriented Dialog using Hierarchical Representations
TLDR
This work proposes a hierarchical annotation scheme for semantic parsing that allows the representation of compositional queries, and can be efficiently and accurately parsed by standard constituency parsing models.
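
As an illustration of the scheme (adapted from the TOP paper; the exact label names may differ from the released dataset), a compositional query nests an intent inside a slot:

[IN:GET_DIRECTIONS Driving directions to
  [SL:DESTINATION [IN:GET_EVENT the [SL:NAME_EVENT Eagles ] [SL:CATEGORY_EVENT game ] ] ] ]

Because the representation is a constituency tree over the utterance's own words, standard constituency parsers apply directly.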
Exploring Neural Methods for Parsing Discourse Representation Structures
TLDR
This work presents a sequence-to-sequence neural semantic parser that is able to produce Discourse Representation Structures (DRSs) for English sentences with high accuracy, outperforming traditional DRS parsers.
Improving Semantic Parsing for Task Oriented Dialog
TLDR
Three different improvements to the semantic parsing model are presented: contextualized embeddings, ensembling, and pairwise re-ranking based on a language model, which gives a new state-of-the-art result on the Task Oriented Parsing (TOP) dataset.
Language Models are Unsupervised Multitask Learners
TLDR
It is demonstrated that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText, suggesting a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.
Data Recombination for Neural Semantic Parsing
TLDR
Data recombination improves the accuracy of the RNN model on three semantic parsing datasets, leading to new state-of-the-art performance on the standard GeoQuery dataset for models with comparable supervision.
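
A toy Python sketch of the recombination idea (GeoQuery-flavored strings; the data format and entity alignment are assumed for illustration, not the authors' grammar-induction code):

import itertools

# Each example carries an utterance, its logical form, and an aligned entity.
examples = [
    ("what states border texas", "answer(state(borders(stateid('texas'))))", "texas"),
    ("what is the capital of ohio", "answer(capital(loc(stateid('ohio'))))", "ohio"),
]

def recombine(examples):
    # Swap entity fillers across templates to create synthetic pairs.
    out = []
    for (utt, lf, ent), (_, _, other) in itertools.permutations(examples, 2):
        out.append((utt.replace(ent, other), lf.replace(ent, other)))
    return out

for utt, lf in recombine(examples):
    print(utt, "->", lf)
# what states border ohio -> answer(state(borders(stateid('ohio'))))
# what is the capital of texas -> answer(capital(loc(stateid('texas'))))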
Learning to Map Sentences to Logical Form: Structured Classification with Probabilistic Categorial Grammars
TLDR
A learning algorithm is described that takes as input a training set of sentences labeled with expressions in the lambda calculus and induces a grammar for the problem, along with a log-linear model that represents a distribution over syntactic and semantic analyses conditioned on the input sentence.
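
For instance, in the classic example from this line of work, the sentence "what states border texas" is paired with the lambda-calculus expression λx. state(x) ∧ borders(x, texas); the induced grammar proposes derivations of such pairs, and the log-linear model scores the candidate analyses.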
Multi-Domain Joint Semantic Frame Parsing Using Bi-Directional RNN-LSTM
TLDR
Experimental results show that a holistic multi-domain, multi-task modeling approach for estimating complete semantic frames for all user utterances addressed to a conversational system outperforms alternative methods based on single-domain/task deep learning.
Distributed Representations of Words and Phrases and their Compositionality
TLDR
This paper presents a simple method for finding phrases in text, shows that learning good vector representations for millions of phrases is possible, and describes a simple alternative to the hierarchical softmax called negative sampling.
Pointer Networks
TLDR
A new neural architecture, the Pointer Network (Ptr-Net), learns the conditional probability of an output sequence whose elements are discrete tokens corresponding to positions in an input sequence, using a recently proposed mechanism of neural attention; it not only improves over sequence-to-sequence with input attention, but also generalizes to variable-size output dictionaries.
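
A minimal numpy sketch of the pointer mechanism (illustrative shapes and additive scoring, not the paper's implementation): the output distribution at each decoding step is a softmax over attention scores against the encoder states, so the "vocabulary" is the set of input positions and grows with the input.

import numpy as np

def pointer_step(decoder_state, encoder_states, W1, W2, v):
    # u_j = v . tanh(W1 e_j + W2 d): additive attention score per input position
    u = np.tanh(encoder_states @ W1.T + decoder_state @ W2.T) @ v
    p = np.exp(u - u.max())
    return p / p.sum()  # softmax over input positions, not a fixed vocabulary

rng = np.random.default_rng(0)
n, d_enc, d_dec, h = 5, 8, 8, 16          # 5 input positions, hidden size 16
enc = rng.normal(size=(n, d_enc))         # encoder states e_1..e_n
dec = rng.normal(size=(d_dec,))           # current decoder state d
W1 = rng.normal(size=(h, d_enc))
W2 = rng.normal(size=(h, d_dec))
v = rng.normal(size=(h,))
print(pointer_step(dec, enc, W1, W2, v))  # a distribution over the 5 positions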
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
TLDR
A new language representation model, BERT, designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers, which can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.