Corpus ID: 52128879

Zero-shot Transfer Learning for Semantic Parsing

@article{Dadashkarimi2018ZeroshotTL,
  title={Zero-shot Transfer Learning for Semantic Parsing},
  author={Javid Dadashkarimi and Alexander R. Fabbri and S. Tatikonda and Dragomir R. Radev},
  journal={ArXiv},
  year={2018},
  volume={abs/1808.09889}
}
While neural networks have shown impressive performance on large datasets, applying these models to tasks where little data is available remains a challenging problem. In this paper we propose to use feature transfer in a zero-shot experimental setting on the task of semantic parsing. We first introduce a new method for learning the shared space between multiple domains based on the prediction of the domain label for each example. Our experiments support the superiority of this method in a…
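
The abstract only sketches the shared-space method, so the following is a minimal illustration rather than the authors' implementation: an encoder is trained with an auxiliary head that predicts the domain label of each example, encouraging examples from all domains into one shared representation. All module names and sizes here are assumptions.

```python
# Minimal sketch (not the authors' code): learn a shared space across
# domains via an auxiliary domain-label prediction loss.
import torch
import torch.nn as nn

class SharedSpaceEncoder(nn.Module):
    def __init__(self, vocab_size, hidden_dim, num_domains):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.encoder = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        # Auxiliary head: predict which domain an example came from.
        self.domain_head = nn.Linear(hidden_dim, num_domains)

    def forward(self, token_ids):
        _, (h, _) = self.encoder(self.embed(token_ids))
        shared = h[-1]                        # shared representation
        return shared, self.domain_head(shared)

model = SharedSpaceEncoder(vocab_size=5000, hidden_dim=128, num_domains=7)
tokens = torch.randint(0, 5000, (4, 12))      # toy batch of 4 utterances
domains = torch.randint(0, 7, (4,))           # their domain labels
_, logits = model(tokens)
nn.functional.cross_entropy(logits, domains).backward()
```
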
Citing Papers

Few-Shot Semantic Parsing for New Predicates
TLDR: This work proposes to apply a designated meta-learning method to train the model, regularize attention scores with alignment statistics, and apply a smoothing technique in pretraining; the resulting model consistently outperforms all baselines in both one- and two-shot settings.
Look-up and Adapt: A One-shot Semantic Parser
TLDR: A semantic parser that generalizes to out-of-domain examples by learning a general strategy for parsing an unseen utterance through adapting the logical forms of seen utterances, instead of learning to generate a logical form from scratch.
Conversational Learning
Although machine learning has been highly successful in recent years, this success has been based on algorithms that exhibit only one of the multiple learning paradigms used by humans: learning…

References

Showing 1-10 of 29 references
Matching Networks for One Shot Learning
TLDR: This work employs ideas from metric learning based on deep neural features, and from recent advances that augment neural networks with external memories, to learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types.
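
As a concrete picture of the support-set idea, here is a minimal sketch of matching-network-style prediction: the query's label is an attention-weighted vote over the labelled support set, so no fine-tuning is needed for new classes. The feature extractor is assumed given, and the full model's context embeddings are omitted.

```python
# Toy matching-network classification: softmax attention over cosine
# similarities between a query and a small labelled support set.
import numpy as np

def matching_predict(query, support_feats, support_labels, num_classes):
    q = query / np.linalg.norm(query)
    s = support_feats / np.linalg.norm(support_feats, axis=1, keepdims=True)
    sims = s @ q                                   # cosine similarities
    attn = np.exp(sims) / np.exp(sims).sum()       # softmax attention
    probs = np.zeros(num_classes)
    for a, y in zip(attn, support_labels):         # per-class attention mass
        probs[y] += a
    return probs.argmax()

support = np.random.randn(5, 16)    # 5-example support set (toy features)
labels = [0, 1, 2, 1, 0]
print(matching_predict(np.random.randn(16), support, labels, num_classes=3))
```
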
Zero-Shot Learning—A Comprehensive Evaluation of the Good, the Bad and the Ugly
TLDR: A new zero-shot learning dataset, Animals with Attributes 2 (AWA2), is made publicly available both as image features and as the images themselves; the paper also compares and analyzes a significant number of state-of-the-art methods in depth.
An embarrassingly simple approach to zero-shot learning
TLDR: This paper describes a zero-shot learning approach that can be implemented in just one line of code, yet is able to outperform state-of-the-art approaches on standard datasets.
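
That "one line" is a closed-form, ridge-regression-style solution for a bilinear compatibility matrix between image features and class attribute signatures. A sketch with toy shapes (the dimensions and regularizer values below are assumptions):

```python
import numpy as np

d, m, a, z = 16, 100, 8, 5
X = np.random.randn(d, m)                    # d features x m examples
S = np.random.randn(a, z)                    # a attributes x z seen classes
Y = np.eye(z)[np.random.randint(0, z, m)]    # m x z one-hot labels
gamma, lam = 1.0, 1.0

# The "one line": V = (X X^T + gamma I)^-1 X Y S^T (S S^T + lam I)^-1,
# the closed-form compatibility matrix V (d x a).
V = np.linalg.inv(X @ X.T + gamma * np.eye(d)) @ X @ Y @ S.T \
    @ np.linalg.inv(S @ S.T + lam * np.eye(a))

# Zero-shot prediction: score a new example against unseen-class signatures.
x_new = np.random.randn(d)
S_unseen = np.random.randn(a, 3)             # 3 unseen classes (toy)
print((x_new @ V @ S_unseen).argmax())
```
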
Transfer Learning for Neural Semantic Parsing
TLDR: This paper proposes using sequence-to-sequence models in a multi-task setup for semantic parsing, with a focus on transfer learning, and shows that the multi-task setup aids transfer from an auxiliary task with large labeled data to a target task with smaller labeled data.
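
The multi-task setup lends itself to a compact sketch: a shared encoder with one output head per task, trained on mixed batches so the high-resource auxiliary task regularizes the representation used by the low-resource target task. The names and sizes below are illustrative assumptions, and full attention decoders are reduced to linear heads.

```python
import torch
import torch.nn as nn

class MultiTaskSeq2Seq(nn.Module):
    def __init__(self, vocab, hidden, aux_out, tgt_out):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True)  # shared
        self.head_aux = nn.Linear(hidden, aux_out)   # high-resource task
        self.head_tgt = nn.Linear(hidden, tgt_out)   # low-resource task

    def forward(self, tokens, task):
        enc, _ = self.encoder(self.embed(tokens))
        head = self.head_aux if task == "aux" else self.head_tgt
        return head(enc)                              # per-position logits

model = MultiTaskSeq2Seq(vocab=1000, hidden=64, aux_out=500, tgt_out=200)
# Alternate batches from the two tasks; the encoder is updated by both.
logits = model(torch.randint(0, 1000, (2, 8)), task="tgt")
```
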
Optimization as a Model for Few-Shot Learning
Cross-domain Semantic Parsing via Paraphrasing
TLDR: By converting logical forms into canonical utterances in natural language, semantic parsing is reduced to paraphrasing, and an attentive sequence-to-sequence paraphrase model is developed that is general and flexible enough to adapt to different domains.
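
The reduction can be pictured with a toy scorer: map each candidate logical form to its canonical utterance and keep the candidate whose canonical form best paraphrases the input. Word overlap stands in here for the paper's attentive seq2seq paraphrase model, and the logical forms are made up for illustration.

```python
def paraphrase_score(utterance, canonical):
    u, c = set(utterance.split()), set(canonical.split())
    return len(u & c) / len(u | c)            # Jaccard overlap (toy scorer)

candidates = {                                # made-up logical forms
    "answer(population(city(austin)))": "what is the population of austin",
    "answer(size(city(austin)))": "what is the area of austin",
}
utterance = "population of austin"
best = max(candidates, key=lambda lf: paraphrase_score(utterance, candidates[lf]))
print(best)                                   # the population reading wins
```
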
Neural Semantic Parsing over Multiple Knowledge-bases
TLDR: This paper finds that parsing accuracy can be substantially improved by training a single sequence-to-sequence model over multiple KBs and providing an encoding of the domain at decoding time.
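
One simple way to realize "an encoding of the domain at decoding time" is to concatenate a one-hot domain vector to every decoder input; the paper explores several variants, so this exact wiring is an assumption.

```python
import torch
import torch.nn as nn

num_domains, hidden = 4, 32
dec_cell = nn.LSTMCell(hidden + num_domains, hidden)

def decode_step(prev_embed, domain_id, state):
    # Concatenate a one-hot domain encoding to the decoder input.
    one_hot = torch.zeros(prev_embed.size(0), num_domains)
    one_hot[:, domain_id] = 1.0
    return dec_cell(torch.cat([prev_embed, one_hot], dim=-1), state)

state = (torch.zeros(2, hidden), torch.zeros(2, hidden))
state = decode_step(torch.randn(2, hidden), domain_id=1, state=state)
```
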
Data Recombination for Neural Semantic Parsing
TLDR: Data recombination improves the accuracy of the RNN model on three semantic parsing datasets, leading to new state-of-the-art performance on the standard GeoQuery dataset for models with comparable supervision.
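
A toy version of one recombination strategy the paper generalizes: abstract entities out of aligned (utterance, logical form) pairs, then re-instantiate the template with entities drawn from other examples to synthesize new training pairs. The induced synchronous grammar in the paper is far richer than this sketch.

```python
# Abstract an entity out of a seen pair, then re-instantiate the template.
template_utt = "what is the capital of {ent}"
template_lf = "answer(capital(state({ent})))"
entities = ["ohio", "texas", "utah"]          # entities from other examples

synthetic = [(template_utt.format(ent=e), template_lf.format(ent=e))
             for e in entities]
print(synthetic[0])  # ('what is the capital of ohio', 'answer(capital(state(ohio)))')
```
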
Frustratingly Easy Semi-Supervised Domain Adaptation
TLDR: This work builds on the notion of an augmented feature space and harnesses unlabeled data in the target domain to improve the transfer of information from source to target; it can be applied as a pre-processing step to any supervised learner.
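
The underlying augmented space is easy to state: source examples map to <x, x, 0> and target examples to <x, 0, x>, so any linear learner can allocate shared and domain-specific weights; the semi-supervised extension adds terms for unlabeled target data. A minimal sketch of the base augmentation:

```python
import numpy as np

def augment(x, domain):
    zero = np.zeros_like(x)
    if domain == "source":
        return np.concatenate([x, x, zero])   # shared + source-specific copy
    return np.concatenate([x, zero, x])       # shared + target-specific copy

x = np.array([1.0, 2.0])
print(augment(x, "source"))                   # [1. 2. 1. 2. 0. 0.]
print(augment(x, "target"))                   # [1. 2. 0. 0. 1. 2.]
```
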
Google’s Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation
TLDR: This work proposes a simple solution for using a single Neural Machine Translation (NMT) model to translate between multiple languages with a shared wordpiece vocabulary, introducing an artificial token at the beginning of the input sentence to specify the required target language.
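
The artificial-token trick is small enough to show directly; the token spelling below is an assumption, not necessarily the system's exact format.

```python
def add_target_token(source_sentence, target_lang):
    # Prepend an artificial token naming the desired output language.
    return f"<2{target_lang}> {source_sentence}"

print(add_target_token("How are you?", "es"))  # "<2es> How are you?"
# A single model trained on such tagged pairs can translate between
# language pairs never observed together in training (zero-shot).
```
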