Meta-Learning a Cross-lingual Manifold for Semantic Parsing
@article{Sherborne2022MetaLearningAC,
  title   = {Meta-Learning a Cross-lingual Manifold for Semantic Parsing},
  author  = {Tom Sherborne and Mirella Lapata},
  journal = {Transactions of the Association for Computational Linguistics},
  year    = {2022},
  volume  = {11},
  pages   = {49--67}
}
Abstract
Localizing a semantic parser to support new languages requires effective cross-lingual generalization. Recent work has found success with machine-translation or zero-shot methods, although these approaches can struggle to model how native speakers ask questions. We consider how to effectively leverage minimal annotated examples in new languages for few-shot cross-lingual semantic parsing. We introduce a first-order meta-learning algorithm to train a semantic parser with maximal sample…
2 Citations
QAmeleon: Multilingual QA with Only 5 Examples
- Computer Science · ArXiv
- 2022
This approach uses a PLM to automatically generate multilingual data on which QA models are trained, avoiding costly annotation, and shows that few-shot prompt tuning for data synthesis scales across languages and is a viable alternative to large-scale annotation.
XRICL: Cross-lingual Retrieval-Augmented In-Context Learning for Cross-lingual Text-to-SQL Semantic Parsing
- Computer Science · ArXiv
- 2022
This work introduces the XRICL framework, which learns to retrieve relevant English exemplars for a given query to construct prompts and effectively leverages large pre-trained language models to outperform existing baselines.
References
Showing 1–10 of 64 references
XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation
- Computer Science · EMNLP
- 2020
A recent cross-lingual pre-trained model Unicoder is extended to cover both understanding and generation tasks, which is evaluated on XGLUE as a strong baseline and the base versions of Multilingual BERT, XLM and XLM-R are evaluated for comparison.
Learning to Generalize: Meta-Learning for Domain Generalization
- Computer Science · AAAI
- 2018
A novel meta-learning method for domain generalization that trains models with good generalization ability to novel domains and achieves state-of-the-art results on a recent cross-domain image classification benchmark, as well as demonstrating its potential on two classic reinforcement learning tasks.
Translate & Fill: Improving Zero-Shot Multilingual Semantic Parsing with Synthetic Data
- Computer Science · EMNLP
- 2021
Experimental results on three multilingual semantic parsing datasets show that data augmentation with TaF reaches accuracies competitive with similar systems which rely on traditional alignment techniques.
On First-Order Meta-Learning Algorithms
- Computer Science · ArXiv
- 2018
A family of algorithms for learning a parameter initialization that can be fine-tuned quickly on a new task, using only first-order derivatives for the meta-learning updates, including Reptile, which works by repeatedly sampling a task, training on it, and moving the initialization towards the trained weights on that task.
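The Reptile update described above (sample a task, train on it, move the initialization toward the trained weights) can be sketched in a few lines. This is an illustrative toy on least-squares regression tasks, not the paper's implementation; the function name `reptile` and all hyperparameters here are invented for the example:

```python
import numpy as np

def reptile(init_params, tasks, inner_steps=5, inner_lr=0.01,
            meta_lr=0.1, meta_iters=100, seed=0):
    """Toy Reptile: repeatedly sample a task, run SGD on it,
    then move the initialization toward the adapted weights."""
    rng = np.random.default_rng(seed)
    params = init_params.astype(float).copy()
    for _ in range(meta_iters):
        X, y = tasks[rng.integers(len(tasks))]   # sample a task
        w = params.copy()
        for _ in range(inner_steps):             # train on that task
            grad = X.T @ (X @ w - y) / len(y)    # mean-squared-error gradient
            w -= inner_lr * grad
        params += meta_lr * (w - params)         # Reptile meta-update
    return params
```

Because the meta-update uses only the difference between adapted and initial weights, no second-order derivatives are ever computed, which is what makes the algorithm first-order.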
End-to-End Slot Alignment and Recognition for Cross-Lingual NLU
- Computer Science, Linguistics · EMNLP
- 2020
This work proposes a novel end-to-end model that learns to align and predict slots in a multilingual NLU system and uses the corpus to explore various cross-lingual transfer methods focusing on the zero-shot setting and leveraging MT for language expansion.
The ATIS Spoken Language Systems Pilot Corpus
- Linguistics, Computer Science · HLT
- 1990
This pilot marks the first full-scale attempt to collect a corpus to measure progress in Spoken Language Systems that include both a speech and natural language component and provides guidelines for future efforts.
Zero-Shot Cross-lingual Semantic Parsing
- Computer Science · ACL
- 2022
This work proposes a multi-task encoder-decoder model to transfer parsing knowledge to additional languages using only English-logical form paired data and in-domain natural language corpora in each new language.
Beyond Reptile: Meta-Learned Dot-Product Maximization between Gradients for Improved Single-Task Regularization
- Computer Science · EMNLP
- 2021
This paper proposes to use a finite-differences first-order algorithm to calculate this gradient of the dot product of gradients, allowing explicit control over the weight of this component relative to standard gradients as a regularization technique, leading to better-aligned gradients between different batches.
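As a rough illustration of the finite-differences idea (this is my own toy sketch on quadratic losses, not the paper's code): the gradient of an alignment term g1(w)·g2(w) between two batch gradients requires Hessian-vector products, and central differences of the per-batch gradients approximate them without forming any Hessian:

```python
import numpy as np

def hvp_fd(grad_fn, w, v, eps=1e-5):
    """Hessian-vector product H(w) @ v via central differences of the gradient."""
    return (grad_fn(w + eps * v) - grad_fn(w - eps * v)) / (2 * eps)

# Toy per-batch losses L_i(w) = 0.5 * w @ A_i @ w, so grad_i(w) = A_i @ w.
A1 = np.array([[2.0, 0.3], [0.3, 1.0]])
A2 = np.array([[1.5, -0.2], [-0.2, 0.8]])
g1 = lambda w: A1 @ w
g2 = lambda w: A2 @ w

w = np.array([0.7, -1.2])
# The gradient of the alignment term g1(w)·g2(w) is H1 @ g2(w) + H2 @ g1(w);
# each Hessian-vector product costs only two extra gradient evaluations.
align_grad = hvp_fd(g1, w, g2(w)) + hvp_fd(g2, w, g1(w))
```

In practice this term would be scaled by a coefficient and added to the standard gradient, which is the "explicit control on the weight of this component" the summary refers to.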
Frustratingly Simple but Surprisingly Strong: Using Language-Independent Features for Zero-shot Cross-lingual Semantic Parsing
- Computer Science, Linguistics · EMNLP
- 2021
Extensive experiments show that despite its simplicity, adding Universal Dependency (UD) relations and Universal POS tags (UPOS) as model-agnostic features achieves surprisingly strong improvement on all parsers.
PICARD: Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models
- Computer Science · EMNLP
- 2021
On the challenging Spider and CoSQL text-to-SQL translation tasks, it is shown that PICARD transforms fine-tuned T5 models with passable performance into state-of-the-art solutions.