Pushing the Limits of AMR Parsing with Self-Learning

@inproceedings{Lee2020PushingTL,
  title={Pushing the Limits of AMR Parsing with Self-Learning},
  author={Young-Suk Lee and Ram{\'o}n Fern{\'a}ndez Astudillo and Tahira Naseem and Revanth Gangi Reddy and Radu Florian and Salim Roukos},
  booktitle={Findings of the Association for Computational Linguistics: EMNLP 2020},
  year={2020}
}
Abstract Meaning Representation (AMR) parsing has experienced a notable growth in performance in the last two years, due both to the impact of transfer learning and the development of novel architectures specific to AMR. At the same time, self-learning techniques have helped push the performance boundaries of other natural language processing applications, such as machine translation or question answering. In this paper, we explore different ways in which trained models can be applied to… 
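The abstract describes applying already-trained parsers back to unlabeled text (self-learning). As a rough illustration only, the sketch below shows one generic self-training round in Python; the `train` and `keep` helpers, the string-based graph representation, and the overall structure are assumptions for illustration, not the paper's actual implementation.

```python
# Minimal sketch of a self-learning round for AMR parsing (hypothetical helpers):
# a parser trained on gold data annotates unlabeled sentences to produce silver
# graphs, which are filtered and mixed back into training.

from typing import Callable, List, Tuple

Sentence = str
AmrGraph = str  # e.g., a PENMAN-serialized graph


def self_learning_round(
    gold: List[Tuple[Sentence, AmrGraph]],
    unlabeled: List[Sentence],
    train: Callable[[List[Tuple[Sentence, AmrGraph]]], Callable[[Sentence], AmrGraph]],
    keep: Callable[[Sentence, AmrGraph], bool],
) -> Callable[[Sentence], AmrGraph]:
    """Train on gold, self-annotate unlabeled text, filter, retrain on gold + silver."""
    parser = train(gold)                                 # teacher trained on gold AMR
    silver = [(s, parser(s)) for s in unlabeled]         # silver annotations
    silver = [(s, g) for s, g in silver if keep(s, g)]   # quality filter on silver pairs
    return train(gold + silver)                          # student trained on gold + silver
```

The `keep` filter is where approaches differ; a check of this kind could, for instance, use a roundtrip or cycle-consistency criterion in the spirit of the GPT-too and synthetic-QA works cited below, but the exact criteria used in the paper are not shown in this excerpt.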

Citations

Maximum Bayes Smatch Ensemble Distillation for AMR Parsing
TLDR
This paper proposes to overcome the diminishing returns of silver data by combining Smatch-based ensembling techniques with ensemble distillation, and shows that this can produce gains rivaling those of human-annotated data for QALD-9 and achieve a new state of the art for BioAMR.
ATP: AMRize Then Parse! Enhancing AMR Parsing with PseudoAMRs
TLDR
This work proposes a principled method to involve auxiliary tasks to boost AMR parsing and shows that this method achieves new state-of-the-art performance on different benchmarks especially in topology-related scores.
Inducing and Using Alignments for Transition-based AMR Parsing
TLDR
A neural aligner for AMR is proposed that learns node-to-word alignments without relying on complex pipelines, and a tighter integration of aligner and parser training is explored by considering a distribution over oracle action sequences arising from aligner uncertainty.
Levi Graph AMR Parser using Heterogeneous Attention
TLDR
A novel approach to AMR parsing is presented by combining heterogeneous data as one input to a transformer to learn attention, and use only attention matrices from the transformer to predict all elements in AMR graphs.
Hierarchical Curriculum Learning for AMR Parsing
TLDR
A Hierarchical Curriculum Learning (HCL) framework with Structure-level (SC) and Instance-level Curricula (IC) that reduces the difficulty of learning complex structures, so that the flat model can better adapt to the AMR hierarchy.
Making Better Use of Bilingual Information for Cross-Lingual AMR Parsing
TLDR
This work introduces bilingual input, namely translated texts alongside the non-English texts, to enable the model to predict more accurate concepts, and introduces an auxiliary task that requires the decoder to predict the English sequences at the same time.
Ensembling Graph Predictions for AMR Parsing
TLDR
The experimental results demonstrate that the proposed approach can combine the strength of state-of-the-art AMR parsers to create new predictions that are more accurate than any individual models in five standard benchmark datasets.
Bootstrapping Multilingual AMR with Contextual Word Alignments
TLDR
A novel technique for foreign-text-to-English AMR alignment, using the contextual word alignment between English and foreign language tokens, which achieves a highly competitive performance that surpasses the best published results for German, Italian, Spanish and Chinese.
Structure-aware Fine-tuning of Sequence-to-sequence Transformers for Transition-based AMR Parsing
TLDR
This work departs from a pointer-based transition system and proposes a simplified transition set, designed to better exploit pre-trained language models for structured fine-tuning, that retains the desirable properties of previous transition-based approaches while being simpler and reaching a new parsing state of the art for AMR 2.0.
AMR Parsing with Action-Pointer Transformer
TLDR
This work proposes a transition-based system that combines hard attention over sentences with a target-side action-pointer mechanism to decouple source tokens from node representations and address alignments, and shows that the action-pointer approach leads to increased expressiveness and attains large gains over the best transition-based AMR parser under very similar conditions.

References

SHOWING 1-10 OF 25 REFERENCES
Neural AMR: Sequence-to-Sequence Models for Parsing and Generation
TLDR
This work presents a novel training procedure that addresses both the relatively limited amount of labeled data and the non-sequential nature of AMR graphs, and presents strong evidence that sequence-based AMR models are robust to ordering variations of graph-to-sequence conversions.
AMR Parsing as Sequence-to-Graph Transduction
TLDR
This work proposes an attention-based model that treats AMR parsing as sequence-to-graph transduction, and it can be effectively trained with limited amounts of labeled AMR data.
GPT-too: A Language-Model-First Approach for AMR-to-Text Generation
TLDR
An alternative approach that combines a strong pre-trained language model with cycle-consistency-based re-scoring is proposed that outperforms all previous techniques on the English LDC2017T10 dataset, including the recent use of transformer architectures.
Rewarding Smatch: Transition-Based AMR Parsing with Reinforcement Learning
TLDR
This work enriches the Stack-LSTM transition-based AMR parser by augmenting training with policy learning that rewards the Smatch score of sampled graphs, and presents an in-depth study ablating each of the parser's new components.
Neural Semantic Parsing by Character-based Translation: Experiments with Abstract Meaning Representations
TLDR
Five different approaches are examined to improve the character-level translation method for neural semantic parsing on a large corpus of sentences annotated with Abstract Meaning Representations (AMRs), leading to an F-score of 71.0 on holdout data, which is state-of-the-art in AMR parsing.
Synthetic QA Corpora Generation with Roundtrip Consistency
TLDR
A novel method of generating synthetic question answering corpora is introduced by combining models of question generation and answer extraction, and by filtering the results to ensure roundtrip consistency, establishing a new state-of-the-art on SQuAD2 and NQ.
AMR Parsing using Stack-LSTMs
TLDR
A transition-based AMR parser that directly generates AMR parses from plain text using Stack-LSTMs to represent the parser state and make decisions greedily is presented.
AMR Parsing as Graph Prediction with Latent Alignment
TLDR
A neural parser is introduced which treats alignments as latent variables within a joint probabilistic model of concepts, relations and alignments, and shows that joint modeling is preferable to a pipeline that first aligns and then parses.
An Incremental Parser for Abstract Meaning Representation
TLDR
A transition-based parser for AMR that parses sentences left-to-right, in linear time is described and it is shown that this parser is competitive with the state of the art on the LDC2015E86 dataset and that it outperforms state-of-the-art parsers for recovering named entities and handling polarity.
Language Models are Unsupervised Multitask Learners
TLDR
It is demonstrated that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText, suggesting a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.