Corpus ID: 218869898

AMR Quality Rating with a Lightweight CNN

@inproceedings{Opitz2020AMRQR,
  title={AMR Quality Rating with a Lightweight CNN},
  author={Juri Opitz},
  booktitle={AACL},
  year={2020}
}
  • J. Opitz
  • Published in AACL, 25 May 2020
  • Computer Science
Structured semantic sentence representations such as Abstract Meaning Representations (AMRs) are potentially useful in various NLP tasks. However, the quality of automatic parses can vary greatly, which jeopardizes their usefulness. This can be mitigated by models that can accurately rate AMR quality in the absence of costly gold data, allowing us to inform downstream systems about an incorporated parse's trustworthiness or to select among different candidate parses. In this work, we propose to…
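The abstract stops short of the method details, but the general setup it describes, regressing a quality score for a parse without gold data, can be sketched as a small CNN. The following is a minimal PyTorch sketch under assumed inputs (a token-id encoding of a linearized AMR and a bounded quality target such as estimated Smatch F1); the architecture and hyperparameters are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class AMRQualityCNN(nn.Module):
    """Illustrative sketch, not the paper's exact architecture: a small CNN
    that maps a fixed-size token encoding of a linearized AMR parse to a
    quality score in [0, 1], e.g. an estimate of its Smatch F1."""

    def __init__(self, vocab_size: int, emb_dim: int = 64, max_len: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.conv = nn.Sequential(
            nn.Conv1d(emb_dim, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),  # global max pool over graph positions
        )
        self.head = nn.Sequential(
            nn.Linear(128, 32), nn.ReLU(),
            nn.Linear(32, 1), nn.Sigmoid(),  # bounded quality score
        )

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, max_len) integer ids of a linearized AMR graph
        x = self.embed(token_ids).transpose(1, 2)  # (batch, emb_dim, max_len)
        x = self.conv(x).squeeze(-1)               # (batch, 128)
        return self.head(x).squeeze(-1)            # (batch,) predicted quality

# Training would regress against quality labels computed where gold AMRs
# exist (e.g. Smatch of automatic vs. gold parses), then rate unseen parses.
model = AMRQualityCNN(vocab_size=10_000)
scores = model(torch.randint(1, 10_000, (4, 128)))  # four dummy parses
loss = nn.functional.mse_loss(scores, torch.tensor([0.90, 0.40, 0.70, 0.55]))
```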
Towards a Decomposable Metric for Explainable Evaluation of Text Generation from AMR
Systems that generate sentences from (abstract) meaning representations (AMRs) are typically evaluated using automatic surface-matching metrics that compare the generated texts to reference texts. This work instead proposes Mβ, a decomposable metric that builds on two pillars, one of which measures the linguistic quality of the generated text, and shows that fulfilling both principles offers benefits for AMR-to-text evaluation, including explainability of scores.
Weisfeiler-Leman in the BAMBOO: Novel AMR Graph Metrics and a Benchmark for AMR Graph Similarity
This work introduces BAMBOO (a Benchmark for AMR Metrics based on Overt Objectives), the first benchmark to support empirical assessment of graph-based MR similarity metrics; results indicate that the novel metrics may serve as a strong baseline for future work.
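The Weisfeiler-Leman idea referenced in that title can be sketched in a few lines: iteratively relabel each node with its own label plus its neighbors' labels, then compare the label multisets of two graphs. Below is a minimal, self-contained Python sketch of that relabeling scheme; it is not BAMBOO's implementation, and the toy graphs and the histogram-intersection similarity are illustrative choices.

```python
from collections import Counter

def wl_histogram(graph: dict, labels: dict, iterations: int = 2) -> Counter:
    """Weisfeiler-Leman label histogram for a directed graph given as
    {node: [neighbor, ...]}; labels maps node -> initial label string."""
    hist = Counter(labels.values())
    current = dict(labels)
    for _ in range(iterations):
        refined = {}
        for node, neighbors in graph.items():
            # New label = old label combined with sorted neighbor labels
            neigh = sorted(current[n] for n in neighbors)
            refined[node] = current[node] + "|" + ",".join(neigh)
        current = refined
        hist.update(current.values())
    return hist

def wl_similarity(h1: Counter, h2: Counter) -> float:
    """Histogram-intersection similarity of two WL label histograms."""
    overlap = sum((h1 & h2).values())
    total = max(sum(h1.values()), sum(h2.values()))
    return overlap / total if total else 1.0

# Two toy AMR-like graphs: "the boy wants to sleep" vs. "the boy sleeps"
g1 = {"w": ["b", "s"], "b": [], "s": ["b"]}
l1 = {"w": "want-01", "b": "boy", "s": "sleep-01"}
g2 = {"s": ["b"], "b": []}
l2 = {"s": "sleep-01", "b": "boy"}
print(wl_similarity(wl_histogram(g1, l1), wl_histogram(g2, l2)))
```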

References

Showing 1–10 of 83 references
Semantic Neural Machine Translation Using AMR
Experiments show that incorporating AMR (Abstract Meaning Representation) as additional knowledge can significantly improve a strong attention-based sequence-to-sequence neural translation model.
GPT-too: A Language-Model-First Approach for AMR-to-Text Generation
An alternative approach is proposed that combines a strong pre-trained language model with cycle-consistency-based re-scoring; it outperforms all previous techniques on the English LDC2017T10 dataset, including recent transformer architectures.
AMR Parsing as Graph Prediction with Latent Alignment
A neural parser is introduced that treats alignments as latent variables within a joint probabilistic model of concepts, relations, and alignments, showing that joint modeling is preferable to a pipeline of aligning and parsing.
Automatic Quality Estimation for Natural Language Generation: Ranting (Jointly Rating and Ranking)
We present a recurrent neural network based system for automatic quality estimation of natural language generation (NLG) outputs, which jointly learns to assign numerical ratings to individual outputs and to rank pairs of outputs.
Pushing the Limits of Translation Quality Estimation
A new, carefully engineered neural model is stacked into a rich feature-based word-level quality estimation system, and the output of an automatic post-editing system is used as an extra feature, obtaining striking results on WMT16.
A Graph-to-Sequence Model for AMR-to-Text Generation
This work introduces a neural graph-to-sequence model, using a novel LSTM structure for directly encoding graph-level semantics, and shows superior results to existing methods in the literature.
Automatic Accuracy Prediction for AMR Parsing
A neural end-to-end multi-output regression model is developed; the model's capacity to predict AMR parse accuracies is evaluated, and whether it can reliably assign high scores to gold parses is tested (a rough sketch of the multi-output regression setup follows this reference list).
AMR Parsing as Sequence-to-Graph Transduction
This work proposes an attention-based model that treats AMR parsing as sequence-to-graph transduction and can be trained effectively with limited amounts of labeled AMR data.
Neural AMR: Sequence-to-Sequence Models for Parsing and Generation
This work presents a novel training procedure that lifts the limitations posed by the relatively small amount of labeled data and the non-sequential nature of AMR graphs, and presents strong evidence that sequence-based AMR models are robust to ordering variations in graph-to-sequence conversions.
An Incremental Parser for Abstract Meaning Representation
A transition-based parser for AMR is described that parses sentences left-to-right in linear time; it is competitive with the state of the art on the LDC2015E86 dataset and outperforms state-of-the-art parsers at recovering named entities and handling polarity.
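As a rough illustration of the setup in "Automatic Accuracy Prediction for AMR Parsing" above: a single multi-output regressor maps features of a (sentence, parse) pair to several quality scores at once. The features, targets, and model below are illustrative assumptions, not the paper's end-to-end neural architecture; scikit-learn's MLPRegressor is used here because it supports multi-output regression out of the box.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy setup: each row holds illustrative features of a (sentence, AMR parse)
# pair, e.g. sentence length, graph size, node/token ratio, parser confidence.
X = np.array([
    [12, 15, 1.25, 0.91],
    [30, 48, 1.60, 0.42],
    [ 8,  9, 1.13, 0.88],
])
# Multi-output targets: e.g. overall Smatch plus fine-grained sub-scores
# (named entities, negation, ...); the actual target set is the paper's.
y = np.array([
    [0.85, 0.90, 0.80],
    [0.40, 0.35, 0.50],
    [0.82, 0.88, 0.75],
])

# One model predicts the whole vector of accuracy scores per parse.
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, y)
print(model.predict([[20, 26, 1.30, 0.70]]))  # predicted score vector
```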