A Graph-to-Sequence Model for AMR-to-Text Generation

@inproceedings{Song2018AGM,
  title={A Graph-to-Sequence Model for AMR-to-Text Generation},
  author={Linfeng Song and Yue Zhang and Zhiguo Wang and Daniel Gildea},
  booktitle={ACL},
  year={2018}
}
The problem of AMR-to-text generation is to recover a text representing the same meaning as an input AMR graph. […] We introduce a neural graph-to-sequence model, using a novel LSTM structure for directly encoding graph-level semantics. On a standard benchmark, our model shows superior results to existing methods in the literature.
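
As a rough illustration of the graph-to-sequence idea, the NumPy sketch below shows one recurrent graph-state update in which every AMR node exchanges messages with its neighbors and is then updated with LSTM-style gates. The parameter shapes and the single shared message matrix are simplifications of our own; the published model additionally distinguishes incoming from outgoing edges and uses edge-label embeddings. Repeating the step several times lets information travel several hops before an attention-based decoder reads the node states.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def graph_lstm_step(h, c, x, edges, W_m, W_gates):
    """One parallel update of all node states.
    h, c, x: [num_nodes, d] hidden states, cell states, node embeddings.
    edges:   list of (src, tgt) node index pairs from the AMR graph.
    W_m:     [d, d] shared message transform (a simplification).
    W_gates: [2 * d, 4 * d] joint gate parameters."""
    num_nodes, d = h.shape
    m = np.zeros((num_nodes, d))            # messages aggregated per node
    for src, tgt in edges:                  # pass information both along
        m[tgt] += h[src] @ W_m              # and against the edge direction
        m[src] += h[tgt] @ W_m
    z = np.concatenate([x, m], axis=1) @ W_gates
    i, f, o, g = np.split(z, 4, axis=1)     # input, forget, output, candidate
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new
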

Citations

AMR-To-Text Generation with Graph Transformer
TLDR
This paper proposes a novel graph-to-sequence model (Graph Transformer) that directly encodes AMR graphs, learns node representations, and outperforms the state-of-the-art neural approach.
Modeling Graph Structure in Transformer for Better AMR-to-Text Generation
TLDR
This paper proposes a novel structure-aware self-attention approach to better model the relations between indirectly connected concepts in the state-of-the-art seq2seq model, i.e. the Transformer.
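
For context, structure-aware self-attention of this kind typically injects a learned relation representation r[i, j] (e.g. derived from the label path between concepts i and j) into the attention score. The sketch below follows the general relation-aware attention recipe with illustrative parameter names; it is only an approximation of the cited model.

import numpy as np

def structure_aware_attention(x, r, W_q, W_k, W_v):
    """x: [n, d] concept representations; r: [n, n, d_k] relation embeddings;
    W_q, W_k, W_v: [d, d_k] projections (single head for brevity)."""
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    # score(i, j) = q_i . (k_j + r_ij) / sqrt(d_k)
    scores = np.einsum("id,ijd->ij", q, k[None, :, :] + r) / np.sqrt(q.shape[1])
    scores -= scores.max(axis=1, keepdims=True)     # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)         # softmax over positions j
    return attn @ v
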
GPT-too: A Language-Model-First Approach for AMR-to-Text Generation
TLDR
An alternative approach is proposed that combines a strong pre-trained language model with cycle-consistency-based re-scoring; it outperforms all previous techniques on the English LDC2017T10 dataset, including recent transformer architectures.
Structural Neural Encoders for AMR-to-text Generation
TLDR
The extent to which reentrancies (nodes with multiple parents) have an impact on AMR-to-text generation is investigated by comparing graph encoders to tree encoders, where reentrancies are not preserved.
Enhancing AMR-to-Text Generation with Dual Graph Representations
TLDR
A novel graph-to-sequence model that encodes different but complementary perspectives of the structural information contained in the AMR graph, learning parallel top-down and bottom-up representations of nodes capturing contrasting views of the graph.
Structural Information Preserving for Graph-to-Text Generation
TLDR
This work introduces two types of autoencoding losses, each individually focusing on different aspects (a.k.a. views) of input graphs, that can guide the model for preserving input information via multi-task training.
Structural Adapters in Pretrained Language Models for AMR-to-Text Generation
TLDR
The benefits of explicitly encoding graph structure into PLMs using StructAdapt are empirically shown: it outperforms the state of the art on two AMR-to-text datasets while training only 5.1% of the PLM parameters.
A Survey : Neural Networks for AMR-to-Text
TLDR
Neural network-based methods are detailed, the latest progress in AMR-to-Text (e.g., AMR reconstruction and decoder optimization) is presented, and a summary of current techniques and an outlook for future research are provided.
Graph Pre-training for AMR Parsing and Generation
TLDR
This work investigates graph self-supervised training to improve the structure awareness of PLMs over AMR graphs and introduces two graph auto-encoding strategies for graph-to-graph pre-training and four tasks to integrate text and graph information during pre-training.
...

References

Showing 1-10 of 34 references
Neural AMR: Sequence-to-Sequence Models for Parsing and Generation
TLDR
This work presents a novel training procedure that can overcome the limitations of the relatively small amount of labeled data and the non-sequential nature of AMR graphs, and presents strong evidence that sequence-based AMR models are robust to ordering variations of graph-to-sequence conversions.
Get To The Point: Summarization with Pointer-Generator Networks
TLDR
A novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways, using a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator.
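
The mixture described in this summary can be written in a few lines; in the sketch below the variable names are illustrative and p_gen is treated as a given scalar rather than computed from the decoder state, as in the full model.

import numpy as np

def final_distribution(p_vocab, attention, src_ids, p_gen):
    """p_vocab: [vocab_size] generator distribution; attention: [src_len];
    src_ids: [src_len] vocabulary ids of source tokens; p_gen: scalar in [0, 1]."""
    p_copy = np.zeros_like(p_vocab)
    np.add.at(p_copy, src_ids, attention)   # scatter attention mass onto source words
    return p_gen * p_vocab + (1.0 - p_gen) * p_copy
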
Graph2Seq: Graph to Sequence Learning with Attention-based Neural Networks
TLDR
This work introduces a novel general end-to-end graph-to-sequence neural encoder-decoder model that maps an input graph to a sequence of vectors and uses an attention-based LSTM method to decode the target sequence from these vectors.
Encoding Sentences with Graph Convolutional Networks for Semantic Role Labeling
TLDR
A version of graph convolutional networks (GCNs), a recent class of neural networks operating on graphs, suited to model syntactic dependency graphs, is proposed, observing that GCN layers are complementary to LSTM ones.
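
A single, heavily simplified GCN layer over a dependency graph looks roughly as follows; the cited model additionally uses direction- and label-specific weights and edge gates, which this sketch omits, and the parameter names are our own.

import numpy as np

def gcn_layer(h, edges, W, W_self, b):
    """h: [num_nodes, d_in]; edges: (head, dependent) index pairs;
    W, W_self: [d_in, d_out]; b: [d_out]."""
    out = h @ W_self + b                 # self-loop term for every node
    for head, dep in edges:
        out[dep] += h[head] @ W          # message along the dependency arc
        out[head] += h[dep] @ W          # and in the reverse direction
    return np.maximum(out, 0.0)          # ReLU nonlinearity
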
RIGOTRIO at SemEval-2017 Task 9: Combining Machine Learning and Grammar Engineering for AMR Parsing and Generation
TLDR
This work strengthens the interlingual aspect of AMR by applying the multilingual Grammatical Framework (GF) to AMR-to-text generation and combines it with the state-of-the-art JAMR Generator to see whether the combination increases or decreases overall performance.
Modeling Coverage for Neural Machine Translation
TLDR
This paper proposes coverage-based NMT, which maintains a coverage vector to keep track of the attention history and improves both translation quality and alignment quality over standard attention-based NMT.
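
The coverage idea amounts to keeping a running sum of past attention distributions and feeding it back into the attention scorer; the additive scorer and parameter shapes in the sketch below are assumptions for illustration only.

import numpy as np

def coverage_attention(dec_state, enc_states, coverage, v, W_s, W_h, w_c):
    """dec_state: [d_dec]; enc_states: [src_len, d_enc];
    coverage: [src_len] sum of previous attention vectors;
    W_h: [d_enc, d_a], W_s: [d_dec, d_a], w_c: [d_a], v: [d_a]."""
    feats = enc_states @ W_h + dec_state @ W_s + np.outer(coverage, w_c)
    scores = np.tanh(feats) @ v
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()                   # softmax over source positions
    return attn, coverage + attn         # attention plus updated coverage
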
Sequence to Sequence Learning with Neural Networks
TLDR
This paper presents a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure, and finds that reversing the order of the words in all source sentences improved the LSTM's performance markedly, because doing so introduced many short-term dependencies between the source and the target sentence, which made the optimization problem easier.
Leveraging Context Information for Natural Question Generation
TLDR
This work proposes a model that matches the answer with the passage before generating the question and shows that this model outperforms the existing state of the art using rich features.
Incorporating Copying Mechanism in Sequence-to-Sequence Learning
TLDR
This paper incorporates copying into neural network-based Seq2Seq learning and proposes a new model called CopyNet with an encoder-decoder structure, which integrates the regular way of generating words in the decoder with a new copying mechanism that can choose sub-sequences from the input and place them at proper positions in the output sequence.
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
TLDR
The Tree-LSTM is introduced, a generalization of LSTMs to tree-structured network topologies, which outperforms all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences and sentiment classification.
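
A condensed sketch of the child-sum variant is given below: a node sums its children's hidden states, computes one forget gate per child, and otherwise follows ordinary LSTM gating. Weight names are illustrative, biases are omitted, and the N-ary variant is not shown.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def child_sum_tree_lstm(x, child_h, child_c, W_iou, U_iou, W_f, U_f):
    """x: [d_in] node input; child_h, child_c: [num_children, d];
    W_iou: [d_in, 3 * d], U_iou: [d, 3 * d], W_f: [d_in, d], U_f: [d, d]."""
    h_sum = child_h.sum(axis=0)                       # child-sum aggregation
    i, o, u = np.split(x @ W_iou + h_sum @ U_iou, 3)  # input, output, candidate
    f = sigmoid(x @ W_f + child_h @ U_f)              # one forget gate per child
    c = sigmoid(i) * np.tanh(u) + (f * child_c).sum(axis=0)
    h = sigmoid(o) * np.tanh(c)
    return h, c
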
...