Structural Information Preserving for Graph-to-Text Generation

@article{Song2020StructuralIP,
  title={Structural Information Preserving for Graph-to-Text Generation},
  author={Linfeng Song and Ante Wang and Jinsong Su and Yue Zhang and Kun Xu and Yubin Ge and Dong Yu},
  journal={ArXiv},
  year={2020},
  volume={abs/2102.06749}
}
The task of graph-to-text generation aims at producing sentences that preserve the meaning of input graphs. As a crucial defect, the current state-of-the-art models may mess up or even drop the core structural information of input graphs when generating outputs. We propose to tackle this problem by leveraging richer training signals that can guide our model for preserving input information. In particular, we introduce two types of autoencoding losses, each individually focusing on different… 
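
The abstract describes adding autoencoding losses alongside the usual generation objective so the model is pushed to retain the input graph's structure. Below is a minimal sketch of that idea, not the authors' released code: the `model` interface (`encode`, `decode`, `node_head`, `edge_head`, `pad_id`), the batch fields, and the weights `alpha`/`beta` are all assumed names, and the two reconstruction views shown (node labels and edge labels) are illustrative stand-ins rather than the paper's exact losses.

# Hypothetical sketch: graph-to-text generation loss combined with two
# illustrative autoencoding losses over a shared graph encoder.
import torch.nn.functional as F

def structure_preserving_loss(model, batch, alpha=0.5, beta=0.5):
    # Shared graph encoder over the input graph (assumed interface).
    enc = model.encode(batch.graph)

    # 1) Primary graph-to-text loss (teacher forcing, shifted targets).
    gen_logits = model.decode(enc, batch.text_ids[:, :-1])
    loss_gen = F.cross_entropy(
        gen_logits.reshape(-1, gen_logits.size(-1)),
        batch.text_ids[:, 1:].reshape(-1),
        ignore_index=model.pad_id,
    )

    # 2) Autoencoding view A: reconstruct node labels from encoder states.
    node_logits = model.node_head(enc)
    loss_node = F.cross_entropy(
        node_logits.reshape(-1, node_logits.size(-1)),
        batch.node_ids.reshape(-1),
        ignore_index=model.pad_id,
    )

    # 3) Autoencoding view B: predict edge/relation labels between nodes.
    edge_logits = model.edge_head(enc)
    loss_edge = F.cross_entropy(
        edge_logits.reshape(-1, edge_logits.size(-1)),
        batch.edge_ids.reshape(-1),
        ignore_index=model.pad_id,
    )

    # All three terms are back-propagated together, so the richer training
    # signal from the autoencoding views also calibrates the shared encoder.
    return loss_gen + alpha * loss_node + beta * loss_edge

This only shows the multi-task shape of the objective; the paper's actual reconstruction views and loss combination may differ.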

Citations

JointGT: Graph-Text Joint Representation Learning for Text Generation from Knowledge Graphs
TLDR
A graph-text joint representation learning model called JointGT is proposed, which devises a structure-aware semantic aggregation module plugged into each Transformer layer to preserve the graph structure during encoding, and which achieves new state-of-the-art performance on various KG-to-text datasets.
Investigating Pretrained Language Models for Graph-to-Text Generation
TLDR
It is suggested that the PLMs benefit from similar facts seen during pretraining or fine-tuning, such that they perform well even when the input graph is reduced to a simple bag of node and edge labels.
LOGEN: Few-shot Logical Knowledge-Conditioned Text Generation with Self-training
TLDR
This paper proposes a unified framework for logical knowledge-conditioned text generation in the few-shot setting, which uses only a few seed logical forms and samples pseudo logical forms based on content and structure consistency.
Generalized Shortest-Paths Encoders for AMR-to-Text Generation
TLDR
This work widens the receptive field of a graph encoder by exposing it to all possible graph paths, studying how this affects performance across levels of AMR connectivity, and adopts recent efforts in applying Transformer self-attention to graphs to allow global feature propagation.
One SPRING to Rule Them Both: Symmetric AMR Semantic Parsing and Generation without a Complex Pipeline
TLDR
This paper casts Text-to-AMR and AMR-to-Text as a symmetric transduction task and shows that by devising a careful graph linearization and extending a pretrained encoder-decoder model, it is possible to obtain state-of-the-art performance on both tasks using the very same seq2seq approach, i.e., SPRING (Symmetric PaRsIng aNd Generation).
Variational Graph Autoencoding as Cheap Supervision for AMR Coreference Resolution
TLDR
This work proposes a general pretraining method using variational graph autoencoder (VGAE) for AMR coreference resolution, which can leverage any general AMR corpus and even automatically parsed AMR data.
DART: Open-Domain Structured Data Record to Text Generation
TLDR
The dataset construction framework effectively merges heterogeneous sources from open-domain semantic parsing and spoken dialogue systems using techniques including tree ontology annotation, question-answer pair to declarative sentence conversion, and predicate unification, all with minimal post-editing.
Text2Event: Controllable Sequence-to-Structure Generation for End-to-end Event Extraction
TLDR
Experimental results show that, by uniformly modeling all tasks in a single model and universally predicting different labels, the Text2Event method can achieve competitive performance using only record-level annotations in both supervised learning and transfer learning settings.
Constrained Text Generation with Global Guidance - Case Study on CommonGen
TLDR
This paper considers using reinforcement learning to address the limitations of constrained text generation, measuring global constraints (including fluency, common sense, and concept coverage) with a comprehensive score that serves as the reward for reinforcement learning.
A Survey : Neural Networks for AMR-to-Text
TLDR
Neural network-based methods are detailed, the latest progress in AMR-to-Text (including AMR reconstruction and decoder optimization) is presented, and a summary of current techniques and an outlook for future research are provided.
...

References

SHOWING 1-10 OF 59 REFERENCES
Enhancing AMR-to-Text Generation with Dual Graph Representations
TLDR
A novel graph-to-sequence model that encodes different but complementary perspectives of the structural information contained in the AMR graph, learning parallel top-down and bottom-up representations of nodes capturing contrasting views of the graph.
A Graph-to-Sequence Model for AMR-to-Text Generation
TLDR
This work introduces a neural graph-to-sequence model, using a novel LSTM structure for directly encoding graph-level semantics, and shows superior results to existing methods in the literature.
AMR-To-Text Generation with Graph Transformer
TLDR
This paper proposes a novel graph-to-sequence model (Graph Transformer) that directly encodes the AMR graphs and learns the node representations and outperforms the state-of-the-art neural approach.
Modeling Graph Structure in Transformer for Better AMR-to-Text Generation
TLDR
This paper proposes a novel structure-aware self-attention approach to better model the relations between indirectly connected concepts in the state-of-the-art seq2seq model, i.e. the Transformer.
Text Generation from Knowledge Graphs with Graph Transformers
TLDR
This work addresses the problem of generating coherent multi-sentence texts from the output of an information extraction system, and in particular a knowledge graph by introducing a novel graph transforming encoder which can leverage the relational structure of such knowledge graphs without imposing linearization or hierarchical constraints.
SQL-to-Text Generation with Graph-to-Sequence Model
TLDR
This paper proposes a graph-to-sequence model to encode the global structure information into node embeddings that can effectively learn the correlation between the SQL query pattern and its interpretation.
Graph-to-Sequence Learning using Gated Graph Neural Networks
TLDR
This work proposes a new model that encodes the full structural information contained in the graph, coupling the recently proposed Gated Graph Neural Networks with an input transformation that allows nodes and edges to have their own hidden representations, while tackling the parameter explosion problem present in previous work.
GPT-too: A Language-Model-First Approach for AMR-to-Text Generation
TLDR
An alternative approach is proposed that combines a strong pre-trained language model with cycle-consistency-based re-scoring and outperforms all previous techniques on the English LDC2017T10 dataset, including recent transformer architectures.
Structural Neural Encoders for AMR-to-text Generation
TLDR
The extent to which reentrancies (nodes with multiple parents) have an impact on AMR-to-text generation is investigated by comparing graph encoders to tree encoders, where reentrancies are not preserved.
Improving Language Generation from Feature-Rich Tree-Structured Data with Relational Graph Convolutional Encoders
TLDR
The core innovation in this approach is using a graph convolutional network to encode the dependency trees given as input, achieving third place without data augmentation techniques or additional components (such as a re-ranker).
...