KGPT: Knowledge-Grounded Pre-Training for Data-to-Text Generation

@inproceedings{Chen2020KGPTKP,
  title={KGPT: Knowledge-Grounded Pre-Training for Data-to-Text Generation},
  author={Wenhu Chen and Yu Su and Xifeng Yan and William Yang Wang},
  booktitle={Conference on Empirical Methods in Natural Language Processing},
  year={2020}
}
Data-to-text generation has recently attracted substantial interest due to its wide range of applications. Existing methods have shown impressive performance on an array of tasks. However, they rely on a significant amount of labeled data for each task, which is costly to acquire and thus limits their application to new tasks and domains. In this paper, we propose to leverage pre-training and transfer learning to address this issue. We propose knowledge-grounded pre-training (KGPT), which consists of two parts: 1) a general knowledge-grounded generation model to generate knowledge-enriched text, and 2) a pre-training paradigm on a massive knowledge-grounded text corpus crawled from the web.

Curriculum-Based Self-Training Makes Better Few-Shot Learners for Data-to-Text Generation

This work proposes a novel method called Curriculum-Based Self-Training (CBST), which can outperform fine-tuning and task-adaptive pre-training methods, and achieve state-of-the-art performance in the few-shot setting of data-to-text generation.

GAP: A Graph-aware Language Model Framework for Knowledge Graph-to-Text Generation

It is demonstrated that by fusing graph-aware elements into existing pre-trained language models, this work is able to outperform state-of-the-art models and close the gap imposed by additional pre-training tasks.

ASDOT: Any-Shot Data-to-Text Generation with Pretrained Language Models

Experimental results show that ASDOT consistently achieves significant improvements over baselines, e.g., a 30.81 BLEU gain on the DART dataset under the zero-shot setting, and generalizes to unseen predicates and out-of-domain data.

Neural Pipeline for Zero-Shot Data-to-Text Generation

This work proposes to generate text by transforming single-item descriptions with a sequence of modules trained for general-domain text-based operations (ordering, aggregation, and paragraph compression) on WikiFluent, a synthetic corpus built from English Wikipedia.

Lexically-constrained Text Generation through Commonsense Knowledge Extraction and Injection

This paper explores how commonsense knowledge graphs can enhance model performance, with respect to commonsense reasoning and lexically-constrained decoding, and proposes strategies for enhancing the semantic correctness of the generated text.

LOGEN: Few-shot Logical Knowledge-Conditioned Text Generation with Self-training

Experimental results demonstrate that the proposed unified framework for logical knowledge-conditioned text generation in the few-shot setting obtains better few-shot performance than baselines.

Few-Shot Table-to-Text Generation with Prototype Memory

Experimental results on three benchmark datasets with three state-of-the-art models demonstrate that the proposed framework significantly improves the model performance across various evaluation metrics.

JointGT: Graph-Text Joint Representation Learning for Text Generation from Knowledge Graphs

A graph-text joint representation learning model called JointGT is proposed, which devises a structure-aware semantic aggregation module plugged into each Transformer layer to preserve the graph structure during encoding, and achieves new state-of-the-art performance on various KG-to-text datasets.
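The structure-aware aggregation described above can be pictured as attention that is restricted to graph neighbors inside a Transformer layer. The sketch below is a minimal, hypothetical illustration of that general idea, not JointGT's actual module; it assumes node representations and an adjacency matrix as inputs.

```python
# Hypothetical sketch, not JointGT's implementation: attention restricted to
# graph neighbors so that the KG structure is preserved during encoding.
import torch
import torch.nn.functional as F


def graph_masked_attention(q, k, v, adjacency):
    """q, k, v: (batch, n_nodes, d); adjacency: (batch, n_nodes, n_nodes) with
    1 where an edge (or self-loop) exists and 0 elsewhere."""
    d = q.size(-1)
    scores = q @ k.transpose(-1, -2) / d ** 0.5                  # scaled dot-product scores
    scores = scores.masked_fill(adjacency == 0, float("-inf"))   # drop non-edges
    weights = F.softmax(scores, dim=-1)                          # attend only over neighbors
    return weights @ v


# Toy usage: 3 nodes, self-loops plus one edge between nodes 0 and 1.
x = torch.randn(1, 3, 8)
adj = torch.tensor([[[1, 1, 0], [1, 1, 0], [0, 0, 1]]])
print(graph_masked_attention(x, x, x, adj).shape)  # torch.Size([1, 3, 8])
```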

Knowledge Graph Based Synthetic Corpus Generation for Knowledge-Enhanced Language Model Pre-training

It is shown that verbalizing a comprehensive, encyclopedic KG like Wikidata is a way to integrate structured KGs and natural language corpora, and that it carries the further advantages of improved factual accuracy and reduced toxicity in the resulting language model.

MVP: Multi-task Supervised Pre-training for Natural Language Generation

This work collects a large-scale natural language generation corpus, MVPCorpus, from 77 datasets over 11 diverse NLG tasks, and unifies these examples into a general text-to-text format to pre-train the text generation model MVP in a supervised manner.
...

References

Showing 1-10 of 57 references

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

This systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks and achieves state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more.

Enhancing Neural Data-To-Text Generation Models with External Background Knowledge

This paper enhances neural data-to-text models with external knowledge in a simple but effective way to improve the fidelity of generated text: the model attends to relevant external knowledge, encoded as a temporary memory, and combines this knowledge with the context representation of the data before generating words.

Few-Shot NLG with Pre-Trained Language Model

This work proposes the new task of few-shot natural language generation and a simple yet effective approach that achieves very reasonable performance, outperforming the strongest baseline by an average of over 8.0 BLEU points.

Key Fact as Pivot: A Two-Stage Model for Low Resource Table-to-Text Generation

A novel model is proposed that separates table-to-text generation into two stages, key fact prediction and surface realization, which needs far less annotated data and can be trained with a pseudo-parallel corpus.

Order-Planning Neural Text Generation From Structured Data

This paper proposes an order-planning text generation model in which order information is explicitly captured by link-based attention, and a self-adaptive gate combines the link-based attention with traditional content-based attention.
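The self-adaptive gate mentioned above can be thought of as a learned scalar that mixes two attention distributions. The following is a minimal, hypothetical sketch (not the paper's code), assuming a decoder hidden state and precomputed link-based and content-based attention weights as inputs.

```python
# Hypothetical sketch, not the paper's implementation: a self-adaptive gate
# that mixes link-based (order) attention with content-based attention.
import torch
import torch.nn as nn


class SelfAdaptiveGate(nn.Module):
    def __init__(self, hidden_size: int):
        super().__init__()
        self.gate = nn.Linear(hidden_size, 1)  # gate conditioned on the decoder state

    def forward(self, dec_state, content_attn, link_attn):
        # dec_state:    (batch, hidden_size)  current decoder hidden state
        # content_attn: (batch, num_fields)   content-based attention weights
        # link_attn:    (batch, num_fields)   link-based attention weights
        z = torch.sigmoid(self.gate(dec_state))  # (batch, 1), gate value in [0, 1]
        # A convex combination of two distributions is itself a distribution.
        return z * link_attn + (1.0 - z) * content_attn


# Toy usage with random inputs.
gate = SelfAdaptiveGate(hidden_size=8)
dec = torch.randn(2, 8)
content = torch.softmax(torch.randn(2, 5), dim=-1)
link = torch.softmax(torch.randn(2, 5), dim=-1)
print(gate(dec, content, link).shape)  # torch.Size([2, 5])
```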

Variational Template Machine for Data-to-Text Generation

This paper proposes the variational template machine (VTM), a novel method to generate text descriptions from data tables, and utilizes both small parallel data and large raw text without aligned tables to enrich the template learning.

Challenges in Data-to-Document Generation

A new, large-scale corpus of data records paired with descriptive documents is introduced, a series of extractive evaluation methods for analyzing performance are proposed, and baseline results are obtained using current neural generation methods.

Hierarchical Encoder with Auxiliary Supervision for Neural Table-to-Text Generation: Learning Better Representation for Tables

This work proposes a two-level hierarchical encoder with coarse-to-fine attention to handle the attribute-value structure of tables, along with three joint tasks beyond the primary encoder-decoder learning, namely an auxiliary sequence labeling task, a text autoencoder, and multi-label classification, as auxiliary supervision for the table encoder.

ToTTo: A Controlled Table-To-Text Generation Dataset

We present ToTTo, an open-domain English table-to-text dataset with over 120,000 training examples that proposes a controlled generation task: given a Wikipedia table and a set of highlighted table cells, produce a one-sentence description.

Table-to-text Generation by Structure-aware Seq2seq Learning

The attention visualizations and case studies show that the novel structure-aware seq2seq architecture, which consists of a field-gating encoder and a description generator with dual attention, is capable of generating coherent and informative descriptions based on a comprehensive understanding of both the content and the structure of a table.
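The dual-attention idea above, attending over both cell contents and field (attribute) names, can be sketched as two attention distributions combined multiplicatively. This is a hypothetical, minimal illustration under those assumptions, not the paper's implementation; the tensor names and shapes are invented for the example.

```python
# Hypothetical sketch, not the paper's code: word-level attention reweighted by
# field-level attention, then renormalized into a single distribution over cells.
import torch
import torch.nn.functional as F


def dual_attention(dec_state, word_keys, field_keys):
    # dec_state:  (batch, hidden)           decoder query
    # word_keys:  (batch, n_cells, hidden)  representations of cell values (words)
    # field_keys: (batch, n_cells, hidden)  representations of the cells' field names
    word_attn = F.softmax(torch.einsum("bh,bnh->bn", dec_state, word_keys), dim=-1)
    field_attn = F.softmax(torch.einsum("bh,bnh->bn", dec_state, field_keys), dim=-1)
    combined = word_attn * field_attn                     # agree on both content and field
    return combined / combined.sum(dim=-1, keepdim=True)  # renormalize to a distribution


# Toy usage: batch of 2 tables with 5 cells each.
dec = torch.randn(2, 8)
words = torch.randn(2, 5, 8)
fields = torch.randn(2, 5, 8)
print(dual_attention(dec, words, fields).sum(dim=-1))  # sums to 1 per example
```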
...