SQL-to-Text Generation with Graph-to-Sequence Model

@inproceedings{Xu2018SQLtoTextGW,
  title={SQL-to-Text Generation with Graph-to-Sequence Model},
  author={Kun Xu and Lingfei Wu and Zhiguo Wang and Mo Yu and Liwei Chen and Vadim Sheinin},
  booktitle={EMNLP},
  year={2018}
}
Previous work approaches the SQL-to-text generation task using vanilla Seq2Seq models, which may not fully capture the inherent graph-structured information in a SQL query. We instead propose a graph-to-sequence model, which can effectively learn the correlation between the SQL query pattern and its interpretation. Experimental results on the WikiSQL and Stackoverflow datasets show that our model significantly outperforms the Seq2Seq and Tree2Seq baselines, achieving state-of-the-art performance.
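As a rough illustration of the graph view of SQL that motivates this work, the sketch below turns a simple SELECT/FROM/WHERE query into the node and edge lists a graph encoder could consume. The node types and edge labels are invented for illustration, not the paper's exact graph construction.

```python
# Minimal sketch: convert a simple SQL query into a directed, edge-labelled
# graph. The node/edge vocabulary here is an assumption, not the paper's.

def sql_to_graph(select_cols, table, where_conds):
    """Build (nodes, edges) for: SELECT <cols> FROM <table> WHERE <conds>."""
    nodes, edges = [], []

    def add_node(label):
        nodes.append(label)
        return len(nodes) - 1

    root = add_node("SELECT")
    for col in select_cols:
        edges.append((root, add_node(col), "select-col"))

    edges.append((root, add_node(table), "from"))

    for col, op, val in where_conds:
        cond = add_node(op)                     # comparison operator node
        edges.append((root, cond, "where"))
        edges.append((cond, add_node(col), "left"))
        edges.append((cond, add_node(val), "right"))
    return nodes, edges

# SELECT name FROM city WHERE population > 1000000
nodes, edges = sql_to_graph(["name"], "city", [("population", ">", "1000000")])
print(nodes)  # ['SELECT', 'name', 'city', '>', 'population', '1000000']
print(edges)  # e.g. (0, 1, 'select-col'), (0, 2, 'from'), ...
```

A graph-to-sequence encoder would then embed each node by aggregating its neighbours over these edges, as sketched under the Graph2Seq reference below.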


Relation-Aware Graph Transformer for SQL-to-Text Generation

This work focuses on SQL-to-text, a task that maps a SQL query into the corresponding natural language question, and proposes a relation-aware graph transformer (RGT) to consider both the SQL structure and various relations simultaneously.

Structural Information Preserving for Graph-to-Text Generation

This work introduces two types of autoencoding losses, each focusing on a different aspect (a.k.a. view) of the input graph, which guide the model to preserve input information via multi-task training.

Let the Database Talk Back: Natural Language Explanations for SQL

This paper tackles the SQL-to-NL problem by extending Logos, a graph-based model, with improvements to the system's translation capabilities and to the fluency of the generated explanations.

Graph-to-Tree Neural Networks for Learning Structured Input-Output Translation with Applications to Semantic Parsing and Math Word Problem

This paper presents a novel graph-to-tree neural network, Graph2Tree, consisting of a graph encoder and a hierarchical tree decoder, which encodes an augmented graph-structured input and decodes a tree-structured output.

Toward Subgraph Guided Knowledge Graph Question Generation with Graph Neural Networks

This work proposes to apply a bidirectional Graph2Seq model to encode the KG subgraph, and enhances the RNN decoder with a node-level copying mechanism that allows node attributes to be copied directly from the input graph into the output question.

Improving a Graph-to-Tree Model for Solving Math Word Problems

An improved version of Graph2Tree is proposed that takes the characteristics of natural language into account to understand word problems and introduces a question embedding that lets the tree-based decoder generate an equation conditioned on the input question.

Improving Language Generation from Feature-Rich Tree-Structured Data with Relational Graph Convolutional Encoders

The core innovation in this approach is using a graph convolutional network to encode the input dependency trees, which achieves third rank without data augmentation techniques or additional components (such as a re-ranker).

Automatic Code Summarization via Multi-dimensional Semantic Fusing in GNN

This paper proposes a retrieval-augmented mechanism that enriches source code semantics with external knowledge, so that semantics can be better learned from the joint graph, and a novel attention-based dynamic graph that captures global interactions among nodes in the static graph.

Natural Question Generation with Reinforcement Learning Based Graph-to-Sequence Model

This paper proposes a novel reinforcement learning (RL) based graph-to-sequence (Graph2Seq) model for QG that outperforms previous state-of-the-art methods by a large margin on the SQuAD dataset.

Reinforcement Learning Based Graph-to-Sequence Model for Natural Question Generation

This model consists of a Graph2Seq generator with a novel Bidirectional Gated Graph Neural Network-based encoder to embed the passage, and a hybrid evaluator with a mixed objective combining both cross-entropy and RL losses to ensure the generation of syntactically and semantically valid text.
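The mixed objective described above can be written down compactly. Below is a minimal sketch of a self-critical variant, where the reward gap between a sampled output and a greedy-decoded baseline scales the sampled sequence's log-likelihood; the mixing weight gamma and the toy reward values are illustrative assumptions, not the paper's exact settings.

```python
# Hybrid loss sketch: weighted mix of cross-entropy and a policy-gradient
# (self-critical) term. All constants below are toy values.

def mixed_loss(ce_loss, logp_sampled, reward_sampled, reward_greedy, gamma=0.98):
    """gamma=1 recovers the pure RL term; gamma=0 recovers cross-entropy."""
    # Raise the likelihood of sampled outputs that beat the greedy baseline,
    # lower it for those that fall short.
    rl_loss = -(reward_sampled - reward_greedy) * logp_sampled
    return gamma * rl_loss + (1.0 - gamma) * ce_loss

# Toy numbers: sampled question scores 0.6 (e.g., BLEU) vs. 0.5 for greedy.
print(mixed_loss(ce_loss=2.3, logp_sampled=-4.0, reward_sampled=0.6, reward_greedy=0.5))
```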

References


A Graph-to-Sequence Model for AMR-to-Text Generation

This work introduces a neural graph-to-sequence model, using a novel LSTM structure for directly encoding graph-level semantics, and shows superior results to existing methods in the literature.

Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning

This work proposes Seq2SQL, a deep neural network for translating natural language questions to corresponding SQL queries, and releases WikiSQL, a dataset of 80,654 hand-annotated examples of questions and SQL queries distributed across 24,241 tables from Wikipedia that is an order of magnitude larger than comparable datasets.

Graph2Seq: Graph to Sequence Learning with Attention-based Neural Networks

This work introduces a novel general end-to-end graph-to-sequence neural encoder-decoder model that maps an input graph to a sequence of vectors and uses an attention-based LSTM method to decode the target sequence from these vectors.
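A bare-bones numpy rendering of that encoder idea, assuming one hop of mean aggregation with random weights; the actual Graph2Seq model uses learned parameters, multiple hops, and separate aggregation over forward and backward neighbours.

```python
import numpy as np

def one_hop_encode(node_feats, adjacency, W):
    """node_feats: (n, d); adjacency: (n, n) 0/1 matrix; W: (2d, d)."""
    deg = adjacency.sum(axis=1, keepdims=True).clip(min=1)
    neigh = adjacency @ node_feats / deg                 # mean of neighbours
    h = np.concatenate([node_feats, neigh], axis=1) @ W  # combine self + neighbours
    return np.tanh(h)                                    # (n, d) node embeddings

rng = np.random.default_rng(0)
n, d = 4, 8
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
H = one_hop_encode(rng.normal(size=(n, d)), A, rng.normal(size=(2 * d, d)))
graph_vec = H.max(axis=0)  # pooled graph embedding that can seed the decoder
```

The decoder would then attend over the rows of H at each generation step.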

Tree-to-Sequence Attentional Neural Machine Translation

This work proposes a novel end-to-end syntactic NMT model, extending a sequence-to-sequence model with the source-side phrase structure, which has an attention mechanism that enables the decoder to generate a translated word while softly aligning it with phrases as well as words of the source sentence.

Explaining structured queries in natural language

This paper represents various forms of structured queries as directed graphs, annotates the graph edges with template labels using an extensible template mechanism, and presents different graph traversal strategies for efficiently exploring these graphs and composing textual query descriptions.
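A toy version of that edge-template mechanism: store a text fragment on each edge label and compose a description by walking the graph depth-first. The graph, labels, and templates below are invented examples, not the paper's actual templates.

```python
def verbalize(node, edges, labels, templates):
    """edges: {node: [(child, edge_label), ...]}; labels: node -> text."""
    clauses = [templates[lab].format(verbalize(child, edges, labels, templates))
               for child, lab in edges.get(node, [])]
    return labels[node] + (" " + ", ".join(clauses) if clauses else "")

edges = {"q": [("city", "from"), ("pop", "where")]}
labels = {"q": "find the name", "city": "each city",
          "pop": "population above one million"}
templates = {"from": "of {}", "where": "with {}"}
print(verbalize("q", edges, labels, templates))
# -> find the name of each city, with population above one million
```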

Incorporating Copying Mechanism in Sequence-to-Sequence Learning

This paper incorporates copying into neural network-based Seq2Seq learning and proposes CopyNet, an encoder-decoder model that integrates the regular way of generating words in the decoder with a new copying mechanism that can choose sub-sequences of the input sequence and place them at proper positions in the output sequence.
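The heart of the copying idea fits in a few lines: mix a vocabulary distribution with a distribution over source positions, scattering the copy mass onto the copied tokens' vocabulary ids. The fixed gate p_copy below is a simplification; CopyNet itself learns when and what to copy.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def copy_mix(gen_logits, copy_scores, src_token_ids, p_copy):
    p_gen = softmax(gen_logits)             # distribution over the vocabulary
    p_cp = softmax(copy_scores)             # distribution over source positions
    out = (1.0 - p_copy) * p_gen
    for pos, tok in enumerate(src_token_ids):
        out[tok] += p_copy * p_cp[pos]      # scatter copy mass onto vocab ids
    return out

dist = copy_mix(gen_logits=np.zeros(10),
                copy_scores=np.array([2.0, 0.1, 0.1]),
                src_token_ids=[7, 3, 3], p_copy=0.4)
print(dist.sum())  # 1.0: still a valid distribution
```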

Inductive Representation Learning on Large Graphs

GraphSAGE is presented, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data and outperforms strong baselines on three inductive node-classification benchmarks.
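GraphSAGE's inductive step, roughly: sample a fixed number of neighbours, aggregate their features, combine with the node's own features, and l2-normalise. The mean aggregator is one of the paper's variants; the sample size and weights below are toy choices.

```python
import random
import numpy as np

def sage_layer(node, feats, neighbours, W, num_samples=2, rng=random.Random(0)):
    """One mean-aggregator layer for a single node (sampling with replacement)."""
    nbrs = neighbours[node] or [node]
    sampled = [rng.choice(nbrs) for _ in range(num_samples)]
    neigh_mean = np.mean([feats[v] for v in sampled], axis=0)
    h = np.tanh(np.concatenate([feats[node], neigh_mean]) @ W)
    return h / (np.linalg.norm(h) + 1e-8)   # normalise the embedding

feats = {0: np.ones(4), 1: np.zeros(4), 2: np.full(4, 0.5)}
neighbours = {0: [1, 2], 1: [0], 2: [0]}
W = np.random.default_rng(1).normal(size=(8, 4))
print(sage_layer(0, feats, neighbours, W))  # works for nodes unseen in training
```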

Summarizing Source Code using a Neural Attention Model

This paper presents the first completely data-driven approach for generating high-level summaries of source code, which uses Long Short-Term Memory (LSTM) networks with attention to produce sentences that describe C# code snippets and SQL queries.

Word Mover’s Embedding: From Word2Vec to Document Embedding

The Word Mover’s Embedding (WME) is proposed, a novel approach to building an unsupervised document (sentence) embedding from pre-trained word embeddings that consistently matches or outperforms state-of-the-art techniques, with significantly higher accuracy on short texts.
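Very loosely, WME represents a document by kernel values against a set of random reference documents. The sketch below substitutes a relaxed nearest-neighbour distance for the exact optimal-transport Word Mover's Distance; gamma, the number of reference documents, and the relaxation itself are all illustrative assumptions.

```python
import numpy as np

def relaxed_wmd(doc_a, doc_b):
    """Relaxation: each word moves to its nearest counterpart in the other doc."""
    d = np.linalg.norm(doc_a[:, None, :] - doc_b[None, :, :], axis=-1)
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())

def wme_embedding(doc, random_docs, gamma=1.0):
    return np.array([np.exp(-gamma * relaxed_wmd(doc, r)) for r in random_docs])

rng = np.random.default_rng(0)
doc = rng.normal(size=(5, 50))   # 5 word vectors of dimension 50
random_docs = [rng.normal(size=(rng.integers(2, 6), 50)) for _ in range(8)]
print(wme_embedding(doc, random_docs).shape)  # (8,) document feature vector
```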

Sorry, I don't speak SPARQL: translating SPARQL queries into natural language

SPARQL2NL, a generic approach that allows verbalizing SPARQL queries, i.e., converting them into natural language, is presented; it can be integrated into applications where lay users are required to understand SPARQL or to generate SPARQL queries in a direct or an indirect manner.