Neural Semantic Parsing with Type Constraints for Semi-Structured Tables

@inproceedings{Krishnamurthy2017NeuralSP,
  title={Neural Semantic Parsing with Type Constraints for Semi-Structured Tables},
  author={Jayant Krishnamurthy and Pradeep Dasigi and Matt Gardner},
  booktitle={EMNLP},
  year={2017}
}
We present a new semantic parsing model for answering compositional questions on semi-structured Wikipedia tables. […] We also introduce a novel method for training our neural model with question-answer supervision. On the WIKITABLEQUESTIONS dataset, our parser achieves a state-of-the-art accuracy of 43.3% for a single model and 45.9% for a 5-model ensemble, improving on the best prior score of 38.7% set by a 15-model ensemble. These results suggest that type constraints and entity linking are…
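The core idea described here, decoding under a typed grammar so that only well-typed logical forms can ever be produced, can be sketched in a few lines of Python. Everything below (the toy grammar, the action names, the scoring function) is an illustrative assumption, not the paper's actual implementation:

# Minimal sketch of type-constrained decoding: the decoder keeps a stack of
# expected types and may only choose among grammar actions whose return type
# matches the type on top of the stack, so every completed action sequence
# is a well-typed logical form.

GRAMMAR = {
    # expected type -> [(action name, argument types the action requires)]
    "Number": [("count", ["Set"]), ("max", ["Set"]), ("literal_number", [])],
    "Set":    [("filter_eq", ["Set", "Cell"]), ("all_rows", []), ("column", [])],
    "Cell":   [("entity", [])],   # entity-linking candidates would plug in here
}

def decode(score_fn, root_type="Number", max_steps=50):
    """Greedy type-constrained decoding over the toy grammar above."""
    stack = [root_type]        # types still waiting to be generated
    program = []
    for _ in range(max_steps):
        if not stack:          # every expected type satisfied: parse complete
            break
        expected = stack.pop()
        candidates = GRAMMAR[expected]              # only type-correct actions
        scores = [score_fn(program, name) for name, _ in candidates]
        name, arg_types = candidates[scores.index(max(scores))]
        program.append(name)
        stack.extend(reversed(arg_types))           # expand arguments in order
    return program

# A real model would score actions with a neural decoder attending over the
# question; this stand-in just prefers shorter action names.
print(decode(lambda prog, action: -len(action)))    # -> ['max', 'column']

Restricting the search space this way means probability mass is spent only on well-typed candidate programs, which matters when training from question-answer pairs rather than annotated logical forms.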

Citations

Learning an Executable Neural Semantic Parser
TLDR
A neural semantic parser that maps natural language utterances onto logical forms that can be executed against a task-specific environment, such as a knowledge base or a database, to produce a response.
Grammar-Constrained Neural Semantic Parsing with LR Parsers
TLDR
This work implements an attentional SEQ2SEQ model that uses an LR parser to maintain syntactically valid sequences throughout the decoding procedure and integrates seamlessly with current SEQ2SEQ frameworks.
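As a rough illustration of that constraint, the sketch below masks the logits of any token the grammar would reject to negative infinity before the argmax, so only syntactically valid continuations can be emitted. The toy vocabulary and the valid_next_tokens oracle are assumptions standing in for a real LR parser's shift/reduce action set:

import math

VOCAB = ["SELECT", "col", "WHERE", "=", "value", "<EOS>"]

def valid_next_tokens(prefix):
    """Toy stand-in for an LR parser: returns the tokens that keep the
    prefix inside a tiny 'SELECT col [WHERE col = value]' grammar."""
    if not prefix:
        return {"SELECT"}
    last = prefix[-1]
    if last == "SELECT":
        return {"col"}
    if last == "col":
        return {"="} if "WHERE" in prefix else {"WHERE", "<EOS>"}
    if last == "WHERE":
        return {"col"}
    if last == "=":
        return {"value"}
    if last == "value":
        return {"<EOS>"}
    return set()

def constrained_greedy_decode(logits_fn, max_len=10):
    out = []
    while len(out) < max_len:
        logits = logits_fn(out)                    # one score per VOCAB entry
        valid = valid_next_tokens(out)
        masked = [s if t in valid else -math.inf   # invalid -> probability zero
                  for t, s in zip(VOCAB, logits)]
        tok = VOCAB[masked.index(max(masked))]
        out.append(tok)
        if tok == "<EOS>":
            break
    return out

# With uniform logits, the mask alone steers decoding to a grammatical query.
print(constrained_greedy_decode(lambda prefix: [0.0] * len(VOCAB)))

The same masking drops into beam search unchanged: each hypothesis simply carries its own parser state.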
Question Generation from SQL Queries Improves Neural Semantic Parsing
TLDR
This work conducts a study on WikiSQL, the largest hand-annotated semantic parsing dataset to date, and demonstrates that question generation is an effective method for learning a state-of-the-art neural-network-based semantic parser with only thirty percent of the supervised training data.
Reranking for Neural Semantic Parsing
TLDR
This paper presents a simple approach to quickly iterate and improve the performance of an existing neural semantic parser by reranking an n-best list of predicted MRs, using features that are designed to fix observed problems with baseline models.
A Hybrid Semantic Parsing Approach for Tabular Data Analysis
This paper presents a novel approach to translating natural language questions to SQL queries for given tables, which meets three requirements of a real-world data analysis application: cross-domain, …
Learning Semantic Parsers from Denotations with Latent Structured Alignments and Abstract Programs
TLDR
This work capitalizes on the intuition that correct programs would likely respect certain structural constraints were they aligned to the question, and proposes to model alignments as structured latent variables within a latent-alignment framework.
TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data
TLDR
TaBERT is a pretrained LM that jointly learns representations for NL sentences and (semi-)structured tables that achieves new best results on the challenging weakly-supervised semantic parsing benchmark WikiTableQuestions, while performing competitively on the text-to-SQL dataset Spider.
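The sketch below illustrates, under simplified assumptions, the kind of joint input linearization such an encoder consumes: the utterance concatenated with a per-cell "column | type | value" rendering of a table row. The separator tokens and the linearize function are simplified placeholders, not TaBERT's exact pretraining format:

# Hedged sketch of joint utterance-plus-table linearization, loosely after
# TaBERT's per-cell "column | type | value" encoding; the separators and
# function name here are simplifying assumptions.
def linearize(utterance, header, row):
    cells = [f"{col} | {typ} | {val}" for (col, typ), val in zip(header, row)]
    return "[CLS] " + utterance + " [SEP] " + " [SEP] ".join(cells)

header = [("Year", "number"), ("City", "text")]
print(linearize("which city hosted in 2008?", header, ["2008", "Beijing"]))
# -> [CLS] which city hosted in 2008? [SEP] Year | number | 2008 [SEP] City | text | Beijing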
Learning to Map Frequent Phrases to Sub-Structures of Meaning Representation for Neural Semantic Parsing
TLDR
This paper proposes that the vocabulary-mismatch problem can be effectively resolved by leveraging appropriate logical tokens, and exploits macro actions, which are of the same granularity as words/phrases and allow the model to learn mappings from frequent phrases to corresponding sub-structures of the meaning representation.
Training Naturalized Semantic Parsers with Very Little Data
TLDR
This work introduces an automated methodology that delivers very significant additional improvements by utilizing modest amounts of unannotated data, which is typically easy to obtain, and achieves new SOTA few-shot performance on the Overnight dataset, particularly in very low-resource settings.
Iterative Utterance Segmentation for Neural Semantic Parsing
TLDR
A novel framework for boosting neural semantic parsers via iterative utterance segmentation that iterates between two neural modules: a segmenter for segmenting a span from the utterance, and a parser for mapping the span into a partial meaning representation.
...

References

Showing 1-10 of 35 references
Compositional Semantic Parsing on Semi-Structured Tables
TLDR
This paper proposes a logical-form driven parsing algorithm guided by strong typing constraints, shows that it obtains significant improvements over natural baselines, and makes the dataset publicly available.
Data Recombination for Neural Semantic Parsing
TLDR
Data recombination improves the accuracy of the RNN model on three semantic parsing datasets, leading to new state-of-the-art performance on the standard GeoQuery dataset for models with comparable supervision.
Learning Dependency-Based Compositional Semantics
TLDR
A new semantic formalism, dependency-based compositional semantics (DCS), is developed, a log-linear distribution over DCS logical forms is defined, and the system is shown to obtain accuracies comparable even to state-of-the-art systems that require annotated logical forms.
Semantic Parsing on Freebase from Question-Answer Pairs
TLDR
This paper trains a semantic parser that scales up to Freebase and outperforms the state-of-the-art parser of Cai and Yates (2013) on their dataset, despite not having annotated logical forms.
Weakly Supervised Training of Semantic Parsers
TLDR
This work presents a method for training a semantic parser using only a knowledge base and an unlabeled text corpus, without any individually annotated sentences, and demonstrates recovery of this richer structure by extracting logical forms from natural language queries against Freebase.
Semantic Parsing via Staged Query Graph Generation: Question Answering with Knowledge Base
TLDR
This work proposes a novel semantic parsing framework for question answering using a knowledge base that leverages the knowledge base in an early stage to prune the search space and thus simplifies the semantic matching problem.
Scaling Semantic Parsers with On-the-Fly Ontology Matching
TLDR
A new semantic parsing approach that learns to resolve ontological mismatches: it is trained from question-answer pairs, uses a probabilistic CCG to build linguistically motivated logical-form meaning representations, and includes an ontology matching model that adapts the output logical forms to each target ontology.
Learning a Natural Language Interface with Neural Programmer
TLDR
This paper presents the first weakly supervised, end-to-end neural network model to induce such programs on a real-world dataset, and enhances the objective function of Neural Programmer, a neural network with built-in discrete operations, and applies it on WikiTableQuestions, a natural language question-answering dataset.
Neural Multi-step Reasoning for Question Answering on Semi-structured Tables
TLDR
This work explores neural network models for answering multi-step reasoning questions that operate on semi-structured tables, and generates human readable logical forms from natural language questions, which are then ranked based on word and character convolutional neural networks.
Language to Logical Form with Neural Attention
TLDR
This paper presents a general method based on an attention-enhanced encoder-decoder model that encodes input utterances into vector representations and generates their logical forms by conditioning the output sequences or trees on the encoding vectors.
...