Neural Semantic Parsing with Type Constraints for Semi-Structured Tables

Jayant Krishnamurthy, Pradeep Dasigi, and Matt Gardner

We present a new semantic parsing model for answering compositional questions on semi-structured Wikipedia tables. We also introduce a novel method for training our neural model with question-answer supervision. On the WIKITABLEQUESTIONS data set, our parser achieves a state-of-the-art accuracy of 43.3% for a single model and 45.9% for a 5-model ensemble, improving on the best prior score of 38.7% set by a 15-model ensemble. These results suggest that type constraints and entity linking are…
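The type-constraint idea in the abstract can be illustrated with a minimal sketch: at each decoding step, the parser may only generate actions whose return type matches the type the partial logical form currently expects. The action inventory below is a hypothetical toy example, not the paper's actual grammar.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    return_type: str     # type this action produces
    arg_types: tuple     # types of the arguments it requires

# Hypothetical action inventory for a toy table-QA language.
ACTIONS = [
    Action("count",    "int",  ("rows",)),
    Action("max",      "int",  ("ints",)),
    Action("filter",   "rows", ("rows", "pred")),
    Action("all_rows", "rows", ()),
]

def valid_actions(expected_type, actions=ACTIONS):
    """Type constraint: only actions whose return type matches the
    currently expected type may be generated at this decoding step."""
    return [a.name for a in actions if a.return_type == expected_type]
```

In a neural decoder, this set would be used to mask the output distribution so that probability mass falls only on well-typed continuations.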


Learning an Executable Neural Semantic Parser

A neural semantic parser that maps natural language utterances onto logical forms that can be executed against a task-specific environment, such as a knowledge base or a database, to produce a response.

Grammar-Constrained Neural Semantic Parsing with LR Parsers

This work implements an attentional SEQ2SEQ model that uses an LR parser to maintain syntactically valid sequences throughout the decoding procedure and integrates seamlessly with current SEQ2SEQ frameworks.
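The valid-prefix property that an LR parser enforces can be sketched with a deliberately crude stand-in (the toy parenthesis vocabulary is an assumption, not the paper's grammar): at each step, mask any token that would make the partial output syntactically unrecoverable.

```python
def legal_next_tokens(prefix, vocab):
    """Return the vocabulary items that keep `prefix` a valid prefix of
    some balanced-parenthesis sequence -- a toy stand-in for the LR
    parser's valid-prefix check used to constrain SEQ2SEQ decoding."""
    depth = prefix.count("(") - prefix.count(")")
    # A closing paren is only legal if something is open to close.
    return [t for t in vocab if not (t == ")" and depth == 0)]
```

A real system would consult the LR parser's state after the current prefix and allow exactly the tokens for which a shift action exists.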

Question Generation from SQL Queries Improves Neural Semantic Parsing

This work conducts a study on WikiSQL, the largest hand-annotated semantic parsing dataset to date, and demonstrates that question generation is an effective method for learning a state-of-the-art neural-network-based semantic parser with thirty percent of the supervised training data.

Reranking for Neural Semantic Parsing

This paper presents a simple approach to quickly iterate and improve the performance of an existing neural semantic parser by reranking an n-best list of predicted MRs, using features that are designed to fix observed problems with baseline models.

A Hybrid Semantic Parsing Approach for Tabular Data Analysis

This paper presents a novel approach to translating natural language questions to SQL queries for given tables, which meets three requirements as a real-world data analysis application: cross-domain, …

Learning Semantic Parsers from Denotations with Latent Structured Alignments and Abstract Programs

This work capitalizes on the intuition that correct programs would likely respect certain structural constraints were they aligned to the question, and proposes to model alignments as structured latent variables within a latent-alignment framework.

Training Naturalized Semantic Parsers with Very Little Data

This paper introduces an automated methodology that delivers significant additional improvements by utilizing modest amounts of unannotated data, which is typically easy to obtain, and shows new SOTA few-shot performance on the Overnight dataset, particularly in very low-resource settings.

TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data

TaBERT is a pretrained LM that jointly learns representations for NL sentences and (semi-)structured tables that achieves new best results on the challenging weakly-supervised semantic parsing benchmark WikiTableQuestions, while performing competitively on the text-to-SQL dataset Spider.

Learning to Map Frequent Phrases to Sub-Structures of Meaning Representation for Neural Semantic Parsing

This paper proposes that the vocabulary-mismatch problem can be effectively resolved by leveraging appropriate logical tokens, and exploits macro actions, which are of the same granularity as words/phrases and allow the model to learn mappings from frequent phrases to corresponding sub-structures of the meaning representation.

Compositional pre-training for neural semantic parsing

The proposed two-stage augmentation framework improves parsing accuracy on GeoQuery, a standard dataset for the task of generating logical forms from questions about US geography.

Compositional Semantic Parsing on Semi-Structured Tables

This paper proposes a logical-form-driven parsing algorithm guided by strong typing constraints, shows that it obtains significant improvements over natural baselines, and is made publicly available.

Data Recombination for Neural Semantic Parsing

Data recombination improves the accuracy of the RNN model on three semantic parsing datasets, leading to new state-of-the-art performance on the standard GeoQuery dataset for models with comparable supervision.

Learning Dependency-Based Compositional Semantics

A new semantic formalism, dependency-based compositional semantics (DCS), is developed; a log-linear distribution over DCS logical forms is defined; and the system is shown to obtain accuracies comparable to state-of-the-art systems that do require annotated logical forms.

Semantic Parsing on Freebase from Question-Answer Pairs

This paper trains a semantic parser that scales up to Freebase and outperforms the state-of-the-art parser of Cai and Yates (2013) on their dataset, despite not having annotated logical forms.

Weakly Supervised Training of Semantic Parsers

This work presents a method for training a semantic parser using only a knowledge base and an unlabeled text corpus, without any individually annotated sentences, and demonstrates recovery of this richer structure by extracting logical forms from natural language queries against Freebase.

Semantic Parsing via Staged Query Graph Generation: Question Answering with Knowledge Base

This work proposes a novel semantic parsing framework for question answering using a knowledge base that leverages the knowledge base in an early stage to prune the search space and thus simplifies the semantic matching problem.

Scaling Semantic Parsers with On-the-Fly Ontology Matching

A new semantic parsing approach that learns to resolve ontological mismatches, which is learned from question-answer pairs, uses a probabilistic CCG to build linguistically motivated logical-form meaning representations, and includes an ontology matching model that adapts the output logical forms for each target ontology.

Learning a Natural Language Interface with Neural Programmer

This paper presents the first weakly supervised, end-to-end neural network model to induce such programs on a real-world dataset; it enhances the objective function of Neural Programmer, a neural network with built-in discrete operations, and applies it to WikiTableQuestions, a natural-language question-answering dataset.

Neural Multi-step Reasoning for Question Answering on Semi-structured Tables

This work explores neural network models for answering multi-step reasoning questions that operate on semi-structured tables, and generates human readable logical forms from natural language questions, which are then ranked based on word and character convolutional neural networks.

Language to Logical Form with Neural Attention

This paper presents a general method based on an attention-enhanced encoder-decoder model that encodes input utterances into vector representations and generates their logical forms by conditioning the output sequences or trees on the encoding vectors.
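The attention step in such encoder-decoder models can be sketched as plain dot-product attention over the encoder states (the dimensions and inputs below are illustrative assumptions, not the paper's exact parameterization):

```python
import numpy as np

def attend(query, keys, values):
    """Score each encoder state (keys) against the decoder state (query),
    softmax the scores, and return the weighted sum of encoder values."""
    scores = keys @ query                     # one score per encoder position
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights @ values                   # context vector for this step
```

The resulting context vector is concatenated with (or conditioned on by) the decoder state when predicting the next logical-form token.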