Corpus ID: 4736612

It was the training data pruning too!

Pramod Kaushik Mudrakarta, Ankur Taly, Mukund Sundararajan, Kedar Dhamdhere

We study the current best model (KDG) for question answering on tabular data, evaluated on the WikiTableQuestions dataset. Previous ablation studies attributed the model's performance to certain aspects of its architecture. In this paper, we find that the model's performance also crucially depends on a certain pruning of its training data. Disabling the pruning step drops the accuracy of the model from 43.3% to 36.3%. The large impact on the…

Knowledge-Aware Conversational Semantic Parsing Over Web Tables

A knowledge-aware semantic parser, built on a decomposable model, that improves parsing performance by integrating various types of knowledge, including grammar knowledge, expert knowledge, and external resource knowledge.

Did the Model Understand the Question?

An analysis of state-of-the-art deep learning models for question answering on images, tables, and passages of text finds that these networks often ignore important question terms, and demonstrates that attributions can augment standard measures of accuracy and support investigation of model performance.

Uniform-in-Phase-Space Data Selection with Iterative Normalizing Flows

The proposed framework is demonstrated as a viable pathway to enable data-efficient machine learning when abundant data is available, and naturally extends to high-dimensional datasets.

Learning to Generalize from Sparse and Underspecified Rewards

This work proposes Meta Reward Learning (MeRL), which constructs an auxiliary reward function that provides more refined feedback for learning; it outperforms an alternative reward-learning technique based on Bayesian optimization and achieves state-of-the-art results on weakly supervised semantic parsing.

Memory Augmented Policy Optimization for Program Synthesis and Semantic Parsing

Memory Augmented Policy Optimization is presented: a simple and novel way to leverage a memory buffer of promising trajectories to reduce the variance of policy gradient estimates, improving the sample efficiency and robustness of policy gradient methods, especially on tasks with sparse rewards.

Compositional Semantic Parsing on Semi-Structured Tables

This paper proposes a logical-form-driven parsing algorithm guided by strong typing constraints, shows that it obtains significant improvements over natural baselines, and makes the dataset publicly available.

Learning a Natural Language Interface with Neural Programmer

This paper presents the first weakly supervised, end-to-end neural network model to induce such programs on a real-world dataset; it enhances the objective function of Neural Programmer, a neural network with built-in discrete operations, and applies it to WikiTableQuestions, a natural language question-answering dataset.

Inferring Logical Forms From Denotations

This paper generates fictitious worlds and uses crowdsourced denotations on these worlds to filter out spurious logical forms, and shows how to use dynamic programming to efficiently represent the complete set of consistent logical forms.

Machine learning as an experimental science

Machine learning is a scientific discipline and, like the fields of AI and computer science, has both theoretical and empirical aspects, making it more akin to physics and chemistry than astronomy or sociology.

Lambda Dependency-Based Compositional Semantics

This short note presents a new formal language, lambda dependency-based compositional semantics (lambda DCS), for representing logical forms in semantic parsing. By eliminating variables and making existential quantification implicit, lambda DCS logical forms are generally more compact than those in lambda calculus.

Neural Semantic Parsing with Type Constraints for Semi-Structured Tables

A new semantic parsing model for answering compositional questions on semi-structured Wikipedia tables achieves state-of-the-art accuracy, showing that type constraints and entity linking are valuable components to incorporate in neural semantic parsers.

Neural Multi-step Reasoning for Question Answering on Semi-structured Tables

This work explores neural network models for answering multi-step reasoning questions over semi-structured tables; it generates human-readable logical forms from natural language questions, which are then ranked using word- and character-level convolutional neural networks.
