Publications
Representing Schema Structure with Graph Neural Networks for Text-to-SQL Parsing
This paper presents an encoder-decoder semantic parser, where the structure of the DB schema is encoded with a graph neural network, and this representation is later used at both encoding and decoding time.
Global Reasoning over Database Structures for Text-to-SQL Parsing
This work uses message-passing through a graph neural network to softly select a subset of database constants for the output query, conditioned on the question, and trains a model to rank queries based on the global alignment of database constants to question words.
Evaluating Models’ Local Decision Boundaries via Contrast Sets
A more rigorous annotation paradigm for NLP that helps to close systematic gaps in the test data, recommending that dataset authors manually perturb the test instances in small but meaningful ways that (typically) change the gold label, creating contrast sets.
Towards an argumentative content search engine using weak supervision
This work defines weak signals for training DNNs to obtain significantly greater performance, adapts the system to a recent argument mining task of identifying argumentative sentences in Web texts retrieved from heterogeneous sources, and obtains F1 scores comparable to the supervised baseline.
Language Generation with Recurrent Generative Adversarial Networks without Pre-training
It is shown that recurrent neural networks can be trained to generate text with GANs from scratch by slowly teaching the model to generate sequences of increasing and variable length, which vastly improves the quality of generated sequences compared to a convolutional baseline.
Evaluating NLP Models via Contrast Sets
A new annotation paradigm for NLP is proposed that helps to close systematic gaps in the test data; it is recommended that after a dataset is constructed, the dataset authors manually perturb the test instances in small but meaningful ways that change the gold label, creating contrast sets.
Grammar-based Neural Text-to-SQL Generation
The sequence-to-sequence paradigm employed by neural text-to-SQL models typically performs token-level decoding and does not consider generating SQL hierarchically from a grammar. Grammar-based …
Emergence of Communication in an Interactive World with Consistent Speakers
A new model and training algorithm are proposed that utilize the structure of a learned representation space to produce more consistent speakers at the initial phases of training, which stabilizes learning and increases context-independence compared to policy gradient and other competitive baselines.
Obtaining Faithful Interpretations from Compositional Neural Networks
It is found that the intermediate outputs of NMNs differ from the expected output, illustrating that the network structure does not provide a faithful explanation of model behaviour, and particular choices for module architecture are proposed that yield much better faithfulness, at a minimal cost to accuracy.
An autonomous debating system.
This work presents Project Debater, an autonomous debating system that can engage in a competitive debate with humans, and highlights the fundamental differences between debating with humans as opposed to challenging humans in game competitions.