Neural Module Networks for Reasoning over Text
This work extends neural module networks with modules that reason over a paragraph of text, performing symbolic reasoning over numbers and dates in a probabilistic and differentiable manner, and proposes an unsupervised auxiliary loss to help extract the arguments associated with events in the text.
Reasoning Over Paragraph Effects in Situations
This work presents ROPES, a challenging reading-comprehension benchmark targeting Reasoning Over Paragraph Effects in Situations; it focuses on expository language describing causes and effects, as these have clear implications for new situations.
Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers
It is shown that large models are more robust than small models to compression techniques such as quantization and pruning, so one can get the best of both worlds: heavily compressed large models achieve higher accuracy than lightly compressed small models.
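As a concrete illustration of one such compression technique, here is a minimal sketch of unstructured magnitude pruning (an illustrative stand-in, not the paper's exact setup; per-layer pruning schedules and post-pruning fine-tuning are omitted):

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction `sparsity` of the weights.

    Minimal sketch of unstructured magnitude pruning; ties at the
    threshold may prune slightly more than the requested fraction.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

w = np.array([[0.1, -0.5], [0.9, -0.05]])
pruned = magnitude_prune(w, 0.5)  # zeros the two smallest-magnitude entries
```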
QuaRTz: An Open-Domain Dataset of Qualitative Relationship Questions
This work introduces QuaRTz, the first open-domain dataset for reasoning about textual qualitative relationships, and finds that state-of-the-art results are substantially (20%) below human performance, presenting an open challenge to the NLP community.
Evaluating NLP Models via Contrast Sets
A new annotation paradigm for NLP is proposed that helps close systematic gaps in test data: after a dataset is constructed, its authors should manually perturb the test instances in small but meaningful ways that change the gold label, creating contrast sets.
Constructing Taxonomies from Pretrained Language Models
A method is presented for constructing taxonomic trees (e.g., WordNet) with pretrained language models, using one module that predicts parenthood relations and another that reconciles those pairwise predictions into trees.
Grammar-based Neural Text-to-SQL Generation
The sequence-to-sequence paradigm employed by neural text-to-SQL models typically performs token-level decoding and does not consider generating SQL hierarchically from a grammar. Grammar-based…
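To illustrate the contrast, here is a toy sketch of grammar-based generation (the grammar and its productions are invented for illustration, not taken from the paper): expanding productions of a small SQL grammar guarantees every output is syntactically valid, whereas free token-level decoding offers no such guarantee.

```python
import random

# Toy SQL grammar: nonterminals map to lists of productions;
# anything not in the table is a terminal token.
GRAMMAR = {
    "query":  [["SELECT", "column", "FROM", "table", "where"]],
    "where":  [[], ["WHERE", "column", "=", "value"]],
    "column": [["name"], ["age"]],
    "table":  [["users"]],
    "value":  [["42"]],
}

def generate(symbol: str, rng: random.Random) -> list[str]:
    """Recursively expand a nonterminal by choosing a production.

    A neural grammar-based decoder would score the candidate
    productions instead of sampling uniformly, but the structural
    guarantee is the same: the output always parses.
    """
    if symbol not in GRAMMAR:
        return [symbol]  # terminal token
    production = rng.choice(GRAMMAR[symbol])
    return [tok for sym in production for tok in generate(sym, rng)]

sql = " ".join(generate("query", random.Random(0)))
```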
DeepBase: Deep Inspection of Neural Networks
- Thibault Sellam, Kevin Lin, I. Huang, Michelle Yang, Carl Vondrick, Eugene Wu
- Computer Science · SIGMOD Conference
- 13 August 2018
DeepBase is described, a system that inspects neural network behaviors through a unified interface: users provide hypothesis functions that annotate the data with high-level labels, and the system quickly identifies individual units, or groups of units, whose activations have strong statistical dependencies with the desired hypotheses.
“I Like the Way You Think!” Inspecting the Internal Logic of Recurrent Neural Networks
Inducing Taxonomic Knowledge from Pretrained Transformers
A method is presented for inducing taxonomic trees from pretrained transformers by assigning a score to the likelihood that each pair of terms forms a parent-child relation and producing the maximum spanning tree.
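A minimal sketch of the tree-building step, assuming pairwise parent-child scores are already available (a hand-written lookup table stands in for the transformer's scores, and Prim-style greedy attachment stands in for exact maximum-spanning-tree decoding):

```python
import heapq

def max_spanning_tree(terms, score):
    """Greedily build a tree over `terms`, rooted at terms[0].

    Each step attaches the highest-scoring (parent, child) edge from an
    attached node to an unattached one, using a max-heap via negated
    scores. A simple stand-in for exact MST algorithms.
    """
    root, attached, edges = terms[0], {terms[0]}, []
    heap = [(-score(root, t), root, t) for t in terms[1:]]
    heapq.heapify(heap)
    while len(attached) < len(terms):
        neg, parent, child = heapq.heappop(heap)
        if child in attached:
            continue  # stale edge: child was attached via a better parent
        attached.add(child)
        edges.append((parent, child))
        for t in terms:
            if t not in attached:
                heapq.heappush(heap, (-score(child, t), child, t))
    return edges

# Hypothetical pair scores, standing in for transformer outputs.
pair_scores = {
    ("animal", "dog"): 0.9, ("animal", "cat"): 0.8,
    ("dog", "cat"): 0.2, ("dog", "puppy"): 0.95,
    ("animal", "puppy"): 0.3, ("cat", "puppy"): 0.1,
}
score = lambda p, c: pair_scores.get((p, c), 0.0)
tree = max_spanning_tree(["animal", "dog", "cat", "puppy"], score)
# → [('animal', 'dog'), ('dog', 'puppy'), ('animal', 'cat')]
```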