Publications
Polylingual Topic Models
TLDR: We introduce a polylingual topic model that discovers topics aligned across multiple languages and demonstrate its usefulness in supporting machine translation and tracking topic trends across languages.
Citations: 338 · Highly influential: 44
Hypothesis Only Baselines in Natural Language Inference
TLDR: We propose a hypothesis-only baseline for diagnosing Natural Language Inference (NLI). (A toy sketch of the idea follows this entry.)
Citations: 217 · Highly influential: 35
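The hypothesis-only diagnostic above can be illustrated with a deliberately simple sketch: train a classifier on hypotheses alone and compare against the majority-class rate. The toy data, bag-of-words features, and logistic regression below are my own illustrative choices, not the paper's corpora or models.

```python
# Minimal sketch of a hypothesis-only NLI baseline (illustrative data, not the paper's corpora).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# (hypothesis, label) pairs; the premises are deliberately ignored.
hypotheses = [
    "A man is sleeping.",
    "A person is outdoors.",
    "Nobody is wearing a hat.",
    "Someone is playing a sport.",
]
labels = ["contradiction", "entailment", "contradiction", "neutral"]

baseline = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),     # unigram/bigram bag of words over hypotheses only
    LogisticRegression(max_iter=1000),
)
baseline.fit(hypotheses, labels)

# If this beats the majority-class baseline on held-out data, the labels are
# partly predictable from the hypothesis alone, i.e. an annotation artifact.
print(baseline.predict(["A man is sleeping outdoors."]))
```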
Gender Bias in Coreference Resolution
TLDR: We present a novel, Winograd schema-style set of minimal-pair sentences that differ only by pronoun gender. (A toy sketch of the construction follows this entry.)
Citations: 148 · Highly influential: 24
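To make the minimal-pair construction above concrete, here is a small sketch; the template, fillers, and evaluation hook are illustrative placeholders rather than the paper's released evaluation set.

```python
# Sketch of Winograd schema-style minimal pairs differing only in pronoun gender.
# Template and fillers are illustrative, not the paper's released data.
TEMPLATE = "The {occupation} called the {participant} because {pronoun} needed more information."
PRONOUNS = {"female": "she", "male": "he"}

def minimal_pair(occupation, participant):
    """Two sentences identical except for the gender of the pronoun."""
    return {
        gender: TEMPLATE.format(occupation=occupation, participant=participant, pronoun=p)
        for gender, p in PRONOUNS.items()
    }

def shows_gender_effect(resolve_antecedent, pair):
    """`resolve_antecedent` is any coreference system mapping sentence -> antecedent string.
    A system is gender-sensitive on this pair if swapping the pronoun changes its answer."""
    answers = {gender: resolve_antecedent(sentence) for gender, sentence in pair.items()}
    return answers["female"] != answers["male"], answers

pair = minimal_pair("doctor", "secretary")
print(pair["female"])
print(pair["male"])
```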
Programming with a Differentiable Forth Interpreter
TLDR: We present an end-to-end differentiable interpreter for the programming language Forth, which enables programmers to write program sketches with slots that can be filled with behaviour trained from program input-output data. (A toy sketch of the trainable-slot idea follows this entry.)
Citations: 103 · Highly influential: 9
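The paper's machinery is a full differentiable Forth machine, but the core "sketch with trainable slots" idea can be shown in miniature: a slot is a softmax over candidate operations, execution blends their outputs, and the slot is fitted from input-output examples by gradient descent. The candidate operations and training data below are illustrative, not the paper's.

```python
# Toy illustration of a trainable slot in a differentiable program sketch
# (candidate operations and data are illustrative; the paper's machine is a full Forth interpreter).
import torch

candidates = [lambda a, b: a + b, lambda a, b: a - b, lambda a, b: a * b]
slot_logits = torch.zeros(len(candidates), requires_grad=True)   # the learnable slot

def run_sketch(a, b):
    """Execute the sketch: a soft mixture of the candidate operations."""
    weights = torch.softmax(slot_logits, dim=0)
    return sum(w * op(a, b) for w, op in zip(weights, candidates))

# Fit the slot from input-output examples that are consistent only with addition.
optimizer = torch.optim.Adam([slot_logits], lr=0.1)
examples = [(2.0, 3.0, 5.0), (4.0, 1.0, 5.0), (1.0, 6.0, 7.0)]
for _ in range(300):
    loss = sum((run_sketch(torch.tensor(a), torch.tensor(b)) - y) ** 2 for a, b, y in examples)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(torch.softmax(slot_logits, dim=0))   # mass should shift toward the '+' candidate
```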
A Discriminative Model for Joint Morphological Disambiguation and Dependency Parsing
TLDR: We propose a discriminative model that jointly infers morphological properties and syntactic structures of a sentence, given its words as input.
Citations: 36 · Highly influential: 7
Noise reduction and targeted exploration in imitation learning for Abstract Meaning Representation parsing
TLDR: In this work we build on the transition-based parsing approach of Wang et al. (2015b) and explore the applicability of different imitation learning algorithms to AMR parsing, which has a more complex output space than those considered previously. (A generic toy sketch of the training loop follows this entry.)
Citations: 37 · Highly influential: 7
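For context on the imitation-learning setup above, the sketch below shows a generic DAgger-style training loop over a deliberately tiny toy transition system; the state class, features, and expert are placeholders and bear no resemblance to the actual AMR transition system or feature model.

```python
# Generic DAgger-style imitation learning loop over a toy transition system.
# The toy state/expert/features are placeholders; the paper's transition system builds AMR graphs.
import random
from sklearn.tree import DecisionTreeClassifier

class ToyState:
    """Labels a token sequence left to right; the 'action' is the label for the next token."""
    def __init__(self, tokens, labels=()):
        self.tokens, self.labels = tokens, list(labels)
    def is_terminal(self):
        return len(self.labels) == len(self.tokens)
    def apply(self, action):
        return ToyState(self.tokens, self.labels + [action])

def features(state):
    token = state.tokens[len(state.labels)]
    return [len(token), int(token[0].isupper())]

def expert_action(state):
    # Toy expert: capitalised tokens get label 1, everything else label 0.
    return int(state.tokens[len(state.labels)][0].isupper())

def dagger(sentences, n_iterations=5, seed=0):
    rng = random.Random(seed)
    X, y, policy = [], [], None
    for it in range(n_iterations):
        for tokens in sentences:
            state = ToyState(tokens)
            while not state.is_terminal():
                x, gold = features(state), expert_action(state)
                X.append(x)
                y.append(gold)
                # Early iterations follow the expert; later ones roll out the learned policy,
                # so training data comes from the states the parser will actually visit.
                if policy is None or rng.random() < 0.5 ** it:
                    action = gold
                else:
                    action = int(policy.predict([x])[0])
                state = state.apply(action)
        policy = DecisionTreeClassifier().fit(X, y)
    return policy

policy = dagger([["The", "cat", "sat"], ["Mary", "saw", "John"]])
print(policy.predict([[4, 1], [3, 0]]))
```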
UCL+Sheffield at SemEval-2016 Task 8: Imitation learning for AMR parsing with an alpha-bound
TLDR: We develop a novel transition-based parsing algorithm for the Abstract Meaning Representation (AMR) parsing task using exact imitation learning, in which the parser learns a statistical model by imitating the actions of an expert on the training data.
Citations: 17 · Highly influential: 4
Improving morphology induction by learning spelling rules
TLDR: We present a Bayesian model of morphology that identifies the latent underlying morphological analysis of each word (e.g. shut+ing) along with spelling rules that generate the observed surface forms.
Citations: 31 · Highly influential: 4
Meta-Learning Extractors for Music Source Separation
TLDR: We propose a hierarchical meta-learning-inspired model for music source separation (Meta-TasNet) in which a generator model is used to predict the weights of individual extractor models. (A toy hypernetwork sketch of this mechanism follows this entry.)
Citations: 12 · Highly influential: 4
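The generator-predicts-extractor-weights mechanism above can be sketched as a small hypernetwork. The dimensions and the single linear "extractor" below are illustrative placeholders, not Meta-TasNet's actual generator or masking network.

```python
# Hedged sketch of the generator-predicts-extractor-weights idea (a small hypernetwork).
# Layer sizes and the single linear "extractor" are placeholders, not Meta-TasNet itself.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightGenerator(nn.Module):
    """Maps a learned source embedding (e.g. 'drums', 'vocals') to the parameters
    of a linear extractor applied to the mixture features."""
    def __init__(self, num_sources, embed_dim, feat_dim):
        super().__init__()
        self.source_embed = nn.Embedding(num_sources, embed_dim)
        self.to_weight = nn.Linear(embed_dim, feat_dim * feat_dim)
        self.to_bias = nn.Linear(embed_dim, feat_dim)
        self.feat_dim = feat_dim

    def forward(self, source_id, mixture_feats):
        e = self.source_embed(source_id)                        # (embed_dim,)
        W = self.to_weight(e).view(self.feat_dim, self.feat_dim)
        b = self.to_bias(e)
        # The "extractor" runs with generated weights rather than its own parameters.
        return F.linear(mixture_feats, W, b)

gen = WeightGenerator(num_sources=4, embed_dim=16, feat_dim=32)
mixture = torch.randn(8, 32)                                    # 8 frames of mixture features
drums = gen(torch.tensor(0), mixture)                           # one extractor per source
vocals = gen(torch.tensor(1), mixture)
print(drums.shape, vocals.shape)
```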
Language Modeling for Morphologically Rich Languages: Character-Aware Modeling for Word-Level Prediction
TLDR: We present a novel method for injecting subword-level information into semantic word vectors, integrated into neural language model training, to facilitate word-level prediction. (A minimal sketch of the combination follows this entry.)
Citations: 26 · Highly influential: 3
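A minimal sketch of the idea above: a word-level embedding is combined with a character-level encoding of the same word before word-level prediction. The sizes, the BiLSTM character encoder, and the additive combination are illustrative choices, not the paper's exact architecture.

```python
# Hedged sketch: combining a word embedding with a character-level encoding,
# as used for word-level prediction in morphologically rich languages.
# Vocabulary handling and the combination rule are simplified placeholders.
import torch
import torch.nn as nn

class CharAwareEmbedding(nn.Module):
    def __init__(self, vocab_size, char_vocab_size, word_dim=64, char_dim=16):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, word_dim)
        self.char_embed = nn.Embedding(char_vocab_size, char_dim)
        self.char_lstm = nn.LSTM(char_dim, word_dim // 2, bidirectional=True, batch_first=True)

    def forward(self, word_ids, char_ids):
        # word_ids: (batch,), char_ids: (batch, max_word_len)
        w = self.word_embed(word_ids)                   # (batch, word_dim)
        c = self.char_embed(char_ids)                   # (batch, len, char_dim)
        _, (h, _) = self.char_lstm(c)                   # h: (2, batch, word_dim // 2)
        c_vec = torch.cat([h[0], h[1]], dim=-1)         # (batch, word_dim)
        return w + c_vec                                # simple additive injection

emb = CharAwareEmbedding(vocab_size=1000, char_vocab_size=60)
word_ids = torch.tensor([5, 17])
char_ids = torch.randint(0, 60, (2, 8))                 # two words, 8 characters each (padded)
print(emb(word_ids, char_ids).shape)                    # torch.Size([2, 64])
```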
...