Hypothesis Only Baselines in Natural Language Inference
TLDR
This approach, referred to as a hypothesis-only model, significantly outperforms a majority-class baseline across a number of NLI datasets, suggesting that statistical irregularities in some datasets allow a model to perform NLI beyond what should be achievable without access to the context.
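As a concrete illustration, a hypothesis-only baseline can be as simple as a bag-of-words classifier that never sees the premise. The sketch below uses scikit-learn; the toy triples and label strings are illustrative, not drawn from the paper's datasets.

```python
# Minimal hypothesis-only NLI baseline: a classifier that never sees the premise.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# (premise, hypothesis, label) triples; the premise is deliberately ignored.
train = [
    ("A man is playing a guitar.", "A person is making music.", "entailment"),
    ("A man is playing a guitar.", "Nobody is making any sound.", "contradiction"),
    ("A dog runs through a field.", "An animal is outdoors.", "entailment"),
    ("A dog runs through a field.", "A cat sleeps on a couch.", "contradiction"),
    ("Two kids are at a park.", "The children might be siblings.", "neutral"),
    ("A woman reads a book.", "She may be studying for an exam.", "neutral"),
]
hypotheses = [h for _, h, _ in train]
labels = [y for _, _, y in train]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(hypotheses, labels)  # trained on hypotheses alone

# If this beats the majority-class baseline on a real NLI test set, the
# hypotheses themselves carry label-predictive statistical irregularities.
print(model.predict(["Nobody is making any sound."]))
```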
Gender Bias in Coreference Resolution
TLDR
A novel, Winograd schema-style set of minimal pair sentences that differ only by pronoun gender are introduced, and systematic gender bias in three publicly-available coreference resolution systems is evaluated and confirmed.
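The minimal-pair idea translates directly into a simple audit harness. In this sketch the template, occupation list, and `resolve` interface are hypothetical stand-ins for the paper's schemas and the evaluated systems.

```python
# Winograd-schema-style minimal pairs that differ only in pronoun gender.
# A coreference system that resolves "he" and "she" to different antecedents
# in otherwise identical sentences exhibits gender bias.
TEMPLATE = "The {occupation} told the customer that {pronoun} could not offer a refund."

def minimal_pair(occupation):
    """Return two sentences identical except for the pronoun."""
    return (TEMPLATE.format(occupation=occupation, pronoun="he"),
            TEMPLATE.format(occupation=occupation, pronoun="she"))

def audit(resolve, occupations):
    """`resolve(sentence)` is any coreference system returning the pronoun's
    chosen antecedent; flag occupations where the two variants disagree."""
    for occ in occupations:
        male, female = minimal_pair(occ)
        if resolve(male) != resolve(female):
            print(f"Gender-sensitive resolution for: {occ!r}")

# audit(my_coref_system, ["mechanic", "nurse", "doctor", "secretary"])
```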
Polylingual Topic Models
TLDR
This work introduces a polylingual topic model that discovers topics aligned across multiple languages and demonstrates its usefulness in supporting machine translation and tracking topic trends across languages.
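At its core, a polylingual topic model ties the documents in a multilingual tuple to one shared topic distribution while keeping per-language topic-word distributions. A toy collapsed Gibbs sampler under those assumptions might look like the following; the hyperparameters and data layout are illustrative.

```python
# Toy collapsed Gibbs sampler for a polylingual topic model: each document
# TUPLE (one document per language) shares a single topic distribution,
# while each language keeps its own topic-word distributions.
import random
from collections import defaultdict

def pltm_gibbs(tuples, vocab_sizes, K=5, alpha=0.1, beta=0.01, iters=200):
    """tuples: list of dicts {language: [word_id, ...]}."""
    n_dk = defaultdict(int)      # topic counts per tuple (shared across languages)
    n_lkw = defaultdict(int)     # word counts per (language, topic, word)
    n_lk = defaultdict(int)      # totals per (language, topic)
    z = {}                       # current topic of each token

    def weight(d, lang, w, k):
        return ((n_dk[d, k] + alpha) *
                (n_lkw[lang, k, w] + beta) /
                (n_lk[lang, k] + beta * vocab_sizes[lang]))

    # initialize topic assignments at random
    for d, docs in enumerate(tuples):
        for lang, words in docs.items():
            for i, w in enumerate(words):
                k = random.randrange(K)
                z[d, lang, i] = k
                n_dk[d, k] += 1; n_lkw[lang, k, w] += 1; n_lk[lang, k] += 1

    for _ in range(iters):
        for d, docs in enumerate(tuples):
            for lang, words in docs.items():
                for i, w in enumerate(words):
                    k = z[d, lang, i]   # remove the token's current assignment
                    n_dk[d, k] -= 1; n_lkw[lang, k, w] -= 1; n_lk[lang, k] -= 1
                    probs = [weight(d, lang, w, t) for t in range(K)]
                    k = random.choices(range(K), weights=probs)[0]
                    z[d, lang, i] = k   # resample from the conditional
                    n_dk[d, k] += 1; n_lkw[lang, k, w] += 1; n_lk[lang, k] += 1
    return z, n_dk, n_lkw
```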
Noise reduction and targeted exploration in imitation learning for Abstract Meaning Representation parsing
TLDR
This work introduces two novel extensions, noise reduction and targeted exploration, achieves state-of-the-art results, and improves upon standard transition-based parsing by 4.7 F1 points.
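One way to picture targeted exploration is a roll-in that deviates from the expert only where the learner is uncertain, concentrating exploration on the states that need it. The sketch below is schematic under that assumption; `env`, `policy`, `expert_action`, and `uncertainty` are hypothetical stand-ins, not the paper's actual components.

```python
# Schematic imitation-learning roll-in with targeted exploration: instead of
# always following the expert (exact imitation) or always the learner
# (standard DAgger-style roll-in), deviate from the expert only at steps
# where the learner is uncertain.
def collect_trajectory(env, policy, expert_action, uncertainty, tau=0.2):
    """Roll in, labelling every visited state with the expert's action."""
    examples, state = [], env.reset()
    while not env.done(state):
        a_exp = expert_action(state)
        examples.append((state, a_exp))            # expert supervises every state
        if uncertainty(policy, state) > tau:
            state = env.step(state, policy(state)) # targeted: explore here
        else:
            state = env.step(state, a_exp)         # otherwise stay on expert path
    return examples

# data = collect_trajectory(parser_env, parser_policy, oracle, classifier_margin)
```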
Meta-Learning Extractors for Music Source Separation
We propose a hierarchical meta-learning-inspired model for music source separation (Meta-TasNet) in which a generator model is used to predict the weights of individual extractor models.
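The "generator predicts extractor weights" idea is a hypernetwork pattern. A minimal PyTorch sketch of that pattern might look like this; the sizes and per-source embedding scheme are chosen for illustration, not taken from the Meta-TasNet architecture.

```python
# Toy hypernetwork sketch: a generator network predicts the convolution
# weights of a per-source extractor, so one generator parameterizes a
# separate extractor for each source (vocals, drums, bass, ...).
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightGenerator(nn.Module):
    def __init__(self, n_sources, channels=8, kernel=3):
        super().__init__()
        self.shape = (channels, 1, kernel)                 # extractor conv weight
        self.embed = nn.Embedding(n_sources, 32)           # one code per source
        self.to_weights = nn.Linear(32, channels * kernel)

    def forward(self, mixture, source_id):
        """mixture: (batch, 1, time); returns extractor features for one source."""
        code = self.embed(torch.tensor([source_id]))
        w = self.to_weights(code).view(self.shape)         # generated conv kernel
        return F.conv1d(mixture, w, padding=1)             # conv with predicted weights

gen = WeightGenerator(n_sources=4)
mix = torch.randn(2, 1, 16000)          # a short stretch of toy audio
vocals_features = gen(mix, source_id=0) # same generator, per-source extractors
```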
Programming with a Differentiable Forth Interpreter
TLDR
An end-to-end differentiable interpreter for the programming language Forth which enables programmers to write program sketches with slots that can be filled with behaviour trained from program input-output data, and shows empirically that this interpreter is able to effectively leverage different levels of prior program structure and learn complex behaviours such as sequence sorting and addition.
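The enabling trick is that execution itself is made differentiable, so a slot can be filled by gradient descent on input-output pairs. The toy below shows that mechanism with a single slot softly selecting among four candidate operations; it is an analogue of the idea, not the Forth interpreter itself.

```python
# A differentiable "slot": execution is a softmax-weighted mixture over
# candidate operations, so the right behaviour is learnable from data.
import torch

ops = [lambda a, b: a + b, lambda a, b: a - b,
       lambda a, b: torch.maximum(a, b), lambda a, b: torch.minimum(a, b)]
logits = torch.zeros(len(ops), requires_grad=True)     # the learnable slot

def run_sketch(a, b):
    """Soft execution: mix the results of all candidate ops."""
    weights = torch.softmax(logits, dim=0)
    return sum(w * op(a, b) for w, op in zip(weights, ops))

opt = torch.optim.Adam([logits], lr=0.1)
a, b = torch.randn(100), torch.randn(100)
target = torch.maximum(a, b)                           # hidden behaviour to learn
for _ in range(200):
    opt.zero_grad()
    loss = ((run_sketch(a, b) - target) ** 2).mean()
    loss.backward()
    opt.step()
print(torch.softmax(logits, dim=0))  # weight should concentrate on `max`
```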
A Discriminative Model for Joint Morphological Disambiguation and Dependency Parsing
TLDR
A discriminative model that jointly infers morphological properties and syntactic structures is proposed that outperforms both a baseline tagger in morphological disambiguation, and a pipeline parser in head selection.
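A minimal sketch of what joint inference buys: enumerate (morphological analysis, head choice) pairs and score them together, so attachment preferences can rescue a wrong tag and vice versa. The candidate sets, features, and weights below are illustrative stand-ins for a trained discriminative model.

```python
# Schematic joint decoding over tiny candidate sets via exhaustive argmax.
from itertools import product

def joint_decode(candidate_tags, candidate_heads, score):
    """Argmax over joint structures (fine for small candidate sets)."""
    return max(product(candidate_tags, candidate_heads),
               key=lambda pair: score(*pair))

# Toy example: an ambiguous token whose case marking interacts with attachment.
tags = [("NOUN", "Nominative"), ("NOUN", "Accusative")]
heads = ["verb", "preposition"]

def score(tag, head):          # a stand-in for w . f(tag, head)
    feats = {("Accusative", "verb"): 2.0, ("Nominative", "verb"): 0.5,
             ("Accusative", "preposition"): 0.1, ("Nominative", "preposition"): 1.0}
    return feats.get((tag[1], head), 0.0)

print(joint_decode(tags, heads, score))  # picks the globally best pair
```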
Language Modeling for Morphologically Rich Languages: Character-Aware Modeling for Word-Level Prediction
TLDR
The main technical contribution of this work is a novel method for injecting subword-level information into semantic word vectors, integrated into the neural language modeling training, to facilitate word-level prediction.
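One common way to inject subword-level information is to compose a word's vector from shared character n-gram embeddings (fastText-style), so rare inflected forms share parameters with related forms. The sketch below illustrates that general pattern rather than the paper's specific integration into language-model training; the hashing scheme and sizes are illustrative.

```python
# A word's representation = its own vector + the mean of its character
# n-gram vectors, drawn from a shared hashed embedding table.
import numpy as np

DIM, BUCKETS = 50, 10000
rng = np.random.default_rng(0)
word_emb = {}                                            # learned per-word vectors
ngram_emb = rng.normal(scale=0.1, size=(BUCKETS, DIM))   # shared n-gram table

def char_ngrams(word, n_min=3, n_max=5):
    padded = f"<{word}>"
    return [padded[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(padded) - n + 1)]

def embed(word):
    grams = char_ngrams(word)
    sub = np.mean([ngram_emb[hash(g) % BUCKETS] for g in grams], axis=0)
    return word_emb.get(word, np.zeros(DIM)) + sub

# Unseen inflections still get informative vectors via shared subword units:
v1, v2 = embed("spielen"), embed("spielte")
print(float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))))
```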
UCL+Sheffield at SemEval-2016 Task 8: Imitation learning for AMR parsing with an alpha-bound
TLDR
A novel transition-based parsing algorithm for abstract meaning representation parsing using exact imitation learning, in which the parser learns a statistical model by imitating an expert's actions on the training data, with an α-bound applied as a simple noise reduction technique.
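One plausible minimal reading of such a bound as a noise filter: when extracting training examples from the expert, discard actions whose cost exceeds the best action's cost by more than α, so noisy, implausible labels never reach the classifier. `states`, `actions`, and `expert_cost` below are hypothetical stand-ins, not the paper's implementation.

```python
# An alpha-bound as a noise-reduction filter over expert-labelled examples.
def alpha_bound_examples(states, actions, expert_cost, alpha=1.0):
    examples = []
    for s in states:
        costs = {a: expert_cost(s, a) for a in actions}
        best = min(costs.values())
        kept = {a: c for a, c in costs.items() if c <= best + alpha}
        examples.append((s, kept))   # train only on actions within the bound
    return examples
```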
Improving morphology induction by learning spelling rules
TLDR
A Bayesian model for simultaneously inducing both morphology and spelling rules is developed and it is shown that the addition of spelling rules improves performance over the baseline morphology-only model.
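To see why spelling rules matter, consider e-deletion: without it, "making" forces the spurious stem "mak". The toy below enumerates (stem, suffix, rule) analyses, with a stub lexicon standing in for the paper's Bayesian scoring.

```python
# Candidate analyses as (stem, suffix, rule) triples, so "making" can be
# analysed as make + ing via an e-deletion rule.
KNOWN_STEMS = {"make", "bake", "walk", "jump"}
SUFFIXES = {"ing", "ed", "s", ""}

def analyses(word):
    out = []
    for i in range(1, len(word) + 1):
        stem, suffix = word[:i], word[i:]
        if suffix in SUFFIXES:
            out.append((stem, suffix, "none"))
            # e-deletion: stem-final "e" dropped before a vowel-initial suffix
            if suffix and suffix[0] in "aeiou":
                out.append((stem + "e", suffix, "e-deletion"))
    return out

def best(word):
    # stand-in for the posterior: prefer analyses whose stem is attested
    return max(analyses(word), key=lambda a: (a[0] in KNOWN_STEMS, -len(a[2])))

print(best("making"))   # ('make', 'ing', 'e-deletion')
```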
...