Hypothesis Only Baselines in Natural Language Inference
- Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, Benjamin Van Durme
- Computer Science, SEMEVAL
- 2 May 2018
This approach, referred to as a hypothesis-only model, significantly outperforms a majority-class baseline across a number of NLI datasets, suggesting that statistical irregularities may allow a model to perform NLI in some datasets beyond what should be achievable without access to the context.
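The idea can be sketched in a few lines. This is a minimal toy illustration, not the paper's datasets or models: the classifier here is a simple per-label word-count scorer, the example triples are invented, and the point is only that the premise is discarded entirely.

```python
# Hypothesis-only NLI baseline, toy sketch: predict the label from the
# hypothesis alone, never looking at the premise.
from collections import Counter, defaultdict

# Toy (premise, hypothesis, label) triples; only the hypothesis is used.
train = [
    ("A man plays guitar.", "Nobody is playing music.", "contradiction"),
    ("A dog runs outside.", "No animal is outside.", "contradiction"),
    ("A woman reads a book.", "Someone is reading.", "entailment"),
    ("Kids play soccer.", "Someone is playing a sport.", "entailment"),
]

# Count, per label, how often each hypothesis word occurs.
word_counts = defaultdict(Counter)
for _premise, hypothesis, label in train:
    word_counts[label].update(hypothesis.lower().split())

def predict(hypothesis):
    """Score each label by word overlap with that label's hypotheses."""
    words = hypothesis.lower().split()
    return max(word_counts, key=lambda lbl: sum(word_counts[lbl][w] for w in words))

print(predict("Nobody is outside."))
```

Statistical irregularities of exactly this kind (e.g. negation words correlating with the "contradiction" label) are what let hypothesis-only baselines beat majority class on biased datasets.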
Gender Bias in Coreference Resolution
- Rachel Rudinger, Jason Naradowsky, Brian Leonard, Benjamin Van Durme
- Computer Science, NAACL
- 25 April 2018
A novel, Winograd schema-style set of minimal pair sentences that differ only by pronoun gender are introduced, and systematic gender bias in three publicly-available coreference resolution systems is evaluated and confirmed.
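A minimal-pair probe of this kind can be sketched as follows; the template sentence here is illustrative and not taken from the paper's dataset.

```python
# Winograd-schema-style minimal pair: two sentences that differ ONLY by
# pronoun gender (illustrative example, not the actual evaluation data).
TEMPLATE = "The surgeon could not operate because {pron} was exhausted."

def minimal_pair(template):
    """Return two sentences differing only in the pronoun slot."""
    return template.format(pron="he"), template.format(pron="she")

male, female = minimal_pair(TEMPLATE)

# A coreference system shows gender bias if it resolves the pronoun to the
# occupation in one member of the pair but not the other.
diff = [(a, b) for a, b in zip(male.split(), female.split()) if a != b]
print(diff)
```

Because the sentences differ in exactly one token, any difference in a system's coreference decisions is attributable to pronoun gender alone.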
Polylingual Topic Models
- David Mimno, H. Wallach, Jason Naradowsky, David A. Smith, A. McCallum
- Computer Science, Linguistics, EMNLP
- 6 August 2009
This work introduces a polylingual topic model that discovers topics aligned across multiple languages and demonstrates its usefulness in supporting machine translation and tracking topic trends across languages.
Noise reduction and targeted exploration in imitation learning for Abstract Meaning Representation parsing
This work achieves state-of-the-art results, improving upon standard transition-based parsing by 4.7 F1 points, and introduces two novel extensions: noise reduction and targeted exploration.
Meta-Learning Extractors for Music Source Separation
- David Samuel, Aditya Ganeshan, Jason Naradowsky
- Computer Science, ICASSP - IEEE International Conference on…
- 17 February 2020
We propose a hierarchical meta-learning-inspired model for music source separation (Meta-TasNet) in which a generator model is used to predict the weights of individual extractor models.
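The generator-predicts-extractor-weights idea can be illustrated with a hypernetwork-style toy. Everything below is an assumed sketch with toy dimensions, not the Meta-TasNet architecture: a "generator" matrix maps an instrument embedding to the weights of a tiny linear "extractor".

```python
# Hypernetwork-style sketch: a generator predicts extractor weights
# conditioned on which source (instrument) should be extracted.
import numpy as np

rng = np.random.default_rng(0)

# Generator parameters (toy sizes): 4-d instrument embedding -> (3, 5) weights.
G = rng.normal(size=(4, 3 * 5))

def extractor_weights(instrument_emb):
    """Predict a (3, 5) extractor weight matrix from a 4-d embedding."""
    return (instrument_emb @ G).reshape(3, 5)

def extract(mixture, instrument_emb):
    """Apply the generated extractor to a 5-d 'mixture' frame."""
    return extractor_weights(instrument_emb) @ mixture

mixture = rng.normal(size=5)
drums = extract(mixture, rng.normal(size=4))
bass = extract(mixture, rng.normal(size=4))
# Different instrument embeddings yield different extractors, so the same
# mixture produces different separated outputs.
```

The design point is parameter sharing: one generator serves all sources, instead of training a fully independent extractor per instrument.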
Programming with a Differentiable Forth Interpreter
An end-to-end differentiable interpreter for the programming language Forth enables programmers to write program sketches with slots that can be filled with behaviour trained from program input-output data; empirically, the interpreter effectively leverages different levels of prior program structure and learns complex behaviours such as sequence sorting and addition.
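The "sketch with a slot" idea can be made concrete with a toy, non-differentiable analogue. The interpreter, ops, and slot-filling-by-search below are all invented for illustration; the paper instead fills the slot with a differentiable learned behaviour, trained by gradient descent through the interpreter.

```python
# Toy Forth-like stack machine with a program sketch containing one
# unspecified SLOT, filled here by discrete search over candidate ops
# against input-output examples (the paper's slot is learned, not searched).
def run(program, stack):
    stack = list(stack)
    for op in program:
        if op == "dup":
            stack.append(stack[-1])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "swap":
            stack[-1], stack[-2] = stack[-2], stack[-1]
        else:
            raise ValueError(op)
    return stack

CANDIDATES = ["dup", "add", "swap"]

def fill_slot(sketch, examples):
    """Pick the candidate op for SLOT that satisfies all I/O examples."""
    for cand in CANDIDATES:
        prog = [cand if op == "SLOT" else op for op in sketch]
        if all(run(prog, inp) == out for inp, out in examples):
            return cand
    return None

# Sketch: duplicate the top of stack, then <SLOT>; the examples (doubling a
# number) force SLOT to be resolved as "add".
slot = fill_slot(["dup", "SLOT"], [([3], [6]), ([5], [10])])
```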
A Discriminative Model for Joint Morphological Disambiguation and Dependency Parsing
A discriminative model that jointly infers morphological properties and syntactic structures is proposed; it outperforms both a baseline tagger in morphological disambiguation and a pipeline parser in head selection.
Language Modeling for Morphologically Rich Languages: Character-Aware Modeling for Word-Level Prediction
- D. Gerz, Ivan Vulic, E. Ponti, Jason Naradowsky, Roi Reichart, A. Korhonen
- Computer ScienceTACL
- 13 July 2018
The main technical contribution of this work is a novel method for injecting subword-level information into semantic word vectors, integrated into the neural language modeling training, to facilitate word-level prediction.
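One common way to inject subword information into word vectors is to build each word's vector from its character n-grams; the sketch below is an assumed fastText-style illustration of that general idea (toy dimensions, random embeddings), not the paper's exact method.

```python
# Character n-gram word vectors, toy sketch: a word's vector is the sum of
# embeddings for its character trigrams, so morphologically related words
# share vector components.
import numpy as np

DIM = 8
rng = np.random.default_rng(0)
ngram_emb = {}  # lazily populated n-gram embedding table

def char_ngrams(word, n=3):
    """Character n-grams of the word with boundary markers, e.g. '<wa'."""
    padded = f"<{word}>"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def word_vector(word):
    """Sum the (randomly initialised) embeddings of the word's n-grams."""
    vec = np.zeros(DIM)
    for g in char_ngrams(word):
        if g not in ngram_emb:
            ngram_emb[g] = rng.normal(size=DIM)
        vec += ngram_emb[g]
    return vec

# 'walked' and 'walking' share the n-grams <wa, wal, alk, so their vectors
# share components even if neither word was seen at training time.
v1, v2 = word_vector("walked"), word_vector("walking")
```

This kind of subword sharing is what makes word-level prediction tractable for morphologically rich languages, where most surface forms are rare.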
UCL+Sheffield at SemEval-2016 Task 8: Imitation learning for AMR parsing with an alpha-bound
A novel transition-based parsing algorithm for the abstract meaning representation parsing task using exact imitation learning, in which the parser learns a statistical model by imitating the actions of an expert on the training data; an α-bound is applied as a simple noise reduction technique.
Improving morphology induction by learning spelling rules
A Bayesian model for simultaneously inducing both morphology and spelling rules is developed and it is shown that the addition of spelling rules improves performance over the baseline morphology-only model.
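What a "spelling rule" buys a morphology model can be shown with one toy rule. The rule and function below are invented for illustration (the paper induces such rules with a Bayesian model rather than hard-coding them): with e-deletion available, a single stem "make" can explain both "makes" and "making".

```python
# Toy spelling rule: stem-final 'e' is dropped before a vowel-initial suffix,
# so stem+suffix analyses can explain surface forms like "making".
def apply_suffix(stem, suffix):
    """Concatenate stem and suffix, applying the e-deletion rule."""
    if stem.endswith("e") and suffix[0] in "aeiou":
        return stem[:-1] + suffix
    return stem + suffix

print(apply_suffix("make", "ing"))  # "making"
print(apply_suffix("make", "s"))    # "makes"
print(apply_suffix("walk", "ing"))  # "walking"
```

Without the rule, a morphology-only model must either posit a separate stem "mak" or treat "making" as unanalysable, which is why adding spelling rules improves over the baseline.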