NewsQA: A Machine Comprehension Dataset
TLDR
NewsQA, a challenging machine comprehension dataset of over 100,000 human-generated question-answer pairs, is presented and analysis confirms that NewsQA demands abilities beyond simple word matching and recognizing textual entailment.
Deep Reinforcement Learning that Matters
TLDR
Challenges posed by reproducibility, proper experimental techniques, and reporting procedures are investigated and guidelines to make future results in deep RL more reproducible are suggested.
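A minimal sketch of one practice in the spirit of the paper's guidelines: running the same algorithm under several random seeds and reporting aggregate statistics rather than a single best run. The `train_and_evaluate` function is a hypothetical stand-in for a real deep RL training loop, and the returned score is a placeholder.

```python
# Run the same experiment under multiple seeds and report mean +/- std,
# rather than a single (possibly cherry-picked) run.
import random
import statistics

import numpy as np
import torch

def train_and_evaluate(seed: int) -> float:
    """Hypothetical training run; replace with a real RL pipeline."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    # ... train an agent here, then return its mean evaluation return ...
    return float(np.random.default_rng(seed).normal(100.0, 10.0))  # placeholder score

scores = [train_and_evaluate(seed) for seed in range(5)]
print(f"return: {statistics.mean(scores):.1f} +/- "
      f"{statistics.stdev(scores):.1f} over {len(scores)} seeds")
```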
Learning Representations by Maximizing Mutual Information Across Views
TLDR
This work develops a model which learns image representations that significantly outperform prior methods on the tasks the authors consider, and extends this model to use mixture-based representations, where segmentation behaviour emerges as a natural side-effect.
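A minimal sketch of the underlying idea, assuming an InfoNCE-style contrastive objective: features from two augmented views of the same image should score higher together than against other images in the batch. The encoder and view generation are omitted; the random tensors below are illustrative placeholders, not the paper's actual model.

```python
# Contrastive objective across two views: matched pairs sit on the diagonal
# of the batch similarity matrix and are classified against all mismatches.
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: (batch, dim) features from two views of the same images."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature      # (batch, batch) similarity matrix
    targets = torch.arange(z1.size(0))      # matching pairs lie on the diagonal
    return F.cross_entropy(logits, targets)

z1, z2 = torch.randn(8, 128), torch.randn(8, 128)  # placeholder encoder outputs
print(info_nce(z1, z2))
```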
Augmented CycleGAN: Learning Many-to-Many Mappings from Unpaired Data
TLDR
This work proposes a new model, called Augmented CycleGAN, which learns many-to-many mappings between domains, and examines it qualitatively and quantitatively on several image datasets.
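A minimal sketch of the many-to-many idea, assuming each direction of the mapping is conditioned on an auxiliary latent code so that one input can translate to many outputs. The module sizes and vector inputs are illustrative placeholders, not the paper's generator architecture.

```python
# A mapper from domain A to domain B that also consumes a latent code z;
# sampling different z values yields different translations of the same input.
import torch
import torch.nn as nn

class LatentConditionedMapper(nn.Module):
    """Maps a domain-A vector plus a latent code z to a domain-B vector."""
    def __init__(self, dim_a: int = 16, dim_b: int = 16, dim_z: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_a + dim_z, 64), nn.ReLU(), nn.Linear(64, dim_b)
        )

    def forward(self, a: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([a, z], dim=1))

g_ab = LatentConditionedMapper()
a = torch.randn(1, 16)
b1 = g_ab(a, torch.randn(1, 8))  # one plausible translation of a
b2 = g_ab(a, torch.randn(1, 8))  # a different translation of the same a
```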
Learning with Pseudo-Ensembles
TLDR
A novel regularizer is presented that makes the behavior of a pseudo-ensemble robust to the noise process generating it; it extends naturally to the semi-supervised setting, where it produces state-of-the-art results.
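A minimal sketch of the regularization idea, assuming dropout as the noise process: two stochastic forward passes through the same network sample two "child" models of the pseudo-ensemble, and their disagreement is penalized. The network and loss weighting are illustrative, not the paper's exact setup.

```python
# Consistency regularizer: penalize disagreement between two noisy forward
# passes of the same network (dropout plays the role of the noise process).
import torch
import torch.nn as nn
import torch.nn.functional as F

net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Dropout(0.5), nn.Linear(64, 10))
net.train()  # keep dropout active so each pass samples a different child model

x = torch.randn(32, 20)
out_a, out_b = net(x), net(x)                    # two members of the pseudo-ensemble
consistency = F.mse_loss(out_a, out_b.detach())  # penalize their disagreement
# On labeled data the total loss would be task_loss + lambda * consistency;
# on unlabeled data the consistency term alone applies (semi-supervised case).
```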
Learning Algorithms for Active Learning
TLDR
A model that learns active learning algorithms via metalearning is presented; it jointly learns a data representation, an item selection heuristic, and a prediction function for a distribution of related tasks.
Iterative Alternating Neural Attention for Machine Reading
TLDR
This work proposes a novel neural attention architecture for machine comprehension tasks such as answering Cloze-style queries about a document; it outperforms state-of-the-art baselines on standard machine comprehension benchmarks such as CNN news articles and the Children’s Book Test dataset.
Machine Comprehension by Text-to-Text Neural Question Generation
TLDR
A recurrent neural model is proposed that generates natural-language questions from documents, conditioned on answers, and is fine-tuned using policy gradient techniques to maximize several rewards that measure question quality.
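A minimal sketch of REINFORCE-style fine-tuning for a sequence generator: sample an output sequence, score it with a reward, and scale the sampled tokens' log-likelihood by that reward. The tiny linear policy and the constant reward below are illustrative placeholders, not the paper's recurrent model or its question-quality rewards.

```python
# One REINFORCE update: sampled tokens whose reward is high have their
# log-likelihood pushed up; the reward would come from question-quality metrics.
import torch
import torch.nn as nn

vocab, hidden = 50, 32
policy = nn.Linear(hidden, vocab)  # stand-in for a recurrent question generator
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

state = torch.randn(1, hidden)     # stand-in for a document/answer encoding
dist = torch.distributions.Categorical(logits=policy(state))
tokens = dist.sample((5,))         # sample a 5-token "question"
log_prob = dist.log_prob(tokens).sum()

reward = 1.0                       # placeholder for a question-quality score
loss = -reward * log_prob          # REINFORCE objective
optimizer.zero_grad()
loss.backward()
optimizer.step()
```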
Calibrating Energy-based Generative Adversarial Networks
TLDR
A flexible adversarial training framework is proposed, and it is proved that this framework not only ensures the generator converges to the true data distribution, but also enables the discriminator to retain density information at the global optimum.
Data-Efficient Reinforcement Learning with Self-Predictive Representations
TLDR
The method, Self-Predictive Representations (SPR), trains an agent to predict its own latent state representations multiple steps into the future, using a learned transition model and a target encoder whose parameters are an exponential moving average of the agent's.
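A minimal sketch of the two mechanisms the summary names, with illustrative modules and shapes: a target encoder maintained as an exponential moving average (EMA) of the online encoder, and a transition model whose latent predictions are trained to match the target encoder's outputs. The linear encoders and single-step loss are placeholders for the paper's full architecture.

```python
# Online encoder trains by gradient descent; target encoder only tracks it
# via EMA; the transition model predicts the next latent state from (z, action).
import torch
import torch.nn as nn
import torch.nn.functional as F

online = nn.Linear(64, 32)                 # online encoder (trained by SGD)
target = nn.Linear(64, 32)                 # target encoder (updated by EMA only)
target.load_state_dict(online.state_dict())
transition = nn.Linear(32 + 4, 32)         # next latent from (latent, action)

@torch.no_grad()
def ema_update(tau: float = 0.99):
    for p_t, p_o in zip(target.parameters(), online.parameters()):
        p_t.mul_(tau).add_(p_o, alpha=1.0 - tau)

obs, next_obs = torch.randn(8, 64), torch.randn(8, 64)
action = torch.randn(8, 4)

z = online(obs)
z_pred = transition(torch.cat([z, action], dim=1))  # predicted next latent
with torch.no_grad():
    z_target = target(next_obs)                     # EMA-encoded next observation
loss = F.mse_loss(F.normalize(z_pred, dim=1), F.normalize(z_target, dim=1))
loss.backward()
ema_update()  # would follow the optimizer step in a real training loop
```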