NewsQA: A Machine Comprehension Dataset
NewsQA, a challenging machine comprehension dataset of over 100,000 human-generated question-answer pairs, is presented; analysis confirms that answering its questions demands abilities beyond simple word matching and recognizing textual entailment.
Deep Reinforcement Learning that Matters
- Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, D. Meger
- Computer Science, AAAI
- 19 September 2017
Challenges in reproducibility, proper experimental technique, and reporting procedures are investigated, and guidelines are suggested to make future results in deep RL more reproducible.
Learning Representations by Maximizing Mutual Information Across Views
This work develops a model which learns image representations that significantly outperform prior methods on the tasks the authors consider, and extends this model to use mixture-based representations, where segmentation behaviour emerges as a natural side-effect.
Augmented CycleGAN: Learning Many-to-Many Mappings from Unpaired Data
- Amjad Almahairi, Sai Rajeswar, Alessandro Sordoni, Philip Bachman, Aaron C. Courville
- Computer Science, ICML
- 27 February 2018
This work proposes a new model, called Augmented CycleGAN, which learns many-to-many mappings between domains, and examines it qualitatively and quantitatively on several image datasets.
Learning with Pseudo-Ensembles
A novel regularizer based on making the behavior of a pseudo-ensemble robust with respect to the noise process generating it is presented, which naturally extends to the semi-supervised setting, where it produces state-of-the-art results.
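The pseudo-ensemble regularizer described above can be illustrated with a minimal NumPy sketch. All names here are illustrative; the paper perturbs a deep network (e.g. via dropout on hidden activations), while this toy uses a linear model with input noise to show the core idea: outputs of noisy "child" models sampled from the parent should agree with the clean parent output.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x, w):
    # Toy linear model standing in for a deep network.
    return x @ w

def pseudo_ensemble_penalty(x, w, noise_std=0.1, n_samples=2):
    """Consistency penalty: mean squared disagreement between the clean
    parent output and outputs under sampled noise perturbations."""
    clean = model(x, w)
    penalty = 0.0
    for _ in range(n_samples):
        noisy_x = x + noise_std * rng.standard_normal(x.shape)
        penalty += np.mean((model(noisy_x, w) - clean) ** 2)
    return penalty / n_samples
```

Because the penalty needs no labels, it applies directly to unlabeled examples, which is what extends the method to the semi-supervised setting.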
Learning Algorithms for Active Learning
A model that learns active learning algorithms via meta-learning is presented; it jointly learns a data representation, an item-selection heuristic, and a prediction function across a distribution of related tasks.
Iterative Alternating Neural Attention for Machine Reading
This work proposes a novel neural attention architecture for machine comprehension tasks such as answering Cloze-style queries about a document; it outperforms state-of-the-art baselines on standard machine comprehension benchmarks such as CNN news articles and the Children’s Book Test dataset.
Machine Comprehension by Text-to-Text Neural Question Generation
A recurrent neural model is proposed that generates natural-language questions from documents, conditioned on answers, and the model is fine-tuned using policy-gradient techniques to maximize several rewards that measure question quality.
Calibrating Energy-based Generative Adversarial Networks
- Zihang Dai, Amjad Almahairi, Philip Bachman, E. Hovy, Aaron C. Courville
- Computer Science, ICLR
- 1 February 2017
A flexible adversarial training framework is proposed, and it is proved that this framework not only ensures the generator converges to the true data distribution, but also enables the discriminator to retain the density information at the global optimum.
Data-Efficient Reinforcement Learning with Self-Predictive Representations
- Max Schwarzer, Ankesh Anand, Rishab Goel, R. Devon Hjelm, Aaron C. Courville, Philip Bachman
- Computer Science, ICLR
- 12 July 2020
The method, Self-Predictive Representations (SPR), trains an agent to predict its own latent state representations multiple steps into the future, using a learned transition model and a target encoder whose parameters are an exponential moving average of the agent’s encoder parameters.
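Two ingredients of the summary above can be sketched in a few lines of NumPy. This is an illustrative fragment, not the paper's implementation: parameters are plain dicts of arrays, latents are plain vectors, and the transition model is omitted. It shows the exponential-moving-average target update and a cosine-similarity prediction objective of the kind SPR uses.

```python
import numpy as np

def ema_update(online, target, tau=0.99):
    """Target encoder parameters as an exponential moving average
    of the online (agent) encoder parameters."""
    return {k: tau * target[k] + (1.0 - tau) * online[k] for k in target}

def cosine_loss(pred, tgt):
    """Negative cosine similarity between a predicted future latent
    and the target encoder's latent for the observed future state."""
    p = pred / np.linalg.norm(pred)
    t = tgt / np.linalg.norm(tgt)
    return -float(p @ t)
```

With `tau` close to 1 the target network changes slowly, which stabilizes the prediction targets while the online encoder is trained.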