Publications
Deep Graph Infomax
TLDR
Deep Graph Infomax (DGI) is presented, a general approach for learning node representations within graph-structured data in an unsupervised manner that is readily applicable to both transductive and inductive learning setups.
Learning deep representations by mutual information estimation and maximization
TLDR
It is shown that structure matters: incorporating knowledge about locality in the input into the objective can significantly improve a representation’s suitability for downstream tasks and is an important step towards flexible formulations of representation learning objectives for specific end-goals.
Mutual Information Neural Estimation
TLDR
A Mutual Information Neural Estimator (MINE) is presented that is linearly scalable in dimensionality as well as in sample size, trainable through back-prop, and strongly consistent, and applied to improve adversarially trained generative models.
MINE: Mutual Information Neural Estimation
This paper presents a Mutual Information Neural Estimator (MINE) that is linearly scalable in dimensionality as well as in sample size. MINE is trainable through back-propagation, and we prove that it is strongly consistent.
Learning Representations by Maximizing Mutual Information Across Views
TLDR
This work develops a model which learns image representations that significantly outperform prior methods on the tasks the authors consider, and extends this model to use mixture-based representations, where segmentation behaviour emerges as a natural side-effect.
Maximum-Likelihood Augmented Discrete Generative Adversarial Networks
TLDR
This work derives a novel and low-variance GAN objective using the discriminator's output that corresponds to the log-likelihood, which is proved to be consistent in theory and beneficial in practice.
Boundary-Seeking Generative Adversarial Networks
TLDR
This work introduces a method for training GANs with discrete data that uses the estimated difference measure from the discriminator to compute importance weights for generated samples, thus providing a policy gradient for training the generator.
Unsupervised State Representation Learning in Atari
TLDR
This work introduces a method that learns state representations by maximizing mutual information across spatially and temporally distinct features of a neural encoder of the observations, and introduces a new benchmark based on Atari 2600 games to evaluate representations by how well they capture the ground-truth state variables.
Deep learning for neuroimaging: a validation study
TLDR
The results show that deep learning methods are able to learn physiologically important representations and detect latent relations in neuroimaging data.
Data-Efficient Reinforcement Learning with Self-Predictive Representations
TLDR
The method, Self-Predictive Representations (SPR), trains an agent to predict its own latent state representations multiple steps into the future using an encoder which is an exponential moving average of the agent’s parameters and a learned transition model.
...