• Publications
Learning deep representations by mutual information estimation and maximization
This work investigates unsupervised learning of representations by maximizing mutual information between an input and the output of a deep neural network encoder.
Deep Graph Infomax
We present Deep Graph Infomax (DGI), a general approach for learning node representations within graph-structured data in an unsupervised manner, which is readily applicable to both transductive and inductive learning setups.
Mutual Information Neural Estimation
We present a Mutual Information Neural Estimator that is linearly scalable in dimensionality as well as in sample size, trainable through back-prop, and strongly consistent.
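As a toy illustration of the estimator described in this entry, the sketch below evaluates the Donsker-Varadhan lower bound on mutual information for a correlated Gaussian pair. A fixed bilinear critic stands in for the trained statistics network; the coefficient `a`, the correlation `rho`, and the sample size are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

# Donsker-Varadhan lower bound that MINE optimizes:
#   I(X;Y) >= E_P[T(x,y)] - log E_{P x P}[exp(T(x,y))]
# Here the "statistics network" T is a fixed bilinear critic T(x,y) = a*x*y,
# rather than a neural network trained by backprop, just to show the estimator.

rng = np.random.default_rng(0)
n, rho = 10_000, 0.9

# Correlated Gaussian pair: joint samples from P
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

# Samples from the product of marginals: shuffle y to break the pairing
y_shuffled = rng.permutation(y)

def critic(x, y, a=0.3):
    # Simple fixed critic; in MINE this is a neural net trained by backprop.
    return a * x * y

joint_term = critic(x, y).mean()
marginal_term = np.log(np.exp(critic(x, y_shuffled)).mean())
mi_lower_bound = joint_term - marginal_term

true_mi = -0.5 * np.log(1 - rho**2)  # closed form for bivariate Gaussians
print(f"DV lower bound: {mi_lower_bound:.3f}  (true MI = {true_mi:.3f})")
```

Because the critic is fixed rather than optimized, the bound is loose; training T to maximize the same objective is what tightens it toward the true mutual information.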
Learning Representations by Maximizing Mutual Information Across Views
We propose an approach to self-supervised representation learning based on maximizing mutual information between features extracted from multiple views of a shared context.
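The multi-view objective this entry describes can be sketched with an InfoNCE-style contrastive loss: features from two views of the same context should score higher with each other than with features from other contexts. The encoder outputs below are simulated stand-ins, and the temperature value is an assumption, not the paper's configuration.

```python
import numpy as np

# Contrastive sketch: two noisy "views" of the same underlying features.
rng = np.random.default_rng(1)
batch, dim = 8, 16

shared = rng.standard_normal((batch, dim))   # hypothetical shared context
view_a = shared + 0.1 * rng.standard_normal((batch, dim))
view_b = shared + 0.1 * rng.standard_normal((batch, dim))

def l2_normalize(z):
    return z / np.linalg.norm(z, axis=1, keepdims=True)

za, zb = l2_normalize(view_a), l2_normalize(view_b)
logits = za @ zb.T / 0.1  # cosine similarities with temperature 0.1

# Row-wise log-softmax; matching pairs sit on the diagonal.
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
info_nce_loss = -np.mean(np.diag(log_probs))
print(f"InfoNCE loss: {info_nce_loss:.3f}")
```

Minimizing this loss pushes each view toward its paired view and away from the other items in the batch, which is one standard way to maximize a lower bound on the mutual information between views.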
Maximum-Likelihood Augmented Discrete Generative Adversarial Networks
Despite the successes in capturing continuous distributions, the application of generative adversarial networks (GANs) to discrete settings, like natural language tasks, is rather restricted.
Deep learning for neuroimaging: a validation study
We show that deep learning methods are able to learn physiologically important representations and detect latent relations in neuroimaging data.
Assessing dynamic brain graphs of time-varying connectivity in fMRI data: Application to healthy controls and patients with schizophrenia
Graph theory-based analysis has been widely employed in brain imaging studies, and altered topological properties of brain connectivity have emerged as important features of mental diseases such as schizophrenia.
Boundary-Seeking Generative Adversarial Networks
We introduce a method for training GANs with discrete data that uses the estimated difference measure from the discriminator to compute importance weights for generated samples, thus providing a policy gradient for training the generator.
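The importance-weighting idea in this entry can be sketched in a few lines: discriminator scores on generated samples are exponentiated and normalized into weights, which then reweight the generator's log-likelihoods to form a policy-gradient surrogate. The scores and log-probabilities below are random stand-ins, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)
num_samples = 6

d_scores = rng.standard_normal(num_samples)       # discriminator logits D(x_i)
log_probs = rng.standard_normal(num_samples) - 3  # generator log p(x_i), hypothetical

# Normalized self-importance weights from the estimated density ratio exp(D)
weights = np.exp(d_scores)
weights /= weights.sum()

# REINFORCE-style surrogate: its gradient with respect to generator
# parameters (through log_probs) gives the policy-gradient training signal.
surrogate = np.sum(weights * log_probs)
print(f"weights sum to {weights.sum():.1f}, surrogate = {surrogate:.3f}")
```

Because the gradient flows only through the log-likelihood terms, this keeps training well-defined for discrete samples where backpropagating through the sampling step is not possible.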
Unsupervised State Representation Learning in Atari
State representation learning, or the ability to capture latent generative factors of an environment, is crucial for building intelligent agents that can perform a wide variety of tasks.