Importance Weighted Autoencoders
TLDR: We present the importance weighted autoencoder (IWAE), a generative model with the same architecture as the VAE, but which uses a strictly tighter log-likelihood lower bound derived from importance weighting.
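The tighter bound can be illustrated numerically. Below is a minimal sketch, not the paper's model: the 1-D Gaussian prior/likelihood, the deliberately mismatched proposal q, and all function names are illustrative assumptions. It estimates the k-sample bound L_k = E[log (1/k) Σ_i p(x, h_i)/q(h_i|x)], which increases toward log p(x) as k grows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D model (hypothetical, for illustration only):
#   prior       p(h)    = N(0, 1)
#   likelihood  p(x|h)  = N(h, 1)
#   proposal    q(h|x)  = N(x/2, 1)  -- right mean, but too wide vs. the true posterior
def log_p_joint(x, h):
    return -0.5 * (h**2 + (x - h)**2) - np.log(2 * np.pi)

def log_q(x, h):
    return -0.5 * (h - x / 2)**2 - 0.5 * np.log(2 * np.pi)

def iwae_bound(x, k, n_runs=20000):
    """Monte Carlo estimate of L_k = E[log (1/k) sum_i w_i], w_i = p(x,h_i)/q(h_i|x)."""
    h = x / 2 + rng.standard_normal((n_runs, k))   # h_i ~ q(h|x)
    log_w = log_p_joint(x, h) - log_q(x, h)        # log importance weights
    m = log_w.max(axis=1, keepdims=True)           # stable log-mean-exp over k samples
    return float(np.mean(m.squeeze() + np.log(np.mean(np.exp(log_w - m), axis=1))))

x = 1.5
print(iwae_bound(x, 1), iwae_bound(x, 5), iwae_bound(x, 50))  # bound tightens as k grows
```

With k = 1 the estimator reduces to the standard ELBO; because q is wider than the true posterior here, the gap is visible and shrinks as k increases.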
Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations
TLDR: We present the convolutional deep belief network, a hierarchical generative model which scales to realistic image sizes.
Isolating Sources of Disentanglement in Variational Autoencoders
TLDR: We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables.
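The total-correlation term can be made explicit. A standard form of this decomposition, sketched from memory rather than quoted from the paper (z_j denotes the j-th latent dimension, q(z) the aggregate posterior):

```latex
\mathbb{E}_{p(x)}\!\left[\mathrm{KL}\big(q(z \mid x)\,\|\,p(z)\big)\right]
  = \underbrace{I_q(x; z)}_{\text{index-code MI}}
  + \underbrace{\mathrm{KL}\Big(q(z)\,\Big\|\,\textstyle\prod_j q(z_j)\Big)}_{\text{total correlation}}
  + \underbrace{\textstyle\sum_j \mathrm{KL}\big(q(z_j)\,\|\,p(z_j)\big)}_{\text{dimension-wise KL}}
```

Penalizing the middle (total correlation) term encourages statistically independent latent dimensions.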
Optimizing Neural Networks with Kronecker-factored Approximate Curvature
TLDR: We propose an efficient method for approximating natural gradient descent in neural networks which we call Kronecker-Factored Approximate Curvature (K-FAC).
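The efficiency of a Kronecker-factored curvature approximation comes from the identity (A ⊗ G)^{-1} vec(V) = vec(G^{-1} V A^{-1}) for symmetric factors, so a per-layer update never materializes or inverts the full Kronecker product. A minimal numerical check of that identity (the random factors and sizes below are illustrative stand-ins, not K-FAC's actual statistics):

```python
import numpy as np

rng = np.random.default_rng(0)

def spd(n):
    """Random symmetric positive-definite matrix (stand-in for a Kronecker factor)."""
    m = rng.standard_normal((n, n))
    return m @ m.T + n * np.eye(n)

# Hypothetical per-layer factors: A ~ input-activation second moments,
# G ~ pre-activation-gradient second moments, as in an approximation F ≈ A ⊗ G.
A, G = spd(3), spd(4)
V = rng.standard_normal((4, 3))          # gradient of a 4x3 weight matrix

vec = lambda X: X.flatten(order="F")     # column-stacking vectorization

# Naive route: invert the full 12x12 Kronecker product.
naive = np.linalg.solve(np.kron(A, G), vec(V))

# Kronecker route: (A ⊗ G)^{-1} vec(V) = vec(G^{-1} V A^{-1}) for symmetric A, G.
cheap = vec(np.linalg.solve(G, V) @ np.linalg.inv(A))

print(np.allclose(naive, cheap))  # → True
```

The naive route costs cubic time in the product of the layer's dimensions, while the Kronecker route only inverts the two small factors.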
Ground truth dataset and baseline evaluations for intrinsic image algorithms
TLDR: We present a ground-truth dataset of intrinsic image decompositions for a variety of real-world objects.
Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation
TLDR: We extend the framework of natural policy gradient and propose to optimize both the actor and the critic using Kronecker-factored approximate curvature (K-FAC) with trust region; hence we call our method Actor Critic using KrONEcker-Factored Trust Region.
Structure Discovery in Nonparametric Regression through Compositional Kernel Search
TLDR: We define a space of kernel structures built compositionally by adding and multiplying a small number of base kernels, mirroring the scientific discovery process.
The Reversible Residual Network: Backpropagation Without Storing Activations
TLDR: We present the Reversible Residual Network (RevNet), a variant of ResNets where each layer's activations can be reconstructed exactly from the next layer's.
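The exact-reconstruction property can be sketched with an additive coupling between two channel groups. The residual functions F and G below are arbitrary stand-ins, not the paper's convolutional blocks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical residual functions (any deterministic maps work for the inverse).
F = lambda z: 0.5 * np.tanh(z)
G = lambda z: 0.5 * np.sin(z)

def rev_forward(x1, x2):
    """Forward pass of one reversible block: inputs split into two groups."""
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def rev_inverse(y1, y2):
    """Reconstruct the inputs exactly from the outputs -- no stored activations."""
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

x1, x2 = rng.standard_normal(5), rng.standard_normal(5)
y1, y2 = rev_forward(x1, x2)
r1, r2 = rev_inverse(y1, y2)
print(np.allclose(r1, x1) and np.allclose(r2, x2))  # → True
```

Because each block is invertible, backpropagation can recompute activations layer by layer from the output instead of caching them, trading extra compute for memory.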
Automatic Construction and Natural-Language Description of Nonparametric Regression Models
TLDR: This paper presents the beginnings of an automatic statistician, focusing on regression problems.
Picking Winning Tickets Before Training by Preserving Gradient Flow
TLDR: We propose a simple but effective method for pruning networks at initialization, thereby saving resources at training time as well.