
- Honglak Lee, Roger B. Grosse, Rajesh Ranganath, Andrew Y. Ng
- ICML
- 2009

There has been much interest in unsupervised learning of hierarchical generative models such as deep belief networks. Scaling such models to full-sized, high-dimensional images remains a difficult problem. To address this problem, we present the *convolutional deep belief network*, a hierarchical generative model which scales to realistic image sizes.…

- Roger B. Grosse, Micah K. Johnson, Edward H. Adelson, William T. Freeman
- 2009 IEEE 12th International Conference on…
- 2009

The intrinsic image decomposition aims to retrieve “intrinsic” properties of an image, such as shading and reflectance. To make it possible to quantitatively compare different approaches to this problem in realistic settings, we present a ground-truth dataset of intrinsic image decompositions for a variety of real-world objects. For each…

- Yuri Burda, Roger B. Grosse, Ruslan Salakhutdinov
- ArXiv
- 2015

The variational autoencoder (VAE; Kingma & Welling (2014)) is a recently proposed generative model pairing a top-down generative network with a bottom-up recognition network which approximates posterior inference. It typically makes strong assumptions about posterior inference, for instance that the posterior distribution is approximately factorial, and…
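
The factorial posterior mentioned in the snippet is typically a diagonal Gaussian, so the KL term of the VAE objective decomposes per latent dimension. A minimal sketch of that assumption and the reparameterization trick, with made-up toy dimensions (this is an illustration, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimension, not from the paper.
latent_dim = 4

def kl_diag_gaussian(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ).

    Because the approximate posterior factorizes over latent
    dimensions (the 'factorial' assumption), the KL term is a
    simple sum of per-dimension contributions."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
# so gradients can flow through the sampling step.
mu = rng.normal(size=latent_dim)
log_var = rng.normal(size=latent_dim)
eps = rng.normal(size=latent_dim)
z = mu + np.exp(0.5 * log_var) * eps

kl = kl_diag_gaussian(mu, log_var)
print(kl >= 0.0)  # prints True: KL divergence is non-negative
```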

- Honglak Lee, Roger B. Grosse, Rajesh Ranganath, Andrew Y. Ng
- Commun. ACM
- 2011

There has been much interest in unsupervised learning of hierarchical generative models such as deep belief networks (DBNs); however, scaling such models to full-sized, high-dimensional images remains a difficult problem. To address this problem, we present the *convolutional deep belief network*, a hierarchical generative model that scales to…

Despite its importance, choosing the structural form of the kernel in nonparametric regression remains a black art. We define a space of kernel structures which are built compositionally by adding and multiplying a small number of base kernels. We present a method for searching over this space of structures which mirrors the scientific discovery process.…
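
The compositional idea in the snippet — building a kernel space by adding and multiplying a few base kernels — can be sketched as follows. The specific base kernels and parameters are illustrative assumptions, not the paper's grammar:

```python
import numpy as np

def se_kernel(x, y, lengthscale=1.0):
    """Squared-exponential (RBF) base kernel."""
    return np.exp(-0.5 * (x - y) ** 2 / lengthscale ** 2)

def linear_kernel(x, y):
    return x * y

def periodic_kernel(x, y, period=1.0):
    return np.exp(-2.0 * np.sin(np.pi * (x - y) / period) ** 2)

# Sums and products of positive-definite kernels are again valid
# kernels, so compositions can be built as higher-order functions.
def add(k1, k2):
    return lambda x, y: k1(x, y) + k2(x, y)

def mul(k1, k2):
    return lambda x, y: k1(x, y) * k2(x, y)

# e.g. SE * Periodic models a locally periodic function;
# SE + Linear models a linear trend with smooth deviations.
locally_periodic = mul(se_kernel, periodic_kernel)
print(locally_periodic(0.3, 0.3))  # prints 1.0: every factor equals 1 at x == y
```

A structure search then scores such compositions (e.g. by marginal likelihood) and expands the best ones with further additions and multiplications.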

- Yuri Burda, Roger B. Grosse, Ruslan Salakhutdinov
- AISTATS
- 2015

Markov random fields (MRFs) are difficult to evaluate as generative models because computing the test log-probabilities requires the intractable partition function. Annealed importance sampling (AIS) is widely used to estimate MRF partition functions, and often yields quite accurate results. However, AIS is prone to overestimate the log-likelihood with…
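
The AIS estimator in the snippet can be illustrated on a toy target whose partition function is known in closed form. The Gaussian target, temperature schedule, and chain count below are assumptions for the sketch; real MRF applications replace the exact intermediate sampling with MCMC transitions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: unnormalized N(0, sigma^2); true log Z = log(sigma*sqrt(2*pi)).
sigma = 2.0
def log_f1(x):  # unnormalized target log-density
    return -0.5 * x**2 / sigma**2
def log_f0(x):  # base distribution: standard normal, Z0 = sqrt(2*pi)
    return -0.5 * x**2

betas = np.linspace(0.0, 1.0, 100)  # annealing schedule
n_chains = 2000

log_w = np.zeros(n_chains)
x = rng.normal(size=n_chains)  # exact samples from p_0
for b_prev, b in zip(betas[:-1], betas[1:]):
    # Incremental importance weight f_b(x) / f_{b_prev}(x) for the
    # geometric path f_b = f_0^(1-b) * f_1^b.
    log_w += (b - b_prev) * (log_f1(x) - log_f0(x))
    # Each intermediate here is Gaussian, so we can transition with an
    # exact sample -- a stand-in for the MCMC transitions used on MRFs.
    prec = (1.0 - b) + b / sigma**2
    x = rng.normal(scale=1.0 / np.sqrt(prec), size=n_chains)

# AIS estimates log(Z1 / Z0); add log Z0 = 0.5 * log(2*pi).
log_z_est = np.log(np.mean(np.exp(log_w))) + 0.5 * np.log(2 * np.pi)
log_z_true = np.log(sigma) + 0.5 * np.log(2 * np.pi)
print(log_z_est, log_z_true)
```

The averaged weights are an unbiased estimate of the partition-function ratio, but the log of the average is biased downward; on an MRF this translates into the log-likelihood overestimation the abstract warns about, since log Z appears with a negative sign in the test log-probability.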

This paper presents the beginnings of an automatic statistician, focusing on regression problems. Our system explores an open-ended space of statistical models to discover a good explanation of a data set, and then produces a detailed report with figures and natural-language text. Our approach treats unknown regression functions non-parametrically using…

- Yuhuai Wu, Yuri Burda, Ruslan Salakhutdinov, Roger B. Grosse
- ArXiv
- 2016

The past several years have seen remarkable progress in generative models which produce convincing samples of images and other modalities. A shared component of many powerful generative models is a decoder network, a parametric deep neural net that defines a generative distribution. Examples include variational autoencoders, generative adversarial networks,…

- Roger B. Grosse, Rajat Raina, Helen Kwong, Andrew Y. Ng
- UAI
- 2007

Sparse coding is an unsupervised learning algorithm that learns a succinct high-level representation of the inputs given only unlabeled data; it represents each input as a sparse linear combination of a set of basis functions. Originally applied to modeling the human visual cortex, sparse coding has also been shown to be useful for self-taught learning, in…
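
The representation described in the snippet — each input as a sparse linear combination of basis functions — can be sketched by solving the standard lasso-style sparse coding objective with ISTA. The problem sizes, the l1 weight, and the step size are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes, not tied to the paper's experiments.
n_features, n_basis = 8, 12
B = rng.normal(size=(n_features, n_basis))
B /= np.linalg.norm(B, axis=0)          # unit-norm basis functions
x = 2.0 * B[:, 0] - 1.5 * B[:, 3]       # input built from two bases

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# ISTA: minimize 0.5*||x - B s||^2 + lam*||s||_1 over the code s.
lam, step = 0.05, 0.1
s = np.zeros(n_basis)
for _ in range(500):
    grad = B.T @ (B @ s - x)
    s = soft_threshold(s - step * grad, step * lam)

# The l1 penalty drives most coefficients exactly to zero.
print(np.count_nonzero(np.abs(s) > 1e-3))
```

In full sparse coding, an outer loop would alternate this inference step with updates to the basis matrix `B` itself; the sketch shows only the sparse inference given a fixed basis.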