Learning Sparse Codes with a Mixture-of-Gaussians Prior


We describe a method for learning an overcomplete set of basis functions for the purpose of modeling sparse structure in images. The sparsity of the basis function coefficients is modeled with a mixture-of-Gaussians distribution. One Gaussian captures non-active coefficients with a small-variance distribution centered at zero, while one or more other Gaussians capture active coefficients with a large-variance distribution. We show that when the prior takes this form, there exist efficient methods for learning the basis functions as well as the parameters of the prior. The performance of the algorithm is demonstrated on a number of test cases and also on natural images. The basis functions learned on natural images are similar to those obtained with other methods, but the sparse form of the coefficient distribution is much better described. Also, since the parameters of the prior are adapted to the data, no assumption about sparse structure in the images need be made a priori; rather, it is learned from the data.
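The two-component prior described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the mixing weight and variances (`pi_active`, `var_inactive`, `var_active`) are hypothetical values chosen for demonstration, and in the paper they would be learned from data along with the basis functions.

```python
import math

def gauss(a, var):
    # Zero-mean Gaussian density with variance `var`.
    return math.exp(-a * a / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def mog_prior(a, pi_active=0.2, var_inactive=0.01, var_active=1.0):
    # Mixture-of-Gaussians prior on a coefficient `a`: a narrow Gaussian
    # at zero models inactive coefficients, and a broad Gaussian models
    # active ones. All parameter values here are illustrative.
    return ((1.0 - pi_active) * gauss(a, var_inactive)
            + pi_active * gauss(a, var_active))

def p_active(a, pi_active=0.2, var_inactive=0.01, var_active=1.0):
    # Posterior probability (via Bayes' rule) that `a` was drawn from
    # the broad "active" component rather than the narrow one.
    num = pi_active * gauss(a, var_active)
    return num / mog_prior(a, pi_active, var_inactive, var_active)
```

Under these illustrative parameters, a coefficient far from zero is assigned to the active component with near certainty, while a coefficient at zero is overwhelmingly attributed to the narrow inactive component; this soft assignment is what an EM-style fit of the prior's parameters would exploit.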


Cite this paper

@inproceedings{Olshausen1999LearningSC,
  title     = {Learning Sparse Codes with a Mixture-of-Gaussians Prior},
  author    = {Bruno A. Olshausen and K. Jarrod Millman},
  booktitle = {NIPS},
  year      = {1999}
}