Efficient sparse coding algorithms

Abstract

Sparse coding provides a class of algorithms for finding succinct representations of stimuli; given only unlabeled input data, it discovers basis functions that capture higher-level features in the data. However, finding sparse codes remains a very difficult computational problem. In this paper, we present efficient sparse coding algorithms that are based on iteratively solving two convex optimization problems: an L1-regularized least squares problem and an L2-constrained least squares problem. We propose novel algorithms to solve both of these optimization problems. Our algorithms result in a significant speedup for sparse coding, allowing us to learn larger sparse codes than possible with previously described algorithms. We apply these algorithms to natural images and demonstrate that the inferred sparse codes exhibit end-stopping and non-classical receptive field surround suppression and, therefore, may provide a partial explanation for these two phenomena in V1 neurons.
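To make the alternating structure of the optimization concrete, the sketch below illustrates the general scheme the abstract describes: with the basis fixed, solve an L1-regularized least squares problem for the coefficients; with the coefficients fixed, solve an L2-constrained least squares problem for the basis. This is not the paper's own solvers (it does not reproduce their specialized L1 or Lagrange-dual algorithms); it substitutes a generic ISTA step for the coefficient update and a projected gradient step for the basis update, and all function names, sizes, and hyperparameters here are illustrative assumptions.

```python
# Hedged sketch (not the authors' exact algorithms): alternating minimization for
#   min_{B,S} (1/2) ||X - B S||_F^2 + gamma * ||S||_1   s.t. ||b_j||_2 <= c,
# where X is (d, n) data, B is (d, k) basis, S is (k, n) sparse coefficients.
# Coefficient step: ISTA (proximal gradient) in place of the paper's L1 solver.
# Basis step: projected gradient in place of the paper's L2-constrained solver.

import numpy as np


def soft_threshold(Z, t):
    """Elementwise soft-thresholding: proximal operator of the L1 norm."""
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)


def sparse_coding(X, n_basis=64, gamma=0.1, c=1.0, n_outer=50, n_inner=50, seed=0):
    """Alternating optimization for sparse coding (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    B = rng.standard_normal((d, n_basis))
    B /= np.linalg.norm(B, axis=0, keepdims=True)        # start with unit-norm columns
    S = np.zeros((n_basis, n))

    for _ in range(n_outer):
        # --- Coefficient step: L1-regularized least squares with B fixed ---
        L = np.linalg.norm(B, 2) ** 2 + 1e-8              # Lipschitz constant of the gradient
        for _ in range(n_inner):
            grad = B.T @ (B @ S - X)
            S = soft_threshold(S - grad / L, gamma / L)

        # --- Basis step: L2-constrained least squares with S fixed ---
        L_b = np.linalg.norm(S, 2) ** 2 + 1e-8
        for _ in range(n_inner):
            grad_B = (B @ S - X) @ S.T
            B = B - grad_B / L_b
            norms = np.linalg.norm(B, axis=0, keepdims=True)
            B = B / np.maximum(norms / c, 1.0)            # project onto ||b_j||_2 <= c

    return B, S


if __name__ == "__main__":
    # Toy usage on random data (hypothetical sizes, not the paper's image patches).
    X = np.random.default_rng(1).standard_normal((16, 200))
    B, S = sparse_coding(X, n_basis=32, gamma=0.2)
    print("reconstruction error:", np.linalg.norm(X - B @ S))
    print("fraction of nonzero coefficients:", np.mean(np.abs(S) > 1e-6))
```

The key design point the abstract emphasizes is that each subproblem is convex on its own, so alternating between them monotonically decreases the objective even though the joint problem is non-convex; the paper's contribution is solving each subproblem much faster than the generic updates used in this sketch.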
