Online dictionary learning for sparse coding

Abstract

Sparse coding---that is, modeling data vectors as sparse linear combinations of basis elements---is widely used in machine learning, neuroscience, signal processing, and statistics. This paper focuses on learning the basis set, also called the dictionary, to adapt it to specific data, an approach that has recently proven to be very effective for signal reconstruction and classification in the audio and image processing domains. This paper proposes a new online optimization algorithm for dictionary learning, based on stochastic approximations, which scales up gracefully to large datasets with millions of training samples. A proof of convergence is presented, along with experiments with natural images demonstrating that it leads to faster performance and better dictionaries than classical batch algorithms for both small and large datasets.
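The online scheme the abstract describes alternates, for each incoming sample, a sparse-coding step against the current dictionary with an incremental dictionary update driven by accumulated sufficient statistics. The sketch below illustrates that loop in numpy; it uses ISTA as a simple stand-in for the paper's LARS-based sparse-coding step, and the data dimensions, sample count, and sparsity weight `lam` are illustrative assumptions, not the paper's settings.

```python
# Illustrative sketch of an online dictionary-learning loop in the spirit
# of the paper: sparse-code each sample, accumulate statistics (A, B),
# then refresh the dictionary one atom at a time. ISTA replaces the
# paper's LARS solver for simplicity; all sizes are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)

def sparse_code(D, x, lam, n_iter=50):
    """ISTA: approximately solve min_a 0.5*||x - D a||^2 + lam*||a||_1."""
    L = np.linalg.norm(D, 2) ** 2 + 1e-8          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = a + D.T @ (x - D @ a) / L             # gradient step
        a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
    return a

m, k, lam = 20, 10, 0.1                           # signal dim, atoms, sparsity weight
D = rng.standard_normal((m, k))
D /= np.linalg.norm(D, axis=0)                    # unit-norm atoms
A = np.zeros((k, k))                              # accumulated alpha alpha^T
B = np.zeros((m, k))                              # accumulated x alpha^T

for t in range(200):                              # simulated stream of samples
    x = rng.standard_normal(m)
    a = sparse_code(D, x, lam)
    A += np.outer(a, a)
    B += np.outer(x, a)
    # Block-coordinate dictionary update: one pass over the atoms,
    # each followed by a projection onto the unit ball.
    for j in range(k):
        if A[j, j] > 1e-10:
            u = (B[:, j] - D @ A[:, j]) / A[j, j] + D[:, j]
            D[:, j] = u / max(1.0, np.linalg.norm(u))
```

Because the per-sample cost of the update does not grow with the number of samples already seen, this style of loop is what lets the method scale to the millions-of-samples regime the abstract mentions.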

DOI: 10.1145/1553374.1553463

Citations per Year

1,211 Citations

Semantic Scholar estimates that this publication has 1,211 citations based on the available data.


Cite this paper

@inproceedings{Mairal2009OnlineDL,
  title={Online dictionary learning for sparse coding},
  author={Julien Mairal and Francis R. Bach and Jean Ponce and Guillermo Sapiro},
  booktitle={ICML},
  year={2009}
}