K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation

@article{Aharon2006rmKA,
  title={{K-SVD}: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation},
  author={Michal Aharon and Michael Elad and Alfred Marcel Bruckstein},
  journal={IEEE Transactions on Signal Processing},
  year={2006},
  volume={54},
  pages={4311--4322}
}
In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field has concentrated mainly on the study of pursuit algorithms that decompose signals with… 
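The model sketched in the abstract — a signal expressed as a sparse linear combination of atoms from an overcomplete dictionary — can be illustrated with a small synthetic example (all dimensions and values below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 8, 20                      # signal dimension n, dictionary size K > n (overcomplete)
D = rng.standard_normal((n, K))
D /= np.linalg.norm(D, axis=0)    # prototype signal-atoms as unit-norm columns

x = np.zeros(K)
x[[3, 11, 17]] = [1.5, -0.7, 2.0]  # sparse coefficient vector: 3 of 20 atoms active
y = D @ x                          # signal = sparse linear combination of atoms
```

Because K > n, infinitely many coefficient vectors reproduce y exactly; sparse representation seeks the one with the fewest active atoms, which is what the pursuit algorithms mentioned above approximate.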


K-SVD: An Algorithm for Designing of Overcomplete Dictionaries for Sparse Representation
TLDR
K-SVD, a novel algorithm for adapting dictionaries to achieve sparse signal representations, is presented: an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data.
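The alternating scheme described in this summary can be sketched in a few lines of numpy. This is a minimal sketch under simplifying assumptions (orthogonal matching pursuit for the sparse-coding stage, one pass over the atoms), not the authors' reference implementation; all function names are mine:

```python
import numpy as np

def omp(D, y, k):
    """Sparse-code y over dictionary D using at most k atoms (orthogonal matching pursuit)."""
    residual, support = y.copy(), []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # atom most correlated with residual
        if j not in support:
            support.append(j)
        # least-squares fit on the chosen support, then update the residual
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

def ksvd_step(D, Y, k):
    """One K-SVD iteration: sparse-code all examples, then update each atom by a rank-1 SVD."""
    X = np.column_stack([omp(D, Y[:, i], k) for i in range(Y.shape[1])])
    for j in range(D.shape[1]):
        users = np.flatnonzero(X[j])         # examples that currently use atom j
        if users.size == 0:
            continue
        X[j, users] = 0.0
        E = Y[:, users] - D @ X[:, users]    # representation error with atom j removed
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, j] = U[:, 0]                    # best rank-1 fit gives the new atom...
        X[j, users] = s[0] * Vt[0]           # ...and its coefficients on the same support
    return D, X
```

Because each rank-1 update is the optimal replacement for the removed atom's contribution, the representation error cannot increase within an iteration, and the support of each coefficient column is preserved.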
K-SVD and its non-negative variant for dictionary design
TLDR
A simple and yet efficient variation of the K-SVD that handles such extraction of non-negative dictionaries is presented, and its generalization to nonnegative matrix factorization problem that suits signals generated under an additive model with positive atoms is described.
Dictionary Optimization for Block-Sparse Representations
TLDR
This paper proposes an algorithm for learning a block-sparsifying dictionary of a given set of signals that does not require prior knowledge on the association of signals into groups, and develops a method that automatically detects the underlying block structure given the maximal size of those groups.
Orthogonal Procrustes Analysis for Dictionary Learning in Sparse Linear Representation
TLDR
R-SVD is presented, a new method that, while maintaining the alternating scheme, adopts the Orthogonal Procrustes analysis to update the dictionary atoms suitably arranged into groups and its robustness and wide applicability are confirmed.
Submodular Dictionary Selection for Sparse Representation
TLDR
An efficient learning framework to construct signal dictionaries for sparse representation by selecting the dictionary columns from multiple candidate bases is developed and it is shown that if the available dictionary column vectors are incoherent, the objective function satisfies approximate submodularity.
Dictionary design for sparse signal representations using K-SVD with sparse Bayesian learning
  • Ribhu, D. Ghosh
  • Computer Science
    2012 IEEE 11th International Conference on Signal Processing
  • 2012
TLDR
This paper proposes to counter this problem by using Sparse Bayesian Learning in the initial stage of the K-SVD algorithm, offering gradual convergence of the learning algorithm from a non-sparse representation of the signals to a sparse representation as the iterations progress, giving the training vectors a good enough chance to “spread out” over the dictionary.
Bayesian K-SVD Using Fast Variational Inference
TLDR
A fully-automated Bayesian method is proposed that considers the uncertainty of the estimates and produces a sparse representation of the data without prior information on the number of non-zeros in each representation vector and develops an efficient variational inference framework that reduces computational complexity.
Greedy Dictionary Selection for Sparse Representation
TLDR
An efficient learning framework to construct signal dictionaries for sparse representation by selecting the dictionary columns from multiple candidate bases is developed and it is shown that if the available dictionary column vectors are incoherent, the objective function satisfies approximate submodularity.
K-SVD dictionary-learning for the analysis sparse model
TLDR
The goal is to learn the analysis dictionary from a set of signal examples, and the approach taken is parallel and similar to the one adopted by the K-SVD algorithm that serves the corresponding problem in the synthesis model.
Overcomplete Dictionary Design by Empirical Risk Minimization
TLDR
This paper presents a new approach for dictionary learning based on minimizing the empirical risk, and offers incorporation of non-injective and nonlinear operators, where the data and the recovered parameters may reside in different spaces.

References

Showing 1-10 of 55 references
K-SVD: An Algorithm for Designing of Overcomplete Dictionaries for Sparse Representation
TLDR
K-SVD, a novel algorithm for adapting dictionaries to achieve sparse signal representations, is presented: an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data.
Dictionary Learning Algorithms for Sparse Representation
TLDR
Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates based on the use of Bayesian models with concave/Schur-concave negative log priors, showing improved performance over other independent component analysis methods.
FOCUSS-based dictionary learning algorithms
TLDR
To learn an environmentally adapted dictionary capable of concise expression of signals generated by the environment, this work develops algorithms that iterate between a representative set of sparse representations found by variants of FOCUSS, an affine scaling transformation (AST)-like sparse signal representation algorithm recently developed at UCSD, and an update of the dictionary using these sparse representations.
Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization
  • D. Donoho, Michael Elad
  • Computer Science
    Proceedings of the National Academy of Sciences of the United States of America
  • 2003
TLDR
This article obtains parallel results in a more general setting, where the dictionary D can arise from two or several bases, frames, or even less structured systems, and sketches three applications: separating linear features from planar ones in 3D data, noncooperative multiuser encoding, and identification of over-complete independent component models.
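The guarantee at the heart of this reference is usually stated in terms of the mutual coherence of the dictionary (notation below is the standard one, not taken from the scraped summary): with $\mu(D)$ the largest absolute inner product between distinct unit-norm atoms,

```latex
\mu(D) \;=\; \max_{i \neq j} \left| \langle d_i, d_j \rangle \right|,
\qquad
\|x\|_0 \;<\; \tfrac{1}{2}\!\left(1 + \mu(D)^{-1}\right)
\;\Longrightarrow\;
\begin{cases}
x \text{ is the unique sparsest representation of } y = Dx,\\[2pt]
x \text{ is recovered exactly by } \ell_1 \text{ minimization (basis pursuit).}
\end{cases}
```

Incoherent dictionaries (small $\mu$) therefore admit provably recoverable sparse representations with larger supports.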
Stable recovery of sparse overcomplete representations in the presence of noise
TLDR
This paper establishes the possibility of stable recovery under a combination of sufficient sparsity and favorable structure of the overcomplete system and shows that similar stability is also available using the basis and the matching pursuit algorithms.
Learning unions of orthonormal bases with thresholded singular value decomposition
TLDR
It is shown that it is possible to design an iterative learning algorithm that produces a dictionary with the required structure, and how well the learning algorithm recovers dictionaries that may or may not have the necessary structure is assessed.
Greed is good: algorithmic results for sparse approximation
  • J. Tropp
  • Computer Science
    IEEE Transactions on Information Theory
  • 2004
TLDR
This article presents new results on using a greedy algorithm, orthogonal matching pursuit (OMP), to solve the sparse approximation problem over redundant dictionaries and develops a sufficient condition under which OMP can identify atoms from an optimal approximation of a nonsparse signal.
Learning Overcomplete Representations
TLDR
It is shown that overcomplete bases can yield a better approximation of the underlying statistical distribution of the data and can thus lead to greater coding efficiency and provide a method for Bayesian reconstruction of signals in the presence of noise and for blind source separation when there are more sources than mixtures.
An improved FOCUSS-based learning algorithm for solving sparse linear inverse problems
  • J. Murray, K. Kreutz-Delgado
  • Computer Science
    Conference Record of Thirty-Fifth Asilomar Conference on Signals, Systems and Computers (Cat.No.01CH37256)
  • 2001
TLDR
An improved algorithm for solving blind sparse linear inverse problems where both the dictionary and the sources are unknown is developed, and it is shown that a learned overcomplete representation can encode the data more efficiently than a complete basis at the same level of accuracy.
On sparse representations in arbitrary redundant bases
  • J. Fuchs
  • Mathematics
    IEEE Transactions on Information Theory
  • 2004
TLDR
The purpose of this contribution is to generalize some recent results on sparse representations of signals in redundant bases and to give a sufficient condition for the unique sparsest solution to be the unique solution to both a linear program and a parametrized quadratic program.