On the Computational Intractability of Exact and Approximate Dictionary Learning

@article{Tillmann2015OnTC,
  title={On the Computational Intractability of Exact and Approximate Dictionary Learning},
  author={Andreas M. Tillmann},
  journal={IEEE Signal Processing Letters},
  year={2015},
  volume={22},
  pages={45-49}
}
The efficient sparse coding and reconstruction of signal vectors via linear observations has received a tremendous amount of attention over the last decade. In this context, the automated learning of a suitable basis or overcomplete dictionary from training data sets of certain signal classes for use in sparse representations has turned out to be of particular importance regarding practical signal processing applications. Most popular dictionary learning algorithms involve NP-hard sparse… 
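
For context, the underlying problem is commonly posed as a sparsity-constrained factorization (a standard formulation; the symbols are chosen here for illustration): given training signals as the columns of $Y \in \mathbb{R}^{n \times m}$, find a dictionary $D \in \mathbb{R}^{n \times d}$ (possibly overcomplete, $d \ge n$) and codes $X$ minimizing $\|Y - DX\|_F^2$ subject to $\|x_i\|_0 \le s$ for every column $x_i$ of $X$. The paper's title refers to the NP-hardness of solving this problem both exactly and approximately.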

Robust Identifiability in Sparse Dictionary Learning

Whenever the conditions of one of the robust identifiability theorems are met, any sparsity-constrained algorithm that reconstructs the data well enough recovers the original dictionary and sparse codes up to an error commensurate with the noise.

Learning Sparsely Used Overcomplete Dictionaries via Alternating Minimization

This paper establishes local linear convergence for this variant of alternating minimization for sparse coding and shows that the basin of attraction for the global optimum has radius $O(1/s^2)$, where $s$ is the sparsity level of each sample, provided the dictionary satisfies the restricted isometry property (RIP).
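
As a rough illustration of the alternation analyzed in that paper, here is a minimal Python/NumPy sketch (didactic only: the paper's variant relies on a particular coding step and an initialization near the true dictionary, both of which this sketch omits; hard thresholding stands in for the coding step):

    import numpy as np

    def alternating_minimization(Y, D0, s, iters=20):
        """Alternate an s-sparse coding step with a least-squares dictionary
        update. The coding step keeps the s largest correlations per sample,
        a crude stand-in for a lasso/OMP step."""
        D = D0 / np.linalg.norm(D0, axis=0)
        for _ in range(iters):
            C = D.T @ Y                                  # correlations, one column per sample
            thresh = np.sort(np.abs(C), axis=0)[-s, :]   # s-th largest magnitude per column
            X = np.where(np.abs(C) >= thresh, C, 0.0)    # hard-thresholded sparse codes
            D = Y @ np.linalg.pinv(X)                    # least-squares dictionary update
            D /= np.linalg.norm(D, axis=0) + 1e-12       # renormalize atoms
        return D, X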

On the Uniqueness and Stability of Dictionaries for Sparse Representation of Noisy Signals

This work demonstrates that some or all original dictionary elements are recoverable from noisy data even if the dictionary fails to satisfy the spark condition, its size is overestimated, or only a polynomial number of distinct sparse supports appear in the data.

Dictionary learning with equiprobable matching pursuit

It is demonstrated via simulation experiments that dictionary learning with equiprobable selection results in higher entropy of the sparse representation and lower reconstruction and denoising errors, both for ordinary matching pursuit and for orthogonal matching pursuit with shift-invariant dictionaries.

Approximate Guarantees for Dictionary Learning

The goal of this work is to understand what can be said in the absence of assumptions; it is shown that the algorithmic ideas carry over to a setting in which some of the columns of $X$ are outliers, giving similar guarantees even in that more challenging regime.

Learning fast sparsifying overcomplete dictionaries

A dictionary learning method is proposed that builds an overcomplete dictionary that is computationally efficient to manipulate, i.e., one for which sparse approximation algorithms have sub-quadratic computational complexity.

Learning Fast Sparsifying Transforms

This paper constructs orthogonal and nonorthogonal dictionaries that are factorized as a product of a few basic transformations and shows how the proposed transforms can balance very well data representation performance and computational complexity.
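
To illustrate the factorized structure (a generic sketch; the paper's actual parameterization of the basic transformations may differ), a transform built as a product of a few Givens rotations can be applied factor by factor, avoiding a dense matrix-vector product:

    import numpy as np

    def apply_factorized_transform(rotations, x):
        """Apply a transform factorized as a product of Givens rotations.
        Each factor (i, j, theta) touches only two coordinates, so g factors
        cost O(g) instead of the O(n^2) of a dense matrix-vector product."""
        y = x.astype(float).copy()
        for i, j, theta in rotations:
            c, s = np.cos(theta), np.sin(theta)
            y[i], y[j] = c * y[i] - s * y[j], s * y[i] + c * y[j]
        return y

    # Hypothetical example: a 4-dimensional transform built from two rotations.
    x = np.array([1.0, 0.0, 2.0, 0.0])
    y = apply_factorized_transform([(0, 1, 0.3), (2, 3, 1.1)], x)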

Testing Sparsity over Known and Unknown Bases

A testing algorithm is presented that projects the input vectors to $O(\log p / \epsilon^2)$ dimensions, assuming the unknown $A$ satisfies the $k$-restricted isometry property, and a new robust characterization of Gaussian width in terms of sparsity is given.
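
The projection step can be illustrated with a standard Gaussian random projection (a generic Johnson-Lindenstrauss-style sketch; the paper's tester involves more than this dimension-reduction step):

    import numpy as np

    def jl_project(V, eps, rng=np.random.default_rng(0)):
        """Project p-dimensional input vectors (columns of V) down to
        O(log p / eps^2) dimensions with a Gaussian random matrix."""
        p = V.shape[0]
        k = max(1, int(np.ceil(np.log(p) / eps**2)))
        G = rng.standard_normal((k, p)) / np.sqrt(k)  # JL-style Gaussian sketch
        return G @ V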

Learning Sparsely Used Overcomplete Dictionaries

We consider the problem of learning sparsely used overcomplete dictionaries, where each observation is a sparse combination of elements from an unknown overcomplete dictionary. We establish exact
...

References

SHOWING 1-10 OF 52 REFERENCES

K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation

A novel algorithm for adapting dictionaries in order to achieve sparse signal representations, the K-SVD algorithm, an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data.
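
A minimal Python/NumPy sketch of this alternation follows (a simplified didactic version, not the authors' implementation; the small greedy OMP coder is included to keep it self-contained):

    import numpy as np

    def omp(D, y, s):
        """Orthogonal matching pursuit: greedy s-sparse code of y over D."""
        r, idx = y.copy(), []
        for _ in range(s):
            idx.append(int(np.argmax(np.abs(D.T @ r))))       # most correlated atom
            coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
            r = y - D[:, idx] @ coef                          # orthogonalized residual
        x = np.zeros(D.shape[1])
        x[idx] = coef
        return x

    def ksvd(Y, d, s, iters=10, seed=0):
        """Simplified K-SVD: alternate OMP coding with per-atom SVD updates."""
        rng = np.random.default_rng(seed)
        D = rng.standard_normal((Y.shape[0], d))
        D /= np.linalg.norm(D, axis=0)                        # unit-norm atoms
        for _ in range(iters):
            X = np.column_stack([omp(D, Y[:, i], s) for i in range(Y.shape[1])])
            for k in range(d):
                users = np.nonzero(X[k, :])[0]                # samples that use atom k
                if users.size == 0:
                    continue
                X[k, users] = 0.0
                E = Y[:, users] - D @ X[:, users]             # error without atom k
                U, S, Vt = np.linalg.svd(E, full_matrices=False)
                D[:, k], X[k, users] = U[:, 0], S[0] * Vt[0]  # best rank-1 fit
        return D, X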

Learning Sparsely Used Overcomplete Dictionaries via Alternating Minimization

This paper establishes local linear convergence for this variant of alternating minimization for sparse coding and shows that the basin of attraction for the global optimum has radius $O(1/s^2)$, where $s$ is the sparsity level of each sample, provided the dictionary satisfies the restricted isometry property (RIP).

Dictionary Learning Algorithms for Sparse Representation

Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates, based on Bayesian models with concave/Schur-concave negative log priors, and show improved performance over other independent component analysis methods.

Exact Recovery of Sparsely-Used Dictionaries

A polynomial-time algorithm, Exact Recovery of Sparsely-Used Dictionaries (ERSpUD), is designed and proved to recover the dictionary and coefficient matrix when the coefficient matrix is sufficiently sparse.

Learning unions of orthonormal bases with thresholded singular value decomposition

It is shown that an iterative learning algorithm can be designed to produce a dictionary with the required structure, and it is assessed how well the algorithm recovers dictionaries that may or may not have that structure.

Regularized dictionary learning for sparse approximation

A general formulation is proposed that allows the dictionary to be learned from the data together with some a priori information about the dictionary, and practical algorithms are presented to minimize the resulting cost under different constraints on the dictionary.

Online Learning for Matrix Factorization and Sparse Coding

A new online optimization algorithm is proposed, based on stochastic approximations, which scales up gracefully to large data sets with millions of training samples, and extends naturally to various matrix factorization formulations, making it suitable for a wide range of learning problems.
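
A compressed sketch of the online scheme (the atom update rule follows Mairal et al.'s block coordinate descent on accumulated sufficient statistics; the crude hard-thresholding coder below is a stand-in for their lasso coding step):

    import numpy as np

    def online_dictionary_learning(signal_stream, D, s):
        """Online sketch: accumulate A = sum x x^T and B = sum y x^T over the
        stream, then refresh each atom by one block coordinate descent pass."""
        n, d = D.shape
        A, B = np.zeros((d, d)), np.zeros((n, d))
        for y in signal_stream:
            c = D.T @ y
            x = np.where(np.abs(c) >= np.sort(np.abs(c))[-s], c, 0.0)  # crude s-sparse code
            A += np.outer(x, x)
            B += np.outer(y, x)
            for k in range(d):
                if A[k, k] > 1e-12:
                    u = D[:, k] + (B[:, k] - D @ A[:, k]) / A[k, k]
                    D[:, k] = u / max(np.linalg.norm(u), 1.0)          # project to unit ball
        return D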

Double Sparsity: Learning Sparse Dictionaries for Sparse Signal Approximation

The advantages of sparse dictionaries are discussed, an efficient algorithm for training them is presented, and the benefits of the proposed structure are demonstrated for 3-D image denoising.
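
In this double-sparsity model the dictionary is structured as $D = BA$, with $B$ a fixed base dictionary and $A$ sparse, so applying $D$ costs one sparse multiply plus one fast transform. A minimal illustration (not the training algorithm; the DCT base and the sizes are hypothetical choices):

    import numpy as np
    from scipy.fft import dct
    from scipy.sparse import random as sparse_random

    # Base dictionary B applied implicitly as an orthonormal DCT; A sparse.
    n, d = 64, 128
    A = sparse_random(n, d, density=0.05, format="csc", random_state=0)
    x = np.random.default_rng(0).standard_normal(d)
    y = dct(A @ x, norm="ortho")  # y = B (A x) = D x, without forming D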
...