# Sparse coding and NMF

@article{Eggert2004SparseCA, title={Sparse coding and NMF}, author={Julian Eggert and Eckhart Korner}, journal={2004 IEEE International Joint Conference on Neural Networks (IEEE Cat. No.04CH37541)}, year={2004}, volume={4}, pages={2529-2533 vol.4} }

Non-negative matrix factorization (NMF) is a very efficient parameter-free method for decomposing multivariate data into strictly positive activations and basis vectors. However, the method is not suited for overcomplete representations, where usually sparse coding paradigms apply. We show how to merge the concepts of non-negative factorization with sparsity conditions. The result is a multiplicative algorithm that is comparable in efficiency to standard NMF, but that can be used to gain…
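The multiplicative sparse-NMF update described in the abstract can be sketched as follows. This is a Hoyer-style ℓ1-penalized variant with basis normalization, offered to illustrate the general idea rather than as the paper's exact algorithm; the function and parameter names are ours:

```python
import numpy as np

def sparse_nmf(V, r, sparsity=0.1, n_iter=200, seed=0):
    """Multiplicative-update NMF with an l1 sparsity penalty on the
    activations H (a hedged sketch of the general approach; the paper's
    algorithm additionally handles normalization inside the update)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 1e-3
    H = rng.random((r, m)) + 1e-3
    eps = 1e-9
    for _ in range(n_iter):
        # Activation update: the l1 penalty enters the denominator,
        # shrinking H multiplicatively while keeping it non-negative.
        H *= (W.T @ V) / (W.T @ W @ H + sparsity + eps)
        # Standard least-squares multiplicative update for the basis.
        W *= (V @ H.T) / (W @ H @ H.T + eps)
        # Normalize columns of W so the penalty cannot be evaded by
        # scaling W up and H down; the product W @ H is unchanged.
        norms = np.linalg.norm(W, axis=0) + eps
        W /= norms
        H *= norms[:, None]
    return W, H
```

Because every factor in the updates is non-negative, non-negativity of `W` and `H` is preserved automatically, which is what makes the scheme comparable in cost to standard NMF.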

## 307 Citations

Sparse shift-invariant NMF

- Computer Science2008 IEEE Southwest Symposium on Image Analysis and Interpretation
- 2008

An algorithm called sparse shift-invariant NMF (ssiNMF) is proposed for learning possibly overcomplete shift-invariant features by incorporating a circulant property on the features and sparsity constraints on the activations.

Sparse and Transformation-Invariant Hierarchical NMF

- Computer ScienceICANN
- 2007

This work extends the standard HNMF by sparsity conditions and transformation-invariance in a natural, straightforward way, leading to a less redundant and sparse encoding of the input data.

Transformation-invariant representation and NMF

- Computer Science, Mathematics2004 IEEE International Joint Conference on Neural Networks (IEEE Cat. No.04CH37541)
- 2004

This work shows that non-negative matrix factorization provides a transformation-invariant and compact encoding that is optimal for the given transformation constraints.

Approximate L0 constrained non-negative matrix and tensor factorization

- Computer Science2008 IEEE International Symposium on Circuits and Systems
- 2008

It is demonstrated that a full regularization path for the L1-norm regularized least squares NMF for fixed W can be calculated at the cost of an ordinary least squares solution, based on a modification of the least angle regression and selection (LARS) algorithm that forms a non-negativity constrained LARS (NLARS).

Convolutive Non-Negative Matrix Factorisation with a Sparseness Constraint

- Computer Science2006 16th IEEE Signal Processing Society Workshop on Machine Learning for Signal Processing
- 2006

Sparse nonnegative matrix factorization with ℓ0-constraints

- Computer ScienceNeurocomputing
- 2012

Sparse nonnegative matrix factorization using ℓ0-constraints

- Computer Science2010 IEEE International Workshop on Machine Learning for Signal Processing
- 2010

It is shown that classic NMF is a suitable tool for ℓ0-sparse NMF algorithms, due to a property the authors call sparseness maintenance, and two NMF algorithms with ℓ0-sparseness constraints on the bases and the coefficient matrices are proposed.

Enforced Sparse Non-negative Matrix Factorization

- Computer Science2016 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)
- 2016

This article investigates a simple but powerful modification of the alternating least squares method of determining the NMF of a sparse matrix that enforces the generation of sparse intermediate and output matrices, and demonstrates empirically that this way of enforcing sparsity in the NMF either preserves or improves both the accuracy of the resulting topic model and the convergence rate of the underlying algorithm.
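The enforced-sparsity idea summarized above can be illustrated with a minimal projected alternating least squares sketch; the thresholding scheme and all names here are our assumptions, not the article's exact method:

```python
import numpy as np

def enforced_sparse_nmf_als(V, r, threshold=1e-2, n_iter=100, seed=0):
    """Alternating least squares NMF where, after each unconstrained
    solve, negative entries and values below a threshold are zeroed,
    enforcing sparse intermediate and output factors (a hedged sketch;
    the thresholding rule is an assumption for illustration)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r))
    H = rng.random((r, m))
    for _ in range(n_iter):
        # Unconstrained least-squares solve for H, then project.
        H = np.linalg.lstsq(W, V, rcond=None)[0]
        H[H < threshold] = 0.0  # clip negatives, zero tiny entries
        # Same for W, via the transposed problem V.T ~= H.T @ W.T.
        W = np.linalg.lstsq(H.T, V.T, rcond=None)[0].T
        W[W < threshold] = 0.0
    return W, H
```

The zeroing step is what distinguishes this from plain projected ALS: entries below the threshold are discarded during the iteration itself, so every intermediate factor stays sparse rather than only the final output.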

Algorithms for Sparse Non-negative Tucker decompositions

- Computer Science
- 2008

Algorithms for sparse non-negative Tucker decompositions (SN-TUCKER) are proposed, and it is demonstrated that they are superior to existing algorithms for Tucker decomposition when the data and interactions can indeed be considered non-negative.

## References

SHOWING 1-9 OF 9 REFERENCES

Non-negative sparse coding

- Computer ScienceProceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing
- 2002

A simple yet efficient multiplicative algorithm for finding the optimal values of the hidden components in non-negative sparse coding is given, and it is shown how the basis vectors can be learned from the observed data.

Algorithms for Non-negative Matrix Factorization

- Computer ScienceNIPS
- 2000

Two different multiplicative algorithms for non-negative matrix factorization are analyzed and one algorithm can be shown to minimize the conventional least squares error while the other minimizes the generalized Kullback-Leibler divergence.
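Of the two updates analyzed by Lee and Seung, the least-squares variant is sketched in many NMF introductions; here is a minimal sketch of the second, which minimizes the generalized Kullback-Leibler divergence (function and parameter names are ours):

```python
import numpy as np

def nmf_kl(V, r, n_iter=200, seed=0):
    """Lee & Seung-style multiplicative updates for the generalized
    Kullback-Leibler divergence D(V || WH); each update multiplies by a
    non-negative factor, so W and H stay non-negative throughout."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 1e-3
    H = rng.random((r, m)) + 1e-3
    eps = 1e-9
    for _ in range(n_iter):
        WH = W @ H + eps
        # Activation update: reweight H by how much each basis vector
        # explains the ratio V / WH, normalized by the column sums of W.
        H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)
        WH = W @ H + eps
        # Symmetric update for the basis vectors.
        W *= ((V / WH) @ H.T) / (H.sum(axis=1)[None, :] + eps)
    return W, H
```

Swapping the denominators for `W.T @ W @ H` and `W @ H @ H.T` respectively recovers the least-squares variant mentioned in the summary.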

Learning the parts of objects by non-negative matrix factorization

- Computer ScienceNature
- 1999

An algorithm for non-negative matrix factorization is demonstrated that is able to learn parts of faces and semantic features of text and is in contrast to other methods that learn holistic, not parts-based, representations.

Sparse Coding with Invariance Constraints

- Computer ScienceICANN
- 2003

A new approach optimizes the learning of sparse features under the constraint of explicit transformation symmetries imposed on the set of feature vectors, obtaining a less redundant basis feature set compared to sparse coding approaches without invariances.

Emergence of simple-cell receptive field properties by learning a sparse code for natural images

- Computer ScienceNature
- 1996

It is shown that a learning algorithm that attempts to find sparse linear codes for natural scenes will develop a complete family of localized, oriented, bandpass receptive fields, similar to those found in the primary visual cortex.

Sparse coding with invariance constraints

- In ICANN/ICONIP 2003 conference proceedings,
- 2003

Learning the parts of objects with non-negative matrix factorization

- Nature
- 1999
