Sparse coding and NMF

@article{Eggert2004SparseCA,
  title={Sparse coding and NMF},
  author={Julian Eggert and Edgar Körner},
  journal={2004 IEEE International Joint Conference on Neural Networks (IEEE Cat. No.04CH37541)},
  year={2004},
  volume={4},
  pages={2529-2533 vol.4}
}
  • J. Eggert, E. Körner
  • Published 25 July 2004
  • Computer Science
  • 2004 IEEE International Joint Conference on Neural Networks (IEEE Cat. No.04CH37541)
Non-negative matrix factorization (NMF) is a very efficient parameter-free method for decomposing multivariate data into strictly positive activations and basis vectors. However, the method is not suited for overcomplete representations, where usually sparse coding paradigms apply. We show how to merge the concepts of non-negative factorization with sparsity conditions. The result is a multiplicative algorithm that is comparable in efficiency to standard NMF, but that can be used to gain… 
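
To make the abstract's idea concrete, below is a minimal NumPy sketch of sparse NMF with multiplicative updates: V ≈ WH with W, H ≥ 0, an ℓ1 penalty λ on the activations H, and renormalized basis vectors. This follows the generic Hoyer-style rule; the exact Eggert-Körner update folds the normalization of W into the update itself, so treat this as an illustration of the family, not the paper's algorithm. The function name and defaults are mine.

```python
import numpy as np

def sparse_nmf(V, r, lam=0.1, n_iter=200, eps=1e-9):
    """Sketch: minimize ||V - W H||_F^2 + lam * sum(H)
    subject to W, H >= 0 and unit-norm columns of W."""
    m, n = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(n_iter):
        # lam in the denominator shrinks H multiplicatively,
        # which is what produces sparse activations.
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)
        # Standard Euclidean basis update, then renormalize the
        # columns so sparsity cannot be offset by growing W.
        W *= (V @ H.T) / (W @ (H @ H.T) + eps)
        W /= np.linalg.norm(W, axis=0, keepdims=True) + eps
    return W, H
```

Note that the multiplicative form preserves non-negativity automatically: the factors are only ever multiplied by non-negative ratios, so no explicit projection step is needed.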

Citations

Sparse shift-invariant NMF
TLDR
An algorithm called sparse shift-invariant NMF (ssiNMF) is proposed for learning possibly overcomplete shift-invariant features by incorporating a circulant property on the features and sparsity constraints on the activations.
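
As an illustration of the circulant property this summary mentions (a sketch with my own naming, not code from the paper): every basis vector is tied to the circular shifts of one learnable feature, so a single parameter vector represents the feature at every position.

```python
import numpy as np

def circulant_dictionary(base):
    """Stack all circular shifts of `base` as columns. A feature
    and all of its shifted copies share the same parameters, which
    is the shift-invariance that ssiNMF exploits."""
    d = len(base)
    return np.stack([np.roll(base, s) for s in range(d)], axis=1)

# Example: one 8-dimensional feature expands to an 8x8 circulant basis.
W = circulant_dictionary(np.array([1.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]))
```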
Sparse and Transformation-Invariant Hierarchical NMF
TLDR
This work extends the standard HNMF with sparsity conditions and transformation-invariance in a natural, straightforward way, leading to a less redundant and sparser encoding of the input data.
Transformation-invariant representation and NMF
  • J. Eggert, H. Wersing, E. Körner
  • Computer Science, Mathematics
    2004 IEEE International Joint Conference on Neural Networks (IEEE Cat. No.04CH37541)
  • 2004
TLDR
This work shows that non-negative matrix factorization provides a transformation-invariant and compact encoding that is optimal for the given transformation constraints.
Approximate L0 constrained non-negative matrix and tensor factorization
TLDR
It is demonstrated that a full regularization path for the L1-norm regularized least squares NMF with fixed W can be calculated at the cost of an ordinary least squares solution, based on a modification of the least angle regression and selection algorithm that forms a non-negativity-constrained LARS (NLARS).
Convolutive Non-Negative Matrix Factorisation with a Sparseness Constraint
Sparse nonnegative matrix factorization with ℓ0-constraints
TLDR
It is shown that classic NMF is a suitable tool for ℓ0-sparse NMF algorithms, due to a property the authors call sparseness maintenance, and two NMF algorithms with ℓ0-sparseness constraints on the bases and the coefficient matrices are proposed.
Enforced Sparse Non-negative Matrix Factorization
TLDR
This article investigates a simple but powerful modification of the alternating least squares method for determining the NMF of a sparse matrix, one that enforces the generation of sparse intermediate and output matrices, and demonstrates empirically that enforcing sparsity in the NMF in this way either preserves or improves both the accuracy of the resulting topic model and the convergence rate of the underlying algorithm.
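
A minimal sketch of the alternating-least-squares scheme this summary describes, assuming the common variant in which each unconstrained least-squares solve is followed by clipping negative entries to zero; that projection both restores non-negativity and zeroes many entries outright, which is the sparsity-enforcing effect in question. Names and defaults are illustrative.

```python
import numpy as np

def als_nmf(V, r, n_iter=100):
    """ALS-style NMF sketch: alternately solve unconstrained least
    squares for H and W, then clip negatives. The clipping creates
    exact zeros, yielding sparse intermediate and output matrices."""
    m, n = V.shape
    W = np.abs(np.random.default_rng(0).random((m, r)))
    H = None
    for _ in range(n_iter):
        H = np.maximum(np.linalg.lstsq(W, V, rcond=None)[0], 0.0)
        W = np.maximum(np.linalg.lstsq(H.T, V.T, rcond=None)[0].T, 0.0)
    return W, H
```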
Algorithms for Sparse Non-negative Tucker decompositions
TLDR
Algorithms for sparse non-negative Tucker decompositions (SN-TUCKER) are proposed, and it is demonstrated how the proposed algorithms are superior to existing algorithms for Tucker decomposition when the data and interactions can indeed be considered non-negative.

References

Non-negative sparse coding
  • P. Hoyer
  • Computer Science
    Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing
  • 2002
TLDR
A simple yet efficient multiplicative algorithm for finding the optimal values of the hidden components in non-negative sparse coding is presented, and it is shown how the basis vectors can be learned from the observed data.
Algorithms for Non-negative Matrix Factorization
TLDR
Two different multiplicative algorithms for non-negative matrix factorization are analyzed and one algorithm can be shown to minimize the conventional least squares error while the other minimizes the generalized Kullback-Leibler divergence.
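
For reference, a sketch of the second of the two rules, the multiplicative updates for the generalized Kullback-Leibler divergence D(V || WH); the Euclidean rule appears in the sketch after the abstract above. These are the standard Lee-Seung updates, with a small eps guard added against division by zero.

```python
import numpy as np

def nmf_kl(V, r, n_iter=200, eps=1e-9):
    """Lee-Seung multiplicative updates minimizing the generalized
    Kullback-Leibler divergence between V and W @ H."""
    m, n = V.shape
    rng = np.random.default_rng(0)
    W, H = rng.random((m, r)), rng.random((r, n))
    for _ in range(n_iter):
        # H update: reweight by W^T (V / WH), normalized by column sums of W.
        H *= (W.T @ (V / (W @ H + eps))) / (W.sum(axis=0)[:, None] + eps)
        # W update: reweight by (V / WH) H^T, normalized by row sums of H.
        W *= ((V / (W @ H + eps)) @ H.T) / (H.sum(axis=1)[None, :] + eps)
    return W, H
```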
Learning the parts of objects by non-negative matrix factorization
TLDR
An algorithm for non-negative matrix factorization is demonstrated that is able to learn parts of faces and semantic features of text and is in contrast to other methods that learn holistic, not parts-based, representations.
Sparse Coding with Invariance Constraints
TLDR
A new approach is presented for optimizing the learning of sparse features under the constraints of explicit transformation symmetries imposed on the set of feature vectors, obtaining a less redundant basis feature set compared to sparse coding approaches without invariances.
Emergence of simple-cell receptive field properties by learning a sparse code for natural images
TLDR
It is shown that a learning algorithm that attempts to find sparse linear codes for natural scenes will develop a complete family of localized, oriented, bandpass receptive fields, similar to those found in the primary visual cortex.
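
The objective behind this result can be written compactly in its standard form (notation mine: I is the image, φ_i the basis functions, a_i the coefficients, S a sparsity-inducing cost such as S(a) = |a|, and λ trades reconstruction error against sparsity):

```latex
\min_{\{a_i\},\,\{\phi_i\}} \; \sum_{x}\Big[ I(x) - \sum_i a_i\,\phi_i(x) \Big]^2
\;+\; \lambda \sum_i S(a_i)
```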
Sparse coding with invariance constraints
  • In ICANN/ICONIP 2003 conference proceedings
  • 2003
Learning the parts of objects by non-negative matrix factorization
  • Nature
  • 1999