NeNMF: An Optimal Gradient Method for Nonnegative Matrix Factorization

@article{Guan2012NeNMFAO,
  title={NeNMF: An Optimal Gradient Method for Nonnegative Matrix Factorization},
  author={Naiyang Guan and Dacheng Tao and Zhigang Luo and Bo Yuan},
  journal={IEEE Transactions on Signal Processing},
  year={2012},
  volume={60},
  pages={2882--2898}
}
Nonnegative matrix factorization (NMF) is a powerful matrix decomposition technique that approximates a nonnegative matrix by the product of two low-rank nonnegative matrix factors. It has been widely applied to signal processing, computer vision, and data mining. Traditional NMF solvers include the multiplicative update rule (MUR), the projected gradient method (PG), the projected nonnegative least squares (PNLS), and the active set method (AS). However, they suffer from one or some of the… 
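The multiplicative update rule (MUR) mentioned in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration of the classic Lee–Seung updates for the Frobenius-norm objective, not code from the paper; the function name and parameters are illustrative.

```python
import numpy as np

def nmf_mur(V, r, iters=500, eps=1e-9, seed=0):
    """Minimal multiplicative-update NMF: V ~= W @ H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        # Lee-Seung updates; eps in the denominator avoids division by zero.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Because both factors start positive and each update multiplies by a ratio of nonnegative terms, nonnegativity is preserved automatically, which is what makes MUR so simple; its slow convergence is one of the shortcomings that motivates NeNMF.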
Adaptive Method for Nonsmooth Nonnegative Matrix Factorization
TLDR
Simulations using computer-generated data and real-world data show the advantages of the proposed Ans-NMF method over the state-of-the-art methods.
Constraint-Relaxation Approach for Nonnegative Matrix Factorization: A Case Study
TLDR
The experiments show that this approach can reach a nonnegative matrix factorization with lower reconstruction error than conventional methods, and that the technique for rank-2 exact NMF works well.
Limited-Memory Fast Gradient Descent Method for Graph Regularized Nonnegative Matrix Factorization
TLDR
An efficient limited-memory FGD (L-FGD) method is proposed for optimizing graph regularized nonnegative matrix factorization and its clustering performance is validated for optimizing KL-divergence based GNMF on two popular face image datasets including ORL and PIE and two text corpora.
Efficient Rank-one Residue Approximation Method for Graph Regularized Non-negative Matrix Factorization
TLDR
A new efficient GNMF solver called rank-one residue approximation (RRA) is proposed, which is theoretically and empirically shown to converge rapidly to a stationary point; the stationarity of the solution obtained by RRA is confirmed.
A Fast Non-Smooth Nonnegative Matrix Factorization for Learning Sparse Representation
TLDR
A fast NsNMF (FNsNMF) algorithm is proposed to speed up NsNMF by minimizing a proximal function designed from the Lipschitz constant, which can be solved using a constructed fast-converging sequence.
Online Nonnegative Matrix Factorization With Robust Stochastic Approximation
TLDR
An efficient online RSA-NMF algorithm is proposed that learns NMF in an incremental fashion and outperforms existing online NMF (ONMF) algorithms in terms of efficiency; using the quasi-martingale, it is proved that OR-NMF almost surely converges to a locally optimal solution.
A non-convex optimization framework for large-scale low-rank matrix factorization
TLDR
This paper combines the conjugate gradient (CG) method with the Barzilai–Borwein (BB) gradient method and proposes a BB-scaling CG method for NMF problems, resulting in a new algorithm that significantly improves CPU time, efficiency, and the number of function evaluations.
Gauss-Seidel HALS Algorithm for Nonnegative Matrix Factorization with Sparseness and Smoothness Constraints
TLDR
This paper proposes a new algorithm called the Gauss-Seidel HALS algorithm that decreases the objective value monotonically and proves that it has the global convergence property in the sense of Zangwill.
Large-Cone Nonnegative Matrix Factorization
TLDR
This paper introduces two large-cone penalties for NMF and proposes large-cone NMF (LCNMF) algorithms, which obtain bases comprising a larger simplicial cone and have three advantages: the empirical reconstruction error of LCNMF is mostly smaller, the generalization ability of the proposed algorithm is more powerful, and the obtained bases have a low-overlapping property.
DSANLS: Accelerating Distributed Nonnegative Matrix Factorization via Sketching
TLDR
This paper proposes a distributed sketched alternating nonnegative least squares (DSANLS) framework for NMF, which utilizes a matrix sketching technique to reduce the size of the nonnegative least squares subproblems for U and V in each iteration.

References

Alternating projected Barzilai-Borwein methods for nonnegative matrix factorization.
TLDR
Four algorithms for solving the nonsmooth nonnegative matrix factorization (nsNMF) problem are proposed, and a numerical comparison between the APBB2 method and the Hierarchical Alternating Least Squares (HALS)/Rank-one Residue Iteration (RRI) method is provided.
Non-negative Matrix Factorization with Quasi-Newton Optimization
TLDR
This work derives a relatively simple second-order quasi-Newton method for NMF with the so-called Amari alpha divergence, which has been extensively tested on blind source separation problems, for both signals and images.
Toward Faster Nonnegative Matrix Factorization: A New Algorithm and Comparisons
TLDR
This paper presents a novel algorithm for NMF based on the ANLS framework that builds upon the block principal pivoting method for the nonnegativity-constrained least squares problem, overcoming some limitations of active-set methods.
Fast Newton-type Methods for the Least Squares Nonnegative Matrix Approximation Problem
TLDR
New and improved algorithms for the least-squares NNMA problem are presented which are not only theoretically well-founded but also overcome many of the deficiencies of other methods, using non-diagonal gradient scaling to obtain rapid convergence.
Non-negative Matrix Factorization on Manifold
TLDR
This paper constructs an affinity graph to encode the geometrical information and seeks a matrix factorization that respects the graph structure, demonstrating the success of this novel algorithm by applying it to real-world problems.
Manifold Regularized Discriminative Nonnegative Matrix Factorization With Fast Gradient Descent
TLDR
Manifold regularization and margin maximization are introduced to NMF, and the manifold regularized discriminative NMF (MD-NMF) is obtained to overcome the aforementioned problems.
On the Convergence of Multiplicative Update Algorithms for Nonnegative Matrix Factorization
Chih-Jen Lin · IEEE Transactions on Neural Networks · 2007
TLDR
This paper proposes slight modifications of existing updates and proves their convergence; the techniques invented here may be applied to prove convergence for other bound-constrained optimization problems.
Projected Gradient Methods for Nonnegative Matrix Factorization
TLDR
This letter proposes two projected gradient methods for nonnegative matrix factorization, both of which exhibit strong optimization properties; efficient implementations are discussed, and it is demonstrated that one of the proposed methods converges faster than the popular multiplicative update approach.
An accelerated gradient method for trace norm minimization
TLDR
This paper exploits the special structure of the trace norm, based on which an extended gradient algorithm converging as O(1/k) is proposed, along with an accelerated gradient algorithm that achieves the optimal convergence rate of O(1/k²) for smooth problems.
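The optimal O(1/k²) accelerated gradient scheme referenced above is the same family of methods NeNMF applies to each nonnegative least-squares subproblem. A minimal sketch for one such subproblem, min over H ≥ 0 of ||V − WH||²_F, is shown below; the function name and the pseudoinverse warm start are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def nnls_nesterov(V, W, iters=100):
    """Accelerated projected gradient (Nesterov) for min_{H>=0} ||V - W H||_F^2."""
    WtW, WtV = W.T @ W, W.T @ V
    L = np.linalg.norm(WtW, 2)                 # Lipschitz constant of the gradient
    H = np.maximum(np.linalg.pinv(W) @ V, 0)   # warm start (illustrative choice)
    Y, t = H.copy(), 1.0
    for _ in range(iters):
        grad = WtW @ Y - WtV
        H_new = np.maximum(Y - grad / L, 0)    # gradient step + projection onto H >= 0
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        Y = H_new + ((t - 1) / t_new) * (H_new - H)  # momentum extrapolation
        H, t = H_new, t_new
    return H
```

The momentum term extrapolates from the previous two iterates, which is what lifts the convergence rate from the O(1/k) of plain projected gradient to the optimal O(1/k²) for smooth convex problems.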
Positive matrix factorization: A non-negative factor model with optimal utilization of error estimates of data values†
A new variant 'PMF' of factor analysis is described. It is assumed that X is a matrix of observed data and σ is the known matrix of standard deviations of elements of X. Both X and σ are of…