Corpus ID: 245769945

Online nonnegative CP-dictionary learning for Markovian data

Hanbaek Lyu, Christopher Strohmeier, Deanna Needell
Online Tensor Factorization (OTF) is a fundamental tool in learning low-dimensional interpretable features from streaming multi-modal data. While various algorithmic and theoretical aspects of OTF have been investigated recently, a general convergence guarantee to stationary points of the objective function without any incoherence or sparsity assumptions is still lacking even for the i.i.d. case. In this work, we introduce a novel algorithm that learns a CANDECOMP/PARAFAC (CP) basis from a… 
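The CP model referenced in the abstract can be illustrated with a minimal batch sketch. This is not the authors' online algorithm: it is plain alternating least squares with projection onto the nonnegative orthant on a single third-order tensor, and the function names and all parameter choices below are illustrative assumptions.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product of A (I x R) and B (J x R) -> (I*J) x R."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def nonneg_cp_als(X, rank, n_iters=200, seed=0):
    """Sketch: nonnegative CP decomposition of a 3-way tensor X via
    alternating least squares, clipping each factor to be nonnegative.
    The paper's algorithm instead processes a *stream* of tensors online."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.random((I, rank))
    B = rng.random((J, rank))
    C = rng.random((K, rank))
    # Mode-n unfoldings (C-order reshape matches the Khatri-Rao row ordering).
    X0 = X.reshape(I, -1)
    X1 = np.moveaxis(X, 1, 0).reshape(J, -1)
    X2 = np.moveaxis(X, 2, 0).reshape(K, -1)
    for _ in range(n_iters):
        # Each factor solves a linear least-squares problem against the
        # Khatri-Rao product of the other two, then is projected to >= 0.
        A = np.clip(X0 @ np.linalg.pinv(khatri_rao(B, C).T), 0, None)
        B = np.clip(X1 @ np.linalg.pinv(khatri_rao(A, C).T), 0, None)
        C = np.clip(X2 @ np.linalg.pinv(khatri_rao(A, B).T), 0, None)
    return A, B, C
```

The clip-after-solve step is a common projected-ALS heuristic, not the surrogate-minimization update analyzed in the paper.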
2 Citations



Stochastic block majorization-minimization is introduced, in which the surrogates need only be block multi-convex and a single block is optimized at a time within a diminishing radius, yielding the first convergence rate bounds for various online matrix and tensor decomposition algorithms under a general Markovian data setting.

Stochastic regularized majorization-minimization with weakly convex and multi-convex surrogates

The analysis shows that SMM can be faster than SGD in optimizing the empirical loss and can match the optimal rate of SGD for the expected loss, and it provides the first convergence rate bounds for various online matrix and tensor decomposition algorithms under a general Markovian data setting.



Online matrix factorization for Markovian data and applications to Network Dictionary Learning

This paper shows that the well-known OMF algorithm for i.i.d. data streams, proposed in mairal2010online, in fact converges almost surely to the set of critical points of the expected loss function even when the data matrices form a Markov chain satisfying a mild mixing condition, and it extends the convergence result to the case where each step of the optimization problem in the algorithm can only be solved approximately.
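The OMF scheme of mairal2010online that this result builds on can be sketched as follows, under simplifying assumptions: ridge codes in place of the paper's lasso codes, an optional nonnegativity clip, and illustrative names and parameters throughout.

```python
import numpy as np

def online_mf(stream, n_features, rank, lam=0.1, seed=0):
    """Sketch of Mairal-style online matrix factorization: for each
    incoming sample x_t, compute a code alpha_t against the current
    dictionary D, accumulate sufficient statistics A_t = sum alpha alpha^T
    and B_t = sum x alpha^T, then update D by block coordinate descent."""
    rng = np.random.default_rng(seed)
    D = rng.random((n_features, rank))
    A = np.zeros((rank, rank))
    B = np.zeros((n_features, rank))
    for x in stream:
        # Ridge-regression code (the original algorithm uses a lasso code).
        alpha = np.linalg.solve(D.T @ D + lam * np.eye(rank), D.T @ x)
        A += np.outer(alpha, alpha)
        B += np.outer(x, alpha)
        # One pass of block coordinate descent over dictionary columns.
        for j in range(rank):
            if A[j, j] > 1e-12:
                D[:, j] += (B[:, j] - D @ A[:, j]) / A[j, j]
                D[:, j] = np.clip(D[:, j], 0, None)  # optional nonnegativity
                n = np.linalg.norm(D[:, j])
                if n > 1.0:  # project columns onto the unit ball
                    D[:, j] /= n
    return D
```

The key point of the summarized paper is that this style of update still converges when the samples `x` come from a Markov chain rather than an i.i.d. source; the sketch itself is agnostic to how the stream is generated.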

Online Nonnegative Matrix Factorization with General Divergences

It is proved that the sequence of learned dictionaries converges almost surely to the set of critical points of the expected loss function, by leveraging the theory of stochastic approximations and projected dynamical systems.

Provable Online CP/PARAFAC Decomposition of a Structured Tensor via Dictionary Learning

This work develops a provable algorithm for online structured tensor factorization in which one of the factors obeys incoherence conditions and the others are sparse, making it suitable for real-world tasks where such structural assumptions hold.

Learning Overcomplete Latent Variable Models through Tensor Methods

The main tool is a new algorithm for tensor decomposition that works in the overcomplete regime; a simple initialization algorithm based on the SVD of the tensor slices is proposed, and guarantees are provided under the stricter condition that k ≤ βd.

Streaming Tensor Factorization for Infinite Data Sources

CP-stream is presented, an algorithm for streaming sparse tensor factorization in the canonical polyadic decomposition model whose time and space requirements do not grow linearly with the length of the stream, making it practical for long-term streaming.

Probabilistic Streaming Tensor Decomposition

This work proposes POST, a PrObabilistic Streaming Tensor decomposition algorithm, which enables real-time updates and predictions upon receiving new tensor entries, and supports dynamic growth of all the modes.

Online Convex Dictionary Learning

This work proposes a novel low-complexity, batch online convex dictionary learning algorithm, which sequentially processes small batches of data maintained in a fixed amount of storage space, and produces meaningful dictionaries that satisfy convexity constraints.

Accelerating Online CP Decompositions for Higher Order Tensors

This work proposes an efficient online algorithm that can incrementally track the CP decompositions of dynamic tensors with an arbitrary number of dimensions and shows not only significantly better decomposition quality, but also better performance in terms of stability, efficiency and scalability.

D4L: Decentralized Dynamic Discriminative Dictionary Learning

This work considers discriminative dictionary learning in a distributed online setting, where a network of agents aims to learn, from sequential observations, statistical model parameters jointly with data-driven signal representations, and considers the use of a block variant of the Arrow–Hurwicz saddle point algorithm to solve this problem.

Online tensor methods for learning latent variable models

An online tensor decomposition based approach is presented for two latent variable modeling problems, namely community detection and topic modeling, in which the latent communities that social actors in social networks belong to are learned.