Corpus ID: 231846823

DeEPCA: Decentralized Exact PCA with Linear Convergence Rate

@article{Ye2021DeEPCADE,
  title={DeEPCA: Decentralized Exact PCA with Linear Convergence Rate},
  author={Haishan Ye and Tong Zhang},
  journal={ArXiv},
  year={2021},
  volume={abs/2102.03990}
}
Due to the rapid growth of smart agents such as weakly connected computational nodes and sensors, developing decentralized algorithms that can perform computations on local agents has become a major research direction. This paper considers the problem of decentralized principal component analysis (PCA), a statistical method widely used for data analysis. We introduce a technique called subspace tracking to reduce the communication cost, and apply it to power iterations. This leads to a…
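The truncated abstract names two ingredients: power iterations and a subspace-tracking correction communicated over the network. As a rough illustration of how those pieces can fit together, here is a minimal NumPy sketch of a decentralized power method with a gradient-tracking-style update. The function name, the mixing matrix `W` (assumed a doubly stochastic NumPy array), the per-node covariance list `A_local`, and the QR normalization are all illustrative assumptions, not the paper's exact DeEPCA update, which among other things must handle the rotation/sign ambiguity of the orthonormalization step.

```python
import numpy as np

def decentralized_power_iteration(A_local, W, k, n_iters=50):
    """Sketch of a decentralized power method with subspace tracking.

    A_local : list of n local covariance matrices, one per node (d x d)
    W       : doubly stochastic n x n mixing (gossip) matrix, NumPy array
    k       : number of principal components to estimate

    Each node mixes a tracking variable S_i with its neighbors, then
    orthonormalizes locally; the tracking correction keeps sum_i S_i
    equal to sum_i A_i @ X_i, so the consensus limit behaves like a
    power-iteration step on the global covariance sum_i A_i.
    Illustrative only -- not the exact DeEPCA update rule.
    """
    n = len(A_local)
    d = A_local[0].shape[0]
    rng = np.random.default_rng(0)
    X0, _ = np.linalg.qr(rng.standard_normal((d, k)))  # shared init
    X = [X0.copy() for _ in range(n)]                  # local estimates
    S = [A_local[i] @ X[i] for i in range(n)]          # tracked products

    for _ in range(n_iters):
        # one round of gossip on the tracked power-iteration step
        S_mixed = [sum(W[i, j] * S[j] for j in range(n)) for i in range(n)]
        for i in range(n):
            X_new, _ = np.linalg.qr(S_mixed[i])        # local normalization
            # tracking update: swap the old local product for the new one
            S[i] = S_mixed[i] + A_local[i] @ X_new - A_local[i] @ X[i]
            X[i] = X_new
    return X  # each X[i] approximates the top-k eigenspace of sum_i A_i
```

In this sketch each communication round exchanges only the d × k matrices S_i, rather than any d × d covariance, which is the kind of saving the abstract attributes to subspace tracking.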

Citations

A Communication-Efficient and Privacy-Aware Distributed Algorithm for Sparse PCA
As a prominent variant of principal component analysis (PCA), sparse PCA attempts to find sparse loading vectors when conducting dimension reduction. This paper aims to calculate sparse PCA through…
FAST-PCA: A Fast and Exact Algorithm for Distributed Principal Component Analysis
A distributed PCA algorithm called FAST-PCA (Fast and exAct diSTributed PCA) is proposed that is efficient in terms of communication and can be proved to converge linearly and exactly to the principal components that lead to dimension reduction as well as uncorrelated features.
Decentralized Riemannian Gradient Descent on the Stiefel Manifold
DRGTA is the first decentralized algorithm with exact convergence for distributed optimization on the Stiefel manifold; it uses multi-step consensus to keep the iterates in the local consensus region.
Distributed Principal Subspace Analysis for Partitioned Big Data: Algorithms, Analysis, and Implementation
This paper revisits the problem of distributed PSA/PCA under the general framework of an arbitrarily connected network of machines that lacks a central server, studying the interplay between network topology and communication cost as well as the effects of straggler machines on the proposed algorithms.

References

Showing 1-10 of 33 references
Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent
This paper studies a D-PSGD algorithm and provides the first theoretical analysis that indicates a regime in which decentralized algorithms might outperform centralized algorithms for distributed stochastic gradient descent.
Multi-consensus Decentralized Accelerated Gradient Descent
A novel algorithm is proposed that can achieve near-optimal communication complexity, matching the known lower bound up to a logarithmic factor of the condition number of the problem.
Harnessing smoothness to accelerate distributed optimization
  • Guannan Qu, N. Li
  • Mathematics, Computer Science
  • 2016 IEEE 55th Conference on Decision and Control (CDC)
  • 2016
This paper proposes a distributed algorithm that, despite using the same amount of communication per iteration as DGD, can effectively harness the function smoothness and converge to the optimum at a rate of O(1/t) when the objective function is strongly convex and smooth.
Accelerated linear iterations for distributed averaging
It is shown that the augmented linear iteration can solve the distributed averaging problem faster than the original linear iteration, but the adjustable parameter must be chosen carefully.
The decentralized estimation of the sample covariance
This work shows that a completely distributed scheme based on near-neighbor communications is feasible, and applies the proposed method to estimating the direction of arrival of a signal source.
Distributed adaptive estimation of covariance matrix eigenvectors in wireless sensor networks with application to distributed PCA
A distributed adaptive algorithm is proposed to estimate the eigenvectors corresponding to the Q largest or smallest eigenvalues of the network-wide sensor signal covariance matrix in a wireless sensor network; convergence proofs and numerical simulations demonstrate the convergence and optimality of the algorithm.
Performance Analysis of the Decentralized Eigendecomposition and ESPRIT Algorithm
It is shown that the decentralized power method is not an asymptotically consistent estimator of the eigenvectors of the true measurement covariance matrix unless the averaging consensus protocol is carried out over an infinitely large number of iterations.
Fast linear iterations for distributed averaging
This work considers the problem of finding a linear iteration that yields distributed averaging consensus over a network, i.e., one that asymptotically computes the average of some initial values given at the nodes, and gives several extensions and variations on the basic problem; a minimal sketch of this iteration appears after this reference list.
Fast and privacy preserving distributed low-rank regression
This paper proposes a fast and privacy-preserving distributed algorithm, the fast DeFW algorithm, for low-rank regression problems with a nuclear-norm constraint; it incorporates a carefully designed decentralized power-method step to reduce complexity through distributed computation over the network.
Decentralized Riemannian Gradient Descent on the Stiefel Manifold
DRGTA is the first decentralized algorithm with exact convergence for distributed optimization on the Stiefel manifold; it uses multi-step consensus to keep the iterates in the local consensus region.
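Several of the references above (the fast and accelerated linear iterations for distributed averaging) revolve around the same primitive: the linear averaging iteration x(t+1) = W x(t). A minimal sketch, assuming a symmetric doubly stochastic weight matrix W on a 3-node path graph (both the graph and the specific weights are illustrative choices, not taken from any of the papers):

```python
import numpy as np

def linear_average_consensus(values, W, n_iters=100):
    """Linear averaging iteration x(t+1) = W x(t).

    If W is doubly stochastic and its second-largest eigenvalue
    modulus is below 1, every entry converges geometrically to the
    average of the initial values; the spectral gap sets the rate.
    """
    x = np.asarray(values, dtype=float)
    for _ in range(n_iters):
        x = W @ x  # each node replaces its value by a weighted
                   # average of its neighbors' current values
    return x

# toy example: 3-node path graph, symmetric doubly stochastic weights
W = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])
print(linear_average_consensus([1.0, 2.0, 6.0], W))  # -> approx [3. 3. 3.]
```

Choosing W to maximize the spectral gap subject to the network's sparsity pattern is the optimization studied in the "Fast linear iterations" reference, while the "Accelerated linear iterations" entry augments this recursion with an extra term governed by an adjustable parameter.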