DeEPCA: Decentralized Exact PCA with Linear Convergence Rate
Haishan Ye and Tong Zhang
Corpus ID: 231846823
Due to the rapid growth of smart agents such as weakly connected computational nodes and sensors, developing decentralized algorithms that can perform computations on local agents has become a major research direction. This paper considers the problem of decentralized principal component analysis (PCA), a statistical method widely used in data analysis. We introduce a technique called subspace tracking to reduce the communication cost and apply it to power iteration. This leads to a…
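The abstract pairs power iteration with decentralized communication. A minimal sketch of that combination, assuming each node holds a local covariance matrix and nodes mix estimates through a doubly stochastic gossip matrix, might look like the following. This is an illustration of decentralized orthogonal (power) iteration, not DeEPCA's exact subspace-tracking update, whose details are in the paper.

```python
import numpy as np

def decentralized_power_iteration(local_covs, W, k, n_iters=100):
    """Illustrative decentralized power iteration: node i holds a local
    covariance A_i; after each local power step the nodes average their
    subspace estimates through the gossip matrix W (W[i, j] is the weight
    node i places on node j) and re-orthonormalize with QR."""
    m = len(local_covs)
    d = local_covs[0].shape[0]
    rng = np.random.default_rng(0)
    X0 = np.linalg.qr(rng.standard_normal((d, k)))[0]
    X = [X0.copy() for _ in range(m)]  # nodes start from a shared seed
    for _ in range(n_iters):
        Y = [A @ Xi for A, Xi in zip(local_covs, X)]        # local power step
        X = [sum(W[i, j] * Y[j] for j in range(m))          # one consensus round
             for i in range(m)]
        X = [np.linalg.qr(Xi)[0] for Xi in X]               # orthonormalize
    return X
```

When all nodes agree, one step acts as power iteration on the W-weighted average of the local covariances, so each node's columns converge to the dominant k-dimensional eigenspace of the network-wide covariance.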

A Communication-Efficient and Privacy-Aware Distributed Algorithm for Sparse PCA
As a prominent variant of principal component analysis (PCA), sparse PCA attempts to find sparse loading vectors when conducting dimension reduction. This paper aims to calculate sparse PCA through…
FAST-PCA: A Fast and Exact Algorithm for Distributed Principal Component Analysis
A distributed PCA algorithm called FAST-PCA (Fast and exAct diSTributed PCA) is proposed; it is communication-efficient and provably converges linearly and exactly to the principal components, which yield dimension reduction as well as uncorrelated features.
Decentralized Riemannian Gradient Descent on the Stiefel Manifold
DRGTA is the first decentralized algorithm with exact convergence for distributed optimization on the Stiefel manifold; it uses multi-step consensus to keep the iterates within the local consensus region.
Decentralized Optimization Over the Stiefel Manifold by an Approximate Augmented Lagrangian Function
  • Lei Wang, Xin Liu
  • Computer Science, Mathematics
  • 2021
This paper proposes a decentralized algorithm, called DESTINY, for optimization over the Stiefel manifold that invokes only a single round of communication per iteration, combining gradient tracking techniques with a novel approximate augmented Lagrangian function.
Distributed Principal Subspace Analysis for Partitioned Big Data: Algorithms, Analysis, and Implementation
This paper revisits the problem of distributed PSA/PCA under the general framework of an arbitrarily connected network of machines that lacks a central server, studying the interplay between network topology and communication cost as well as the effect of straggler machines on the proposed algorithms.
Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent
This paper studies a D-PSGD algorithm and provides the first theoretical analysis that indicates a regime in which decentralized algorithms might outperform centralized algorithms for distributed stochastic gradient descent.
Multi-consensus Decentralized Accelerated Gradient Descent
A novel algorithm is proposed that can achieve near optimal communication complexity, matching the known lower bound up to a logarithmic factor of the condition number of the problem.
Harnessing smoothness to accelerate distributed optimization
  • Guannan Qu, N. Li
  • Mathematics, Computer Science
    2016 IEEE 55th Conference on Decision and Control (CDC)
  • 2016
This paper proposes a distributed algorithm that, despite using the same amount of communication per iteration as DGD, can effectively harness the function smoothness and converge to the optimum with a rate of O(1/t) if the objective function is strongly convex and smooth.
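The scheme described above, like the gradient tracking mentioned for DESTINY, has each node descend along a local estimate of the global gradient that is corrected with the change in its own gradient. A hedged sketch of the standard gradient-tracking update (names, step size, and problem are illustrative, not the paper's exact algorithm):

```python
import numpy as np

def gradient_tracking(grads, W, x0, eta=0.1, n_iters=500):
    """Illustrative gradient tracking: node i mixes its iterate with its
    neighbors' via the doubly stochastic matrix W and steps along s_i,
    a running estimate of the average gradient, which is updated with
    the difference of consecutive local gradients."""
    m = len(grads)
    x = [x0.copy() for _ in range(m)]
    g = [grads[i](x[i]) for i in range(m)]
    s = [gi.copy() for gi in g]          # s_i initialized to the local gradient
    for _ in range(n_iters):
        x_new = [sum(W[i, j] * x[j] for j in range(m)) - eta * s[i]
                 for i in range(m)]
        g_new = [grads[i](x_new[i]) for i in range(m)]
        s = [sum(W[i, j] * s[j] for j in range(m)) + g_new[i] - g[i]
             for i in range(m)]          # tracking: avg(s) stays avg(g)
        x, g = x_new, g_new
    return x
```

The key invariant is that the average of the s_i always equals the average of the local gradients, which is what lets the method reach the exact optimum with a constant step size.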
Accelerated linear iterations for distributed averaging
It is shown that the augmented linear iteration can solve the distributed averaging problem faster than the original linear iteration, but the adjustable parameter must be chosen carefully.
The decentralized estimation of the sample covariance
This work shows that a completely distributed scheme based on near-neighbor communications is feasible, and applies the proposed method to estimating the direction of arrival of a signal source.
Distributed adaptive estimation of covariance matrix eigenvectors in wireless sensor networks with application to distributed PCA
A distributed adaptive algorithm is proposed to estimate the eigenvectors corresponding to the Q largest or smallest eigenvalues of the network-wide sensor signal covariance matrix in a wireless sensor network; convergence proofs and numerical simulations demonstrate the convergence and optimality of the algorithm.
Performance Analysis of the Decentralized Eigendecomposition and ESPRIT Algorithm
It is shown that the decentralized power method is not an asymptotically consistent estimator of the eigenvectors of the true measurement covariance matrix unless the averaging consensus protocol is carried out over an infinitely large number of iterations.
Fast linear iterations for distributed averaging
This work considers the problem of finding a linear iteration that yields distributed averaging consensus over a network, i.e., that asymptotically computes the average of some initial values given at the nodes, and gives several extensions and variations on the basic problem.
Fast and privacy preserving distributed low-rank regression
This paper proposes a fast and privacy-preserving distributed algorithm, called the fast DeFW algorithm, for low-rank regression problems with a nuclear norm constraint; it incorporates a carefully designed decentralized power method step to reduce complexity through distributed computation over the network.