Publications
Coded computation over heterogeneous clusters
TLDR
This paper proposes the Heterogeneous Coded Matrix Multiplication (HCMM) algorithm for performing distributed matrix multiplication over heterogeneous clusters, shows that it is provably asymptotically optimal, and provides numerical results demonstrating significant speedups of up to 49% and 34% for HCMM over the "uncoded" and "homogeneous coded" schemes, respectively.
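Since HCMM builds on coded matrix multiplication, the following is a minimal numpy sketch of the underlying straggler-mitigation idea: encode row blocks of A with a Vandermonde (MDS-style) generator so that A·x is recoverable from the results of any k of n workers. This is a generic illustration under assumed sizes and code design; HCMM's actual contribution, the asymptotically optimal load allocation across heterogeneous workers, is not modeled here.

```python
import numpy as np

# Generic sketch of MDS-coded distributed matrix-vector multiplication A @ x.
# A is split into k row blocks and encoded into n > k coded blocks; A @ x is
# recoverable from any k worker results, so the n - k slowest workers
# (stragglers) can simply be ignored. (Heterogeneous load allocation, the core
# of HCMM, is not modeled here.)
rng = np.random.default_rng(0)
k, n = 4, 6                      # k data blocks, n workers (n - k redundant)
m, d = 8, 5                      # A has k*m rows and d columns
A = rng.standard_normal((k * m, d))
x = rng.standard_normal(d)

blocks = A.reshape(k, m, d)                                            # k row blocks of A
G = np.vander(np.arange(1, n + 1), k, increasing=True).astype(float)   # n x k MDS-style generator
coded = np.einsum("nk,kmd->nmd", G, blocks)                            # worker i stores sum_j G[i,j] * block_j

fast = [0, 2, 3, 5]                                   # any k workers that finish first
partial = np.stack([coded[i] @ x for i in fast])      # their k coded partial results
recovered = np.linalg.inv(G[fast]) @ partial          # decode the k uncoded block results
assert np.allclose(recovered.reshape(-1), A @ x)
```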
Coded Computing for Low-Latency Federated Learning Over Wireless Edge Networks
TLDR
This work proposes a novel coded computing framework, CodedFedL, that injects structured coding redundancy into federated learning for mitigating stragglers and speeding up the training procedure.
Coded Federated Learning
TLDR
This paper develops a novel coded computing technique, coded federated learning (CFL), to mitigate the impact of stragglers in federated learning, and shows that CFL allows the global model to converge nearly four times faster than an uncoded approach.
Coded Computing for Distributed Graph Analytics
TLDR
A coded computing framework is presented that systematically injects redundancy in the computation phase to enable coding opportunities in the communication phase, thereby reducing the communication load substantially.
Hierarchical Coded Gradient Aggregation for Learning at the Edge
TLDR
Aligned Maximum Distance Separable Coding (AMC) is proposed, which achieves the optimal $C_{EH}$ of $\Theta(1)$ for a given resiliency threshold by applying an MDS code over the gradient components, while achieving a $C_{HM}$ of $\mathcal{O}(n_e)$.
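As a rough illustration of coded gradient aggregation (a sketch under assumed parameters, not the paper's exact AMC construction): because an MDS code is linear, each helper can simply sum the coded gradient chunks it receives from the edge nodes, and the master recovers the aggregated gradient from any k helper aggregates, so helper-to-master traffic does not grow with the number of edge nodes.

```python
import numpy as np

# Sketch: each edge node splits its gradient into k chunks and sends one
# MDS-coded chunk to each of n_h helpers. Helpers just sum what they receive;
# the master decodes the *summed* gradient from any k helper aggregates.
# Generic illustration; the AMC scheme's exact alignment/code design is in the paper.
rng = np.random.default_rng(3)
n_e, d = 10, 12                  # edge nodes, gradient dimension
k, n_h = 3, 5                    # recover from any k of n_h helpers
grads = rng.standard_normal((n_e, d))

G = np.vander(np.arange(1, n_h + 1), k, increasing=True).astype(float)  # n_h x k MDS-style generator
chunks = grads.reshape(n_e, k, d // k)           # each gradient split into k chunks
coded = np.einsum("hk,ekc->ehc", G, chunks)      # coded[e, h] sent from edge node e to helper h

helper_agg = coded.sum(axis=0)                   # each helper sums its received coded chunks
fast = [0, 2, 4]                                 # any k non-straggling helpers
recovered = np.linalg.inv(G[fast]) @ helper_agg[fast]
assert np.allclose(recovered.reshape(-1), grads.sum(axis=0))
```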
Coded Computation Over Heterogeneous Clusters
TLDR
This paper proposes the heterogeneous coded matrix multiplication (HCMM) algorithm for performing distributed matrix multiplication over heterogeneous clusters, shows that it is provably asymptotically optimal for a broad class of processing time distributions, and develops a heuristic HCMM load-allocation algorithm for the distributed implementation of budget-limited computation tasks.
Coded Computing for Distributed Machine Learning in Wireless Edge Network
TLDR
A coded computation framework is proposed that utilizes statistical knowledge of resource heterogeneity to determine the optimal encoding and load balancing of training data using random linear codes, while avoiding an explicit step for decoding gradients.
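A minimal sketch of one ingredient named in this summary, computing gradients directly on randomly linearly coded data with no decoding step, shown here for linear least squares with an i.i.d. Gaussian coding matrix. All dimensions and the code design are illustrative assumptions, and the framework's statistical load-balancing optimization is not modeled.

```python
import numpy as np

# Sketch: for linear least squares, the gradient computed on randomly linearly
# coded data approximates the gradient on the raw data, with no decoding step,
# because E[G.T @ G] = I for the coding matrix below.
rng = np.random.default_rng(1)
N, d, c = 2000, 10, 500                         # raw samples, features, coded samples
X = rng.standard_normal((N, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(N)

G = rng.standard_normal((c, N)) / np.sqrt(c)    # random linear code, E[G.T @ G] = I
Xc, yc = G @ X, G @ y                           # coded data placed at a worker

w = np.zeros(d)
grad_raw = X.T @ (X @ w - y) / N                # gradient on raw data
grad_coded = Xc.T @ (Xc @ w - yc) / N           # gradient on coded data (no decoding)
print(np.linalg.norm(grad_raw - grad_coded) / np.linalg.norm(grad_raw))
# relative error shrinks as the number of coded samples c grows
```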
Coded Computing for Federated Learning at the Edge
TLDR
This work develops CodedFedL, which addresses the difficult task of extending coded federated learning (CFL) to distributed non-linear regression and classification problems with multi-output labels, and exploits distributed kernel embedding via random Fourier features to transform the training task into distributed linear regression.
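The random Fourier feature step mentioned in this summary is a standard kernel approximation; below is a minimal sketch, assuming an RBF kernel and illustrative sizes (not CodedFedL's actual configuration), of how it turns kernel learning into plain linear regression.

```python
import numpy as np

# Random Fourier features (Rahimi & Recht): map inputs so that z(x) . z(x')
# approximates the Gaussian (RBF) kernel exp(-gamma * ||x - x'||^2), turning
# kernel regression into linear regression on the transformed features.
rng = np.random.default_rng(2)
n, d, D, gamma = 500, 8, 256, 0.5

X = rng.standard_normal((n, d))
W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, D))   # frequencies ~ N(0, 2*gamma*I)
b = rng.uniform(0, 2 * np.pi, size=D)
Z = np.sqrt(2.0 / D) * np.cos(X @ W + b)                # random Fourier feature map

# Check the kernel approximation: Z @ Z.T ~= exact RBF Gram matrix.
K_approx = Z @ Z.T
K_exact = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
print(np.abs(K_approx - K_exact).mean())                # small average error

# Non-linear (kernel) regression now reduces to linear least squares on Z.
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)
w, *_ = np.linalg.lstsq(Z, y, rcond=None)
```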
Mitigating Byzantine Attacks in Federated Learning
TLDR
This work proposes DiverseFL, which jointly addresses three key challenges of Byzantine-resilient federated learning: non-IID data distribution across clients, a variable Byzantine fault model, and generalization to non-convex and non-smooth optimization.
pSConv: A Pre-defined Sparse Kernel Based Convolution for Deep CNNs
The high demand for computational and storage resources severely impedes the deployment of deep convolutional neural networks (CNNs) on resource-limited devices. Recent CNN architectures have
...
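The abstract above is truncated, but the title points at convolution layers with pre-defined (fixed) sparse kernels. Below is a minimal PyTorch sketch of that general idea, a fixed binary mask applied to the kernel weights; the random mask and all sizes are illustrative assumptions, not pSConv's specific pre-defined sparsity patterns.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch of a convolution with a pre-defined sparse kernel: a binary mask is
# fixed before training, so only the unmasked weights are ever learned or used.
# The random mask is illustrative; pSConv's actual patterns are in the paper.
class PreDefinedSparseConv2d(nn.Conv2d):
    def __init__(self, in_ch, out_ch, kernel_size=3, sparsity=0.5, **kwargs):
        super().__init__(in_ch, out_ch, kernel_size, **kwargs)
        mask = (torch.rand_like(self.weight) > sparsity).float()   # fixed mask, chosen once
        self.register_buffer("mask", mask)

    def forward(self, x):
        return F.conv2d(x, self.weight * self.mask, self.bias,
                        self.stride, self.padding, self.dilation, self.groups)

conv = PreDefinedSparseConv2d(16, 32, padding=1)
print(conv(torch.randn(1, 16, 28, 28)).shape)   # torch.Size([1, 32, 28, 28])
```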