Corpus ID: 239998554

FedPrune: Towards Inclusive Federated Learning

@article{Munir2021FedPruneTI,
  title={FedPrune: Towards Inclusive Federated Learning},
  author={Muhammad Tahir Munir and Muhammad Mustansar Saeed and Mahad Ali and Zafar Ayyub Qazi and Ihsan Ayyub Qazi},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.14205}
}
Federated learning (FL) is a distributed learning technique that trains a shared model over distributed data in a privacy-preserving manner. Unfortunately, FL's performance degrades when there is (i) variability in client characteristics in terms of computational and memory resources (system heterogeneity) and (ii) non-IID data distribution across clients (statistical heterogeneity). For example, slow clients get dropped in FL schemes such as Federated Averaging (FedAvg), which not only limits…
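To make the dropout failure mode concrete, here is a minimal FedAvg-style round in Python in which clients slower than a round deadline are simply excluded from the aggregate; the client fields and deadline value are assumptions for illustration, not the paper's interface.

import numpy as np

def fedavg_round(global_w, clients, deadline_s=60.0):
    # One FedAvg-style round: clients that cannot finish within the deadline are dropped.
    updates, sizes = [], []
    for c in clients:
        if c["train_time_s"] > deadline_s:
            continue  # slow, resource-constrained client is left out of this round
        updates.append(c["train_fn"](global_w))  # locally trained weights
        sizes.append(c["num_samples"])
    if not updates:
        return global_w  # every client timed out; keep the old model
    weights = np.array(sizes, dtype=float) / sum(sizes)
    return sum(w * u for w, u in zip(weights, updates))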

References

Agnostic Federated Learning
TLDR
This work proposes a new framework of agnostic federated learning, where the centralized model is optimized for any target distribution formed by a mixture of the client distributions, and shows that this framework naturally yields a notion of fairness.
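The agnostic objective can be sketched as a minimax problem over mixtures of the client distributions (standard notation assumed here; the paper may restrict the mixture weights to a subset of the simplex):

\min_{w} \; \max_{\lambda \in \Delta_m} \; \sum_{k=1}^{m} \lambda_k L_k(w)

where L_k(w) is client k's expected loss and \Delta_m is the probability simplex, so the centralized model is trained against the worst-case mixture of client distributions.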
Federated Optimization in Heterogeneous Networks
TLDR
This work introduces a framework, FedProx, to tackle heterogeneity in federated networks, and provides convergence guarantees for this framework when learning over data from non-identical distributions (statistical heterogeneity), and while adhering to device-level systems constraints by allowing each participating device to perform a variable amount of work.
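FedProx's central change is a proximal term in each client's local subproblem; a sketch in the usual notation:

\min_{w} \; h_k(w; w^t) = F_k(w) + \frac{\mu}{2} \lVert w - w^t \rVert^2

where F_k is client k's local objective, w^t is the current global model, and \mu bounds how far a variable (possibly partial) amount of local work can drift from w^t.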
Fair Resource Allocation in Federated Learning
TLDR
This work proposes q-Fair Federated Learning (q-FFL), a novel optimization objective inspired by fair resource allocation in wireless networks that encourages a more fair accuracy distribution across devices in federated networks.
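The q-FFL objective reweights clients by raising their local losses to a power; a sketch in the paper's usual notation:

f_q(w) = \sum_{k=1}^{m} \frac{p_k}{q+1} F_k^{\,q+1}(w)

where p_k is client k's sampling weight and larger q shifts emphasis toward clients with higher loss (q = 0 recovers the standard weighted-average objective).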
FedMGDA+: Federated Learning meets Multi-objective Optimization
TLDR
This work forms federated learning as multi-objective optimization and proposes a new algorithm FedMGDA+ that is guaranteed to converge to Pareto stationary solutions and is simple to implement, has fewer hyperparameters to tune, and refrains from sacrificing the performance of any participating user.
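FedMGDA+ inherits the multiple-gradient descent algorithm (MGDA) subproblem for finding a common descent direction; a sketch, with notation assumed for illustration:

\min_{\lambda \in \Delta_m} \Big\lVert \sum_{k=1}^{m} \lambda_k g_k \Big\rVert^2

where g_k is client k's (normalized) update and the resulting direction \sum_k \lambda_k^{*} g_k does not increase any client's loss to first order, which is the sense in which no participating user's performance is sacrificed.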
Expanding the Reach of Federated Learning by Reducing Client Resource Requirements
TLDR
Federated Dropout is introduced, which allows users to efficiently train locally on smaller subsets of the global model and also provides a reduction in both client-to-server communication and local computation.
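A minimal sketch of the sub-model idea for a single dense layer, using NumPy; the keep fraction and index bookkeeping are illustrative assumptions, not the paper's exact construction:

import numpy as np

def extract_submodel(W, keep_frac, rng):
    # Sample a subset of the layer's output units for one client to train.
    keep = rng.choice(W.shape[1], size=int(keep_frac * W.shape[1]), replace=False)
    return W[:, keep], keep

def merge_update(W, W_sub_updated, keep):
    # Write the client's updated sub-weights back into the global layer.
    W_new = W.copy()
    W_new[:, keep] = W_sub_updated
    return W_new

rng = np.random.default_rng(0)
W = rng.normal(size=(128, 64))              # global layer weights
W_sub, keep = extract_submodel(W, 0.5, rng)
W = merge_update(W, W_sub * 0.9, keep)      # stand-in for a locally trained update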
LEAF: A Benchmark for Federated Settings
TLDR
LEAF is proposed, a modular benchmarking framework for learning in federated settings that includes a suite of open-source federated datasets, a rigorous evaluation framework, and a set of reference implementations, all geared towards capturing the obstacles and intricacies of practical federated environments.
Towards Federated Learning at Scale: System Design
TLDR
A scalable production system for Federated Learning in the domain of mobile devices, based on TensorFlow is built, describing the resulting high-level design, and sketch some of the challenges and their solutions.
Communication-Efficient Learning of Deep Networks from Decentralized Data
TLDR
This work presents a practical method for the federated learning of deep networks based on iterative model averaging, and conducts an extensive empirical evaluation, considering five different model architectures and four datasets.
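The averaging step itself can be written compactly in standard FedAvg notation:

w_{t+1} = \sum_{k \in S_t} \frac{n_k}{n} \, w_{t+1}^{k}, \qquad n = \sum_{k \in S_t} n_k

where S_t is the set of clients selected in round t and w_{t+1}^{k} is the model client k produces after several local epochs of SGD on its own data.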
Fairness Without Demographics in Repeated Loss Minimization
TLDR
This paper develops an approach based on distributionally robust optimization (DRO), which minimizes the worst case risk over all distributions close to the empirical distribution and proves that this approach controls the risk of the minority group at each time step, in the spirit of Rawlsian distributive justice.
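The underlying objective can be sketched in its generic form (the paper instantiates the divergence ball concretely):

\min_{\theta} \; \sup_{Q : \, D(Q \Vert \hat{P}) \le \rho} \; \mathbb{E}_{Z \sim Q}[\ell(\theta; Z)]

i.e., the model is trained against the worst distribution Q within radius \rho of the empirical distribution \hat{P}, which upper-bounds the risk of any group whose population share is not too small.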
DropNet: Reducing Neural Network Complexity via Iterative Pruning
TLDR
DropNet, an iterative pruning method which prunes nodes/filters to reduce network complexity, is proposed and shown to be robust across diverse scenarios, including MLPs and CNNs using the MNIST, CIFAR-10 and Tiny ImageNet datasets.
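A minimal sketch of such an iterative pruning loop, assuming mean post-activation magnitude as the ranking signal (the pruning fraction and the stand-in scores are illustrative assumptions):

import numpy as np

def prune_step(mask, mean_activation, frac=0.1):
    # Zero out the still-active nodes/filters with the lowest average activation.
    active = np.flatnonzero(mask)
    n_drop = max(1, int(frac * active.size))
    drop = active[np.argsort(mean_activation[active])[:n_drop]]
    new_mask = mask.copy()
    new_mask[drop] = 0
    return new_mask

rng = np.random.default_rng(0)
mask = np.ones(64, dtype=int)       # one entry per node/filter
for _ in range(5):                  # each cycle: (re)train, measure activations, prune
    scores = rng.random(64)         # stand-in for measured mean activations
    mask = prune_step(mask, scores)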