Faithful Edge Federated Learning: Scalability and Privacy

@article{Zhang2021FaithfulEF,
  title={Faithful Edge Federated Learning: Scalability and Privacy},
  author={Meng Zhang and Ermin Wei and Randall A. Berry},
  journal={IEEE Journal on Selected Areas in Communications},
  year={2021},
  volume={39},
  pages={3790--3804}
}
Federated learning enables machine learning algorithms to be trained over decentralized edge devices without requiring the exchange of local datasets. Successfully deploying federated learning requires ensuring that agents (e.g., mobile devices) faithfully execute the intended algorithm, which has been largely overlooked in the literature. In this study, we first use risk bounds to analyze how the key feature of federated learning, unbalanced and non-i.i.d. data, affects agents’ incentives to… 
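The core idea in the abstract — clients train on private data and only model parameters are exchanged and aggregated — can be illustrated with a minimal federated-averaging (FedAvg-style) round. This is an illustrative sketch, not the paper's mechanism: the scalar model, client data, and function names here are all invented for the example.

```python
# Minimal FedAvg-style sketch: each client fits a scalar model y ≈ w * x by
# gradient descent on its own private data; the server averages the resulting
# parameters weighted by local dataset size. Illustrative only.

def local_update(w, data, lr=0.1, epochs=20):
    """A few epochs of gradient descent on one client's local (x, y) pairs."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def fedavg_round(w_global, clients):
    """Average client models weighted by local dataset size; only the model
    parameter leaves each client, never the raw data."""
    n_total = sum(len(d) for d in clients)
    return sum(len(d) / n_total * local_update(w_global, d) for d in clients)

# Unbalanced local datasets (echoing the abstract's unbalanced-data setting),
# all generated from the same ground-truth coefficient w = 3.
clients = [
    [(x, 3.0 * x) for x in (1.0, 2.0)],                 # small client
    [(x, 3.0 * x) for x in (0.5, 1.5, 2.5, 3.5, 4.5)],  # larger client
]

w = 0.0
for _ in range(30):
    w = fedavg_round(w, clients)
# w converges toward the ground-truth coefficient 3.0
```

Note that the server here trusts each client to actually run `local_update` as specified; the paper's faithfulness question is precisely whether strategic agents have an incentive to do so.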

A Smart Contract based Crowdfunding Mechanism for Hierarchical Federated Learning

TLDR
Develops a smart-contract-based trusted crowdfunding mechanism for hierarchical federated learning (HFL), which enables multiple model owners to obtain a crowdfunded model with high social utility for multiple crowdfunding participants and ensures the authenticity and trustworthiness of the crowdfunding process.

A Platform-Free Proof of Federated Learning Consensus Mechanism for Sustainable Blockchains

TLDR
By devising a novel block structure, new transaction types, and credit-based incentives, platform-free proof of federated learning (PF-PoFL) enables efficient and effective AI task outsourcing, federated mining, model evaluation, and reward distribution in a fully decentralized manner, while resisting spoofing and Sybil attacks.

Server-Side Local Gradient Averaging and Learning Rate Acceleration for Scalable Split Learning

TLDR
This work first identifies the fundamental bottlenecks of split learning (SL) and accordingly proposes a scalable SL framework, coined SGLR, which achieves higher accuracy than other baseline SL methods, including SplitFed, and is even on par with FL, which incurs higher energy and communication costs.

Tackling System and Statistical Heterogeneity for Federated Learning with Adaptive Client Sampling

TLDR
This paper designs an adaptive client sampling algorithm that tackles both system and statistical heterogeneity to minimize the wall-clock convergence time, and obtains a new tractable convergence bound for FL algorithms with arbitrary client sampling probabilities.

References

Showing 1–10 of 58 references

A Survey of Incentive Mechanism Design for Federated Learning

TLDR
This article surveys the incentive mechanism design for federated learning and presents a taxonomy of existing incentive mechanisms, which are subsequently discussed in depth by comparing and contrasting different approaches.

A Hybrid Approach to Privacy-Preserving Federated Learning

TLDR
This paper presents an alternative approach that utilizes both differential privacy and secure multiparty computation (SMC) to balance these trade-offs, reducing the growth of noise injection as the number of parties increases without sacrificing privacy, while maintaining a pre-defined rate of trust.

Incentive Mechanism for Reliable Federated Learning: A Joint Optimization Approach to Combining Reputation and Contract Theory

TLDR
This article introduces reputation as the metric to measure the reliability and trustworthiness of mobile devices, and designs a reputation-based worker-selection scheme for reliable federated learning using a multi-weight subjective logic model. It further leverages blockchain to achieve secure reputation management for workers, with non-repudiation and tamper-resistance properties.

Pain-FL: Personalized Privacy-Preserving Incentive for Federated Learning

TLDR
This paper proposes a contract-based personalized privacy-preserving incentive for FL, named Pain-FL, to provide customized payments for workers with different privacy preferences as compensation for privacy leakage cost while ensuring satisfactory convergence performance of FL models.

A Learning-Based Incentive Mechanism for Federated Learning

TLDR
Studies incentive mechanisms for federated learning that motivate edge nodes to contribute to model training, and designs a deep reinforcement learning (DRL)-based incentive mechanism to determine the optimal pricing strategy for the parameter server and the optimal training strategies for the edge nodes.

Toward an Automated Auction Framework for Wireless Federated Learning Services Market

TLDR
This paper proposes an auction-based market model for incentivizing data owners to participate in federated learning, and designs an approximately strategy-proof mechanism that guarantees truthfulness, individual rationality, and computational efficiency.

Hierarchical Incentive Mechanism Design for Federated Machine Learning in Mobile Networks

TLDR
Proposes a federated learning (FL)-based privacy-preserving approach to facilitate collaborative machine learning among multiple model owners in mobile crowdsensing and, accounting for the inherent hierarchical structure of the involved entities, proposes a hierarchical incentive mechanism framework.

Federated Machine Learning: Concept and Applications

TLDR
This work proposes building data networks among organizations based on federated mechanisms as an effective solution to allow knowledge to be shared without compromising user privacy.

A Sustainable Incentive Scheme for Federated Learning

TLDR
The FL incentivizer (FLI) dynamically divides a given budget in a context-aware manner among data owners in a federation by jointly maximizing the collective utility while minimizing inequality among the data owners, in terms of both the payoff received and the waiting time for receiving payoffs.

Provably Secure Federated Learning against Malicious Clients

TLDR
This work shows that the label predicted by the ensemble global model for a testing example is provably not affected by a bounded number of malicious clients, and demonstrates that the derived bound is tight.
...