Towards Fair and Privacy-Preserving Federated Deep Models

@article{Lyu2020TowardsFA,
  title={Towards Fair and Privacy-Preserving Federated Deep Models},
  author={Lingjuan Lyu and Jiangshan Yu and Karthik Nandakumar and Yitong Li and Xingjun Ma and Jiong Jin and Han Yu and Kee Siong Ng},
  journal={IEEE Transactions on Parallel and Distributed Systems},
  year={2020},
  volume={31},
  pages={2524--2541}
}
  • The current standalone deep learning framework tends to result in overfitting and low utility. This problem can be addressed either by a centralized framework that deploys a central server to train a global model on the joint data from all parties, or by a distributed framework that leverages a parameter server to aggregate local model updates. However, server-based solutions are prone to a single point of failure. In this respect, collaborative learning frameworks, such as federated…
