Corpus ID: 244729332

Privacy-Preserving Serverless Edge Learning with Decentralized Small Data

Shih-Chun Lin and Chia-Hung Lin
In the last decade, data-driven algorithms have outperformed traditional optimization-based algorithms in many research areas, such as computer vision and natural language processing. However, extensive data usage brings a new challenge, or even threat, to deep learning algorithms: privacy preservation. Distributed training strategies have recently become a promising approach to ensure data privacy when training deep models. This paper extends conventional serverless platforms with serverless… 
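As background for the distributed training strategies the abstract refers to, the sketch below shows federated averaging, one common such strategy: each client takes a gradient step on its private data and shares only model parameters with the server, never raw data. This is an illustrative toy (least-squares model, synthetic data, hypothetical function names), not the paper's actual method.

```python
import numpy as np

def local_update(weights, data, lr=0.1):
    """One local gradient step on a client's private data (least squares)."""
    X, y = data
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_average(weight_list, sizes):
    """Server aggregates client models, weighted by local dataset size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(weight_list, sizes))

# Two clients with private synthetic datasets drawn from the same model.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = []
for n in (30, 50):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(200):  # communication rounds: local steps, then aggregation
    local_models = [local_update(w, d) for d in clients]
    w = federated_average(local_models, [len(d[1]) for d in clients])
```

After enough rounds, the global model `w` converges toward `true_w`, even though the server never sees either client's data.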


Communication-Efficient Distributed Deep Learning: A Comprehensive Survey

A comprehensive survey of communication-efficient distributed training algorithms, covering both system-level and algorithmic-level optimizations, which helps readers understand which algorithms are more efficient in specific distributed environments and extrapolate potential directions for further optimization.

A Quantitative Survey of Communication Optimizations in Distributed Deep Learning

It is shown that DL models with low model intensity are difficult to scale out even with the best available lossless algorithm over 100 Gb/s InfiniBand, and that the system architecture and scheduling algorithms have a critical impact on scaling behavior.

Accelerating Deep Learning Systems via Critical Set Identification and Model Compression

ClipDL is proposed, which accelerates deep learning systems by simultaneously decreasing the number of model parameters and restricting computation to critical data only; it is implemented on Spark and BigDL.

When Machine Learning Meets Privacy: A Survey and Outlook

The state of the art in privacy issues and solutions for machine learning is surveyed and future research directions in this field are pointed out.

When Serverless Computing Meets Edge Computing: Architecture, Challenges, and Open Issues

This article proposes the network architecture and layered structure of serverless edge computing networks from a networking perspective, and presents the communication process as well as the implementation and deployment.

TULVCAN: Terahertz Ultra-broadband Learning Vehicular Channel-Aware Networking

This THz Ultra-broadband Learning Vehicular Channel-Aware Networking (TULVCAN) work successfully achieves effective THz spectrum learning and hence allows frequency-agile access.

Accelerating TensorFlow with Adaptive RDMA-Based gRPC

This paper proposes a unified approach with a single gRPC runtime in TensorFlow using adaptive and efficient RDMA protocols, and proposes designs such as hybrid communication protocols, message pipelining and coalescing, and zero-copy transmission to make the runtime adaptive to different message sizes for deep learning workloads.

Machine Learning and Deep Learning Based Traffic Classification and Prediction in Software Defined Networking

This paper reviews existing proposals for using ML in an SDN context for traffic measurement (specifically, classification) and traffic prediction, and highlights approaches that use deep learning for traffic prediction, which seems to have been mostly untapped by existing surveys.

TensorFlow: A system for large-scale machine learning

The TensorFlow dataflow model is described, and the compelling performance that TensorFlow achieves for several real-world applications is demonstrated.

Generalizing from a Few Examples: A Survey on Few-Shot Learning

A thorough survey to fully understand Few-Shot Learning (FSL), categorizing FSL methods from three perspectives: data, which uses prior knowledge to augment the supervised experience; model, which uses prior knowledge to reduce the size of the hypothesis space; and algorithm, which uses prior knowledge to alter the search for the best hypothesis in the given hypothesis space.