• Corpus ID: 218581566

Efficient Privacy Preserving Edge Computing Framework for Image Classification

Omobayode Fagbohungbe, Sheikh Rufsan Reza, Xishuang Dong, Lijun Qian
In order to extract knowledge from the large amounts of data collected by edge devices, the traditional cloud-based approach, which requires uploading the data, may not be feasible due to communication bandwidth limitations as well as the privacy and security concerns of end users. To address these challenges, a novel privacy-preserving edge computing framework for image classification is proposed in this paper. Specifically, an autoencoder is trained in an unsupervised manner at each edge device individually, then the obtained latent… 
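The idea sketched in the abstract, training an autoencoder locally and uploading only the low-dimensional latent codes instead of the raw images, can be illustrated with a minimal example. This is a hypothetical sketch, not the paper's implementation: a tiny linear autoencoder trained with plain NumPy stands in for the convolutional model an actual edge deployment would use, and the data are random stand-in "images".

```python
import numpy as np

rng = np.random.default_rng(0)

def train_edge_autoencoder(images, latent_dim=8, epochs=200, lr=0.01):
    """Train a one-hidden-layer linear autoencoder by gradient descent
    on mean squared reconstruction error (illustrative only)."""
    n, d = images.shape
    W_enc = rng.normal(0, 0.1, size=(d, latent_dim))
    W_dec = rng.normal(0, 0.1, size=(latent_dim, d))
    for _ in range(epochs):
        z = images @ W_enc           # latent codes
        recon = z @ W_dec            # reconstruction
        err = recon - images
        # gradients of the mean squared error w.r.t. each weight matrix
        g_dec = z.T @ err / n
        g_enc = images.T @ (err @ W_dec.T) / n
        W_dec -= lr * g_dec
        W_enc -= lr * g_enc
    return W_enc, W_dec

# Stand-in local data: 64 flattened 4x4 grayscale patches on one device.
local_images = rng.random((64, 16))
W_enc, W_dec = train_edge_autoencoder(local_images)

# Only these latent codes would leave the device, not the raw pixels.
latents = local_images @ W_enc
print(latents.shape)  # (64, 8): each 16-pixel image compressed to 8 values
```

The privacy argument in the paper rests on the cloud receiving only these latent representations; the decoder weights stay on the device, so the uploaded codes are not directly interpretable as images.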
Trusted AI in Multi-agent Systems: An Overview of Privacy and Security for Distributed Learning
A survey of the emerging security and privacy risks of distributed ML from the unique perspective of information exchange levels, which are defined according to the key steps of an ML process, i.e., the levels of preprocessed data, learning models, and intermediate results.
OCTOPUS: Overcoming Performance and Privatization Bottlenecks in Distributed Learning
A new distributed/collaborative learning scheme is introduced that addresses communication overhead via latent compression, leveraging global data while privatizing local data without the additional cost of encryption or perturbation.
A Joint Energy and Latency Framework for Transfer Learning Over 5G Industrial Edge Networks
A transfer learning (TL)-enabled edge-CNN framework for 5G industrial edge networks with a privacy-preserving characteristic that can achieve almost 85% of the baseline's prediction accuracy by uploading only about 1% of the model parameters, at an autoencoder compression ratio of 32.


Privacy-preserving deep learning algorithm for big personal data analysis
Deep Leakage from Gradients
This work shows that it is possible to obtain the private training data from publicly shared gradients, names this leakage Deep Leakage from Gradients, and empirically validates the effectiveness of the attack on both computer vision and natural language processing tasks.
Differential Privacy Preservation in Deep Learning: Challenges, Opportunities and Solutions
The privacy attacks facing deep learning models are introduced and presented from three aspects: membership inference, training data extraction, and model extraction.
Federated Learning Of Out-Of-Vocabulary Words
We demonstrate that a character-level recurrent neural network is able to learn out-of-vocabulary (OOV) words under federated learning settings, for the purpose of expanding the vocabulary of a
Autoencoder - a new method for keeping data privacy when analyzing videos of patients with motor dysfunction (P4.001)
Coded frame vectors created with autoencoders are privacy-preserving vehicles for transmitting video frame data to non-medical collaborators, providing a level of security similar to conventional encryption, assuming the encoder and decoder are not shared by the encoding party.
Federated Machine Learning
This work introduces a comprehensive secure federated-learning framework, which includes horizontal federated learning, vertical federated learning, and federated transfer learning, and surveys existing work on the subject.
Multi-Objective Evolutionary Federated Learning
Experimental results indicate that the proposed optimization method is able to find optimized neural network models that not only significantly reduce communication costs but also improve the learning performance of federated learning compared with standard fully connected neural networks.
This paper uses federated learning in a commercial, global-scale setting to train, evaluate and deploy a model to improve virtual keyboard search suggestion quality without direct access to the underlying user data.
Federated Learning for Mobile Keyboard Prediction
The federation algorithm, which enables training on a higher-quality dataset for this use case, is shown to achieve better prediction recall, demonstrating the feasibility and benefit of training language models on client devices without exporting sensitive user data to servers.