Privacy-Preserving Coded Mobile Edge Computing for Low-Latency Distributed Inference

@article{Schlegel2021PrivacyPreservingCM,
  title={Privacy-Preserving Coded Mobile Edge Computing for Low-Latency Distributed Inference},
  author={Reent Schlegel and Siddhartha Kumar and Eirik Rosnes and Alexandre Graell i Amat},
  journal={IEEE Journal on Selected Areas in Communications},
  year={2021},
  volume={40},
  pages={788-799}
}
We consider a mobile edge computing scenario where a number of devices want to perform a linear inference $Wx$ on some local data $x$ given a network-side matrix $W$. The computation is performed at the network edge over a number of edge servers. We propose a coding…
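
As a minimal sketch of the setting only (not the paper's proposed scheme, which also addresses privacy), the following Python/NumPy example splits the network-side matrix W row-wise over three hypothetical edge servers with one parity block, so that the product Wx is recoverable from any two responses and one straggling server can be ignored:

import numpy as np

# Toy (3,2) parity code over the rows of the network-side matrix W.
# Each edge server holds one coded block and returns (block @ x) for the
# device's data x; any two of the three responses recover W @ x, so a
# single straggler can be tolerated.
rng = np.random.default_rng(0)
m, n = 4, 6                       # number of rows must be even for this split
W = rng.standard_normal((m, n))   # network-side matrix
x = rng.standard_normal(n)        # device's local data

W1, W2 = W[: m // 2], W[m // 2:]
servers = {"s1": W1, "s2": W2, "s3": W1 + W2}   # coded storage at the edge

# Suppose server "s2" straggles and never answers.
responses = {k: B @ x for k, B in servers.items() if k != "s2"}

top = responses["s1"]                       # W1 @ x directly
bottom = responses["s3"] - responses["s1"]  # (W1 + W2) @ x - W1 @ x = W2 @ x
assert np.allclose(np.concatenate([top, bottom]), W @ x)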

Citations

Privacy-Preserving Edge Caching: A Probabilistic Approach

A chunk-based joint probabilistic caching (JPC) approach is employed to mislead an adversary eavesdropping on the communication inside an EC and to maximize the adversary's error in estimating the requested file and the requesting cache.

Privacy Preservation Among Honest-but-Curious Edge Nodes: A Survey

The concepts of user privacy and edge computing are introduced and a state-of-the-art overview of current literature as it relates to privacy preservation in honest-but-curious edge computing is provided.

Privacy-Preserving Task Offloading Strategies in MEC

Simulated experimental results demonstrate that this scheme is effective in protecting the location privacy and association privacy of mobile devices and in reducing the average completion time of tasks compared with state-of-the-art techniques.

CodedPaddedFL and CodedSecAgg: Straggler Mitigation and Secure Aggregation in Federated Learning

Two novel federated learning schemes are presented that mitigate the effect of straggling devices by introducing redundancy on the devices' data across the network, providing straggler resiliency and robustness against model inversion attacks.

Coding for Straggler Mitigation in Federated Learning

We present a novel coded federated learning (FL) scheme for linear regression that mitigates the effect of straggling devices while retaining the privacy level of conventional FL. The proposed scheme

Dynamic Scanning Desensitization of Sensitive Data Based on Low Code Modeling Language Technology

  • Anni Huang, Chunzhi Meng, Jiacheng Fu, Junbing Pan, Miaoru Su
  • Computer Science
    2022 International Conference on Knowledge Engineering and Communication Systems (ICKES)
  • 2022
This paper studies a method for dynamic scanning desensitization of sensitive data based on low-code modeling language technology and uses low-code technology to build a data desensitization system, which meets users' needs and performs well.

References


“Short-Dot”: Computing Large Linear Transforms Distributedly Using Coded Short Dot Products

The key novelty of this work is that, in the regime where the number of available processing nodes exceeds the total number of dot products, Short-Dot has a lower expected computation time under straggling (modeled as exponential) than existing strategies.

Private Edge Computing for Linear Inference Based on Secret Sharing

An edge computing scenario where users want to perform a linear computation on local, private data and a network-wide, public matrix is considered, and a scheme that guarantees information-theoretic user data privacy against an eavesdropper with access to a number of edge servers or their corresponding communication links is provided.
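
As a hedged illustration of the secret-sharing idea (written over the reals for readability; the information-theoretic privacy guarantee in the paper requires sharing over a finite field), the sketch below gives each edge server only a random additive share of the private data x, yet the sum of the servers' answers equals Wx by linearity:

import numpy as np

rng = np.random.default_rng(1)
m, n, num_servers = 3, 5, 4
W = rng.standard_normal((m, n))   # public, network-side matrix
x = rng.standard_normal(n)        # private user data

# Additive secret sharing: the shares are random and sum to x, so any
# proper subset of servers sees only noise-like vectors.
shares = [rng.standard_normal(n) for _ in range(num_servers - 1)]
shares.append(x - sum(shares))

# Each server computes W @ share on its own share only.
partial_results = [W @ s for s in shares]

# The user adds the answers; by linearity the sum equals W @ x.
assert np.allclose(sum(partial_results), W @ x)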

Speeding Up Distributed Machine Learning Using Codes

This paper focuses on two of the most basic building blocks of distributed learning algorithms: matrix multiplication and data shuffling, and uses codes to reduce communication bottlenecks, exploiting the excess in storage.
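
A sketch in the same spirit, using a random Gaussian encoding matrix as a stand-in for the MDS codes analyzed in the paper (any k rows of such a matrix are invertible with probability 1): the row blocks of A are mixed into coded blocks, and any k of the worker responses suffice to decode Ax, so the slowest workers can be ignored:

import numpy as np

rng = np.random.default_rng(2)
k, n_servers = 3, 5               # k data blocks encoded onto n_servers workers
m, n = 6, 4                       # m must be divisible by k
A = rng.standard_normal((m, n))
x = rng.standard_normal(n)

blocks = np.split(A, k)                         # row blocks A1, ..., Ak
G = rng.standard_normal((n_servers, k))         # random (MDS-like) encoding matrix
coded = [sum(G[i, j] * blocks[j] for j in range(k)) for i in range(n_servers)]

# Workers return coded_i @ x; suppose only servers 0, 2 and 4 respond in time.
fast = [0, 2, 4]
Y = np.stack([coded[i] @ x for i in fast])      # the k received responses

# Decoding: any k responses determine the k uncoded partial products.
Z = np.linalg.solve(G[fast], Y)                 # rows are A1 @ x, ..., Ak @ x
assert np.allclose(Z.reshape(-1), A @ x)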

Minimizing Latency for Secure Coded Computing Using Secret Sharing via Staircase Codes

A solution based on new codes, called Staircase codes, which universally achieve the information-theoretic limit on the download cost at the master, leading to latency reduction; the scheme is validated with extensive implementation on Amazon EC2.

Lagrange Coded Computing: Optimal Design for Resiliency, Security and Privacy

The optimality of LCC is proved by showing that it achieves the optimal tradeoff between resiliency, security, and privacy; LCC speeds up the conventional uncoded implementation of distributed least-squares linear regression by up to $13.43\times$ and achieves a further speedup over the state-of-the-art straggler mitigation strategies.
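
A real-valued toy version of the Lagrange-encoding idea is sketched below (the actual LCC construction operates over a finite field and adds random evaluation points to obtain privacy, which this sketch omits): the data blocks are embedded in a Lagrange polynomial, each worker applies the polynomial function f only to its coded block, and the desired results are recovered by interpolation:

import numpy as np

def lag(points, j, z):
    # j-th Lagrange basis polynomial over `points`, evaluated at z
    return np.prod([(z - p) / (points[j] - p)
                    for i, p in enumerate(points) if i != j])

rng = np.random.default_rng(3)
X1, X2 = rng.standard_normal((2, 3, 2))       # two data blocks
f = lambda X: X.T @ X                         # degree-2 polynomial of the data

betas = [0.0, 1.0]        # encoding points: u(beta_1) = X1, u(beta_2) = X2
alphas = [2.0, 3.0, 4.0]  # one evaluation point per worker; deg(f)*(k-1) + 1 = 3

encode = lambda z: X1 * lag(betas, 0, z) + X2 * lag(betas, 1, z)
worker_inputs = [encode(a) for a in alphas]   # coded blocks sent to the workers
worker_outputs = [f(U) for U in worker_inputs]

# f(u(z)) has degree 2 in z, so 3 evaluations determine it; interpolating back
# at beta_1 recovers f(X1) (and at beta_2 recovers f(X2)).
f_X1 = sum(F * lag(alphas, i, betas[0]) for i, F in enumerate(worker_outputs))
assert np.allclose(f_X1, f(X1))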

A Unified Coding Framework for Distributed Computing with Straggling Servers

An information-theoretic lower bound on the latency-load tradeoff is proved, which is shown to be within a constant multiplicative gap from the achieved tradeoff at the two end points.

PRAC: private and rateless adaptive coded computation at the edge

A private and rateless adaptive coded computation (PRAC) algorithm is developed by taking into account the privacy requirements of IoBT applications and devices, and the heterogeneous and time-varying resources of edge devices, showing that PRAC outperforms known secure coded computing methods when resources are heterogeneous.

Exploiting Computation Replication for Mobile Edge Computing: A Fundamental Computation-Communication Tradeoff Study

This paper exploits the idea of computation replication in MEC networks to speed up the downloading phase and characterize asymptotically an order-optimal upload-download communication latency pair for a given computation load in a multi-user multi-server MEC network.

Rateless Codes for Near-Perfect Load Balancing in Distributed Matrix-Vector Multiplication

This paper proposes a rateless fountain coding strategy that achieves the best of both worlds: its latency is proved to be asymptotically equal to that of ideal load balancing, and it performs asymptotically zero redundant computations.
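
A hedged sketch of the rateless idea: the master keeps generating random coded rows of A and collecting the corresponding dot products until they determine Ax. A real fountain-code design uses a carefully chosen degree distribution and a linear-time peeling decoder; this sketch simply solves the resulting linear system:

import numpy as np

rng = np.random.default_rng(4)
m, n = 8, 5
A = rng.standard_normal((m, n))
x = rng.standard_normal(n)
y = A @ x                                    # ground truth, for the final check only

G_rows, responses = [], []
while True:
    g = (rng.random(m) < 0.4).astype(float)  # random sparse combination of rows
    if not g.any():
        continue
    G_rows.append(g)
    responses.append((g @ A) @ x)            # one coded dot product from a worker
    G = np.array(G_rows)
    # Stop as soon as the received equations determine y = A @ x.
    if len(G_rows) >= m and np.linalg.matrix_rank(G) == m:
        break

y_hat, *_ = np.linalg.lstsq(G, np.array(responses), rcond=None)
assert np.allclose(y_hat, y)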

Coded Computing and Cooperative Transmission for Wireless Distributed Matrix Multiplication

This paper aims to investigate the interplay among upload, computation, and download latencies during the offloading process in the high signal-to-noise ratio regime from an information-theoretic perspective and proposes a policy based on cascaded coded computing and coordinated and cooperative interference management in uplink and downlink.