Decentralized coded caching attains order-optimal memory-rate tradeoff

@inproceedings{MaddahAli2013DecentralizedCC,
  title={Decentralized coded caching attains order-optimal memory-rate tradeoff},
  author={Mohammad Ali Maddah-Ali and Urs Niesen},
  booktitle={2013 51st Annual Allerton Conference on Communication, Control, and Computing (Allerton)},
  year={2013},
  pages={421--427}
}
  • M. Maddah-Ali, Urs Niesen
  • Published 24 January 2013
  • Computer Science
  • 2013 51st Annual Allerton Conference on Communication, Control, and Computing (Allerton)
Replicating or caching popular content in memories distributed across the network is a technique to reduce peak network loads. […] In other words, no coordination is required for the content placement. Despite this lack of coordination, the proposed scheme is nevertheless able to create simultaneous coded-multicasting opportunities, and hence achieves a rate close to that of the centralized scheme.
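For intuition, the following minimal sketch (in Python) illustrates the two phases the abstract refers to: an uncoordinated placement phase in which each user independently caches a random fraction of every file, and a coded delivery phase that XORs file segments useful to several users at once. The function names, the bit-level data layout, and the random seed are illustrative choices, not taken from the paper.

import itertools
import random

def decentralized_placement(num_users, files, mem_fraction, seed=0):
    # Placement phase: each user independently caches a random mem_fraction
    # of the bits of every file; no coordination between users is needed.
    rng = random.Random(seed)
    caches = []
    for _ in range(num_users):
        caches.append({
            name: set(rng.sample(range(len(bits)),
                                 int(mem_fraction * len(bits))))
            for name, bits in files.items()
        })
    return caches

def coded_delivery(demands, files, caches):
    # Delivery phase: for every subset S of users (largest first), XOR the
    # segments that are requested by one user in S, cached at every other
    # user in S, and cached nowhere else.  Each user in S can cancel the
    # other users' segments using its own cache and recover its missing bits.
    num_users = len(demands)
    transmissions = []
    for size in range(num_users, 0, -1):
        for subset in itertools.combinations(range(num_users), size):
            payload = []
            for u in subset:
                f = demands[u]                    # file requested by user u
                idx = set(range(len(files[f])))
                for v in range(num_users):
                    if v in subset and v != u:
                        idx &= caches[v][f]       # cached at every other user in S
                    else:
                        idx -= caches[v][f]       # ... and at no one else
                payload.append([files[f][i] for i in sorted(idx)])
            width = max(len(seg) for seg in payload)
            if width == 0:
                continue
            coded = [0] * width
            for seg in payload:                   # zero-pad and XOR the segments
                for i, bit in enumerate(seg):
                    coded[i] ^= bit
            transmissions.append((subset, coded))
    return transmissions

As a quick sanity check, one can generate, say, N = 4 random 1024-bit files, run decentralized_placement with mem_fraction = 1/2 for K = 4 users, and compare the total length of the returned transmissions with the file size: for large files it approaches K(1 - M/N) * (N/(KM)) * (1 - (1 - M/N)^K), the decentralized rate that the paper shows to be within a constant factor of both the centralized scheme and the information-theoretic optimum.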

Citations

Optimal decentralized coded caching for heterogeneous files
TLDR
A novel optimization strategy for coded caching is proposed that minimizes the worst-case transmission rate of multicasting the coded content upon users' requests, subject to the storage constraint at the local caches, by optimally allocating the caching proportion among heterogeneous files.
Hierarchical Coded Caching
TLDR
A new caching scheme that combines two basic approaches to provide coded multicasting opportunities within each layer and across multiple layers is proposed, which achieves the optimal communication rates to within a constant multiplicative and additive gap.
Fundamental limits of caching
TLDR
This paper proposes a novel caching approach that can achieve a significantly larger reduction in peak rate compared to previously known caching schemes, and argues that the performance of the proposed scheme is within a constant factor from the information-theoretic optimum for all values of the problem parameters.
An Efficient Fair Content Delivery Scheme for Coded Caching
TLDR
This work proposes a low-complexity gradient-based scheduling algorithm that exploits the multicast opportunities offered by coded caching, while keeping the number of multicast groups linear in the number of users.
Efficient Algorithms for Coded Multicasting in Heterogeneous Caching Networks
TLDR
This paper extends the asymptotic analysis of shared-link caching networks to heterogeneous network settings, and presents novel coded multicast schemes, based on local graph coloring, that exhibit polynomial-time complexity in all the system parameters, while preserving the asymptotically proven multiplicative caching gain even for finite file packetization.
Hierarchical coded caching
TLDR
A new caching scheme that combines two basic approaches is proposed that achieves the optimal communication rates to within a constant multiplicative and additive gap and shows that there is no tension between the rates in each of the two layers up to the aforementioned gap.
Decentralized Caching and Coded Delivery With Distinct Cache Capacities
TLDR
A group-based decentralized caching and coded delivery scheme is proposed, and it is shown to improve upon the state of the art in terms of the minimum required delivery rate when there are more users in the system than files.
Correlation-aware distributed caching and coded delivery
TLDR
It is shown how joint file compression during the caching and delivery phases can provide load reductions that go beyond those achieved with existing schemes, through a lower bound on the fundamental rate-memory trade-off and a correlation-aware achievable scheme.
Updating Content in Cache-Aided Coded Multicast
TLDR
This work presents a novel scheme that shows how the caches can be advantageously used to decrease the overall cost of multicast, even though the source encodes without access to past data.
On Caching with More Users than Files
TLDR
The proposed delivery method is proved to be optimal under the constraint of uncoded placement for centralized systems with two files; moreover it is shown to outperform known caching strategies for both centralized and decentralized systems.

References

SHOWING 1-10 OF 27 REFERENCES
Decentralized Caching Attains Order-Optimal Memory-Rate Tradeoff
TLDR
This paper proposes an efficient caching scheme in which the content placement is performed in a decentralized manner, and which nevertheless achieves a rate close to that of the centralized scheme.
Fundamental limits of caching
TLDR
This paper proposes a novel caching approach that can achieve a significantly larger reduction in peak rate compared to previously known caching schemes, and argues that the performance of the proposed scheme is within a constant factor from the information-theoretic optimum for all values of the problem parameters.
Hierarchical coded caching
TLDR
A new caching scheme that combines two basic approaches is proposed that achieves the optimal communication rates to within a constant multiplicative and additive gap and shows that there is no tension between the rates in each of the two layers up to the aforementioned gap.
Distributed Caching Algorithms for Content Distribution Networks
TLDR
This paper develops light-weight cooperative cache management algorithms aimed at maximizing the traffic volume served from cache and minimizing the bandwidth cost, and establishes that the performance of the proposed algorithms is guaranteed to be within a constant factor from the globally optimal performance.
Placement Algorithms for Hierarchical Cooperative Caching
TLDR
The main result is a simple constant-factor approximation algorithm for the hierarchical placement problem that admits an efficient distributed implementation, in contrast to an exact algorithm that does not appear to be practical for large problem sizes.
Web caching using access statistics
TLDR
This work considers the problem of caching web pages with the objective of minimizing latency of access, and presents a constant factor approximation to the optimum average latency while exceeding capacity constraints by a logarithmic factor.
Dynamic batching policies for an on-demand video server
TLDR
It is shown that a first come, first served (FCFS) policy that schedules the video with the longest outstanding request can perform better than the maximum queue length (MQL) policy, and multicasting is better exploited by scheduling playback of the most popular videos at predetermined, regular intervals (hence, termed FCFS-n).
Coding on demand by an informed source (ISCOD) for efficient broadcast of different supplemental data to caching clients
  • Y. Birk, T. Kol
  • Computer Science
    IEEE Transactions on Information Theory
  • 2006
TLDR
The Informed-Source Coding On Demand (ISCOD) approach for efficiently supplying nonidentical data from a central server to multiple caching clients over a broadcast channel is presented; k-partial cliques in a directed graph are defined and ISCOD is cast in terms of partial-clique covers.
The Use of Multicast Delivery to Provide a Scalable and Interactive Video-on-Demand Service
TLDR
This work considers a VoD system that uses multicast delivery to serve multiple customers with a single set of resources, and describes a framework and mechanisms by which interactive functions can be incorporated into such a multicast-delivery VoD system.
Network information flow
TLDR
This work reveals that it is in general not optimal to regard the information to be multicast as a "fluid" which can simply be routed or replicated, and that by employing coding at the nodes, referred to as network coding, bandwidth can in general be saved.