Counterintuitive Characteristics of Optimal Distributed LRU Caching Over Unreliable Channels

@article{Quan2019CounterintuitiveCO,
  title={Counterintuitive Characteristics of Optimal Distributed LRU Caching Over Unreliable Channels},
  author={Guocong Quan and Jian Tan and Atilla Eryilmaz},
  journal={IEEE INFOCOM 2019 - IEEE Conference on Computer Communications},
  year={2019},
  pages={694-702}
}
Least-recently-used (LRU) caching and its variants have conventionally been used as a fundamental and critical method to ensure fast and efficient data access in computer and communication systems. Emerging data-intensive applications over unreliable channels, e.g., mobile edge computing and wireless content delivery networks, have imposed new challenges in optimizing LRU caching systems in environments prone to failures. Most existing studies focus on reliable channels, e.g., on wired Web… 
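
As background for the result summaries below, a minimal sketch of the LRU policy itself may help (Python; the class name, interface, and capacity handling are illustrative, not from the paper):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal least-recently-used cache: on a hit the item is moved to
    the most-recently-used end; on a miss with a full cache, the
    least-recently-used item is evicted."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None  # miss
        self.store.move_to_end(key)  # promote to most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the LRU item
```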

Citations

Counterintuitive Characteristics of Optimal Distributed LRU Caching Over Unreliable Channels
TLDR
It is proved that splitting the total cache space into separate LRU caches can achieve a lower asymptotic miss probability than organizing the total space in a single LRU cache, and an interesting phenomenon is discovered: allocating the cache space unequally can achieve better performance, even when channel reliability levels are equal.
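
The splitting result can be probed with a toy Monte Carlo sketch. Everything below is an assumption for illustration — the Zipf popularity, the catalog and cache sizes, the hash-based assignment of items to caches, and the crude model in which a delivery failure on a hit counts as a miss — and it is not the paper's analytical model.

```python
import random
from collections import OrderedDict

class LRU:
    def __init__(self, cap):
        self.cap, self.d = cap, OrderedDict()

    def request(self, key):
        """True on a hit; on a miss, insert and evict the LRU item if full."""
        if key in self.d:
            self.d.move_to_end(key)
            return True
        self.d[key] = True
        if len(self.d) > self.cap:
            self.d.popitem(last=False)
        return False

N, ALPHA, CAP, T = 5_000, 0.8, 200, 200_000           # hypothetical parameters
items = range(N)
weights = [1.0 / (k + 1) ** ALPHA for k in range(N)]  # Zipf-like popularity

def simulate(caps, p_ok):
    """Effective miss ratio when items are hashed across len(caps) LRU caches
    and a hit on cache i is delivered successfully only with probability
    p_ok[i]; a failed delivery is counted as a miss (a crude stand-in for
    the paper's unreliable-channel model)."""
    caches = [LRU(c) for c in caps]
    misses = 0
    for item in random.choices(items, weights=weights, k=T):
        i = item % len(caps)
        hit = caches[i].request(item)
        if not (hit and random.random() < p_ok[i]):
            misses += 1
    return misses / T

print("pooled :", simulate([CAP], [0.9]))
print("equal  :", simulate([CAP // 2, CAP // 2], [0.9, 0.9]))
print("unequal:", simulate([2 * CAP // 3, CAP // 3], [0.9, 0.9]))
```

Whether splitting or unequal allocation wins in this toy depends on the chosen parameters; the paper establishes the corresponding regimes analytically.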
A New Flexible Multi-flow LRU Cache Management Paradigm for Minimizing Misses
TLDR
It is proved that I-PLRU outperforms PLRU and achieves the same miss probability as the optimal SLRU for stationary workloads, and an equivalence mapping is utilized to efficiently find the optimal I-PLRU configuration.
A New Flexible Multi-flow LRU Cache Management Paradigm for Minimizing Misses
TLDR
I-PLRU is proposed, a new insertion based pooled LRU paradigm where the data flows can be inserted at different positions of a pooled cache, that outperforms PLRU and achieves the same miss probability as the optimal SLRU under a stationary request arrival process.
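
A guess at the I-PLRU mechanism described in these summaries, sketched in Python: flows share one pooled list, but each flow's misses are inserted at a flow-specific depth rather than at the front. The class name, the insert_depth parameter, and the list-based implementation are assumptions, not the authors' design.

```python
class IPLRU:
    """Toy insertion-based pooled LRU: all flows share one list, but a
    flow-f miss inserts the new item at depth insert_depth[f] from the
    MRU end instead of at the front. Hits are promoted to the front.
    A sketch of the idea only, not the authors' code."""

    def __init__(self, capacity, insert_depth):
        self.capacity = capacity
        self.depth = insert_depth   # dict: flow -> insertion depth
        self.items = []             # index 0 = most recently used

    def request(self, flow, key):
        if key in self.items:
            self.items.remove(key)
            self.items.insert(0, key)       # hit: promote to MRU
            return True
        pos = min(self.depth[flow], len(self.items))
        self.items.insert(pos, key)         # miss: insert at the flow's depth
        if len(self.items) > self.capacity:
            self.items.pop()                # evict from the LRU end; a very
                                            # deep insertion may be evicted
                                            # immediately when the cache is full
        return False
```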
Caching Policies over Unreliable Channels
TLDR
Analytical results show that joint use of the two policies outperforms LRU, while LFU outperforms all these policies whenever resource pooling is not optimal, and empirical results with larger caches show that simple alternative policies, such as LFU, provide superior performance compared to LRU even if the space allocation is not fine-tuned.
Joint optimization of cache placement and request routing in unreliable networks
Joint Service Scheduling and Content Caching Over Unreliable Channels
TLDR
This paper proposes a maximal reward priority (MRP) policy to serve user requests and a collaborative multi-agent actor-critic (CMA-AC) policy to update the local cache, and results show that the proposed MRP policy outperforms the shortest distance priority (SDP) policy.
Latency-Redundancy Tradeoff in Distributed Read-Write Systems
TLDR
This work quantifies the tradeoff between read and write latency as a function of redundancy, provides a closed-form approximation when request arrivals are Poisson and service is memoryless under prioritized reads, and empirically shows that this approximation is tight across all ranges of system parameters.
Cooperative Edge Caching in Small Cell Networks with Heterogeneous Channel Qualities
TLDR
A Bayes-based learning algorithm is proposed that learns the popularity profile by sampling from a Beta distribution at each time period and then optimizes the content placement by caching contents with a higher popularity/size ratio in SBSs with better channel qualities.
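
The Beta-sampling idea resembles Thompson sampling; a minimal sketch under that reading follows. The estimator class, its uniform priors, the update rule, and the greedy placement by popularity/size ratio are illustrative assumptions, not the authors' algorithm.

```python
import random

class BetaPopularityEstimator:
    """Per-content Beta(a, b) posterior over request probability; each
    period, sample a popularity estimate and greedily cache items by
    (sampled popularity / size). A sketch of the Beta-sampling idea only."""

    def __init__(self, contents, sizes):
        self.ab = {c: [1.0, 1.0] for c in contents}  # uniform Beta(1, 1) priors
        self.sizes = sizes

    def choose_placement(self, cache_size):
        sampled = {c: random.betavariate(a, b) for c, (a, b) in self.ab.items()}
        ranked = sorted(sampled, key=lambda c: sampled[c] / self.sizes[c],
                        reverse=True)
        placed, used = [], 0
        for c in ranked:                       # greedy knapsack by ratio
            if used + self.sizes[c] <= cache_size:
                placed.append(c)
                used += self.sizes[c]
        return placed

    def update(self, content, requested):
        a_b = self.ab[content]
        a_b[0 if requested else 1] += 1        # Beta posterior update
```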

References

SHOWING 1-10 OF 33 REFERENCES
Counterintuitive Characteristics of Optimal Distributed LRU Caching Over Unreliable Channels
TLDR
It is proved that splitting the total cache space into separate LRU caches can achieve a lower asymptotic miss probability than organizing the total space in a single LRU cache, and an interesting phenomenon is discovered: allocating the cache space unequally can achieve better performance, even when channel reliability levels are equal.
DR-Cache: Distributed Resilient Caching with Latency Guarantees (Jian Li, T. K. Phan, M. Rio; IEEE INFOCOM 2018 - IEEE Conference on Computer Communications)
TLDR
A distributed resilient caching algorithm (DR-Cache) that is simple and adaptive to network failures is proposed, and it is shown numerically that DR-Cache significantly outperforms other candidate algorithms on synthetic requests as well as real-world traces over a class of network topologies.
On Resource Pooling and Separation for LRU Caching
TLDR
This paper characterizes the performance of multiple flows of data item requests under resource pooling and separation for LRU caching when the cache size is large, and derives the asymptotic miss probabilities of multiple flows of requests with varying data item sizes in a shared LRU cache space.
Asymptotic Miss Ratio of LRU Caching with Consistent Hashing
TLDR
The asymptotic miss ratio of data item requests on an LRU cluster with consistent hashing is derived, and it is shown that these individual cache spaces on different servers can be effectively viewed as if they could be pooled together to form a single virtual LRU cache space parametrized by an appropriate cache size.
LRU Caching with Moderately Heavy Request Distributions
TLDR
The main result of this paper shows that the ratio between the cache fault probabilities of the LRU heuristic and the optimal static algorithm is, for large caches, equal to $e^{\gamma} \approx 1.78$, where $\gamma$ is Euler's constant.
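
To make the constant concrete, $e^{\gamma}$ can be evaluated directly; NumPy exposes the Euler-Mascheroni constant:

```python
import numpy as np

# e^gamma, where gamma ~ 0.5772 is the Euler-Mascheroni constant: the
# large-cache ratio between LRU's fault probability and that of the
# optimal static algorithm, per the result summarized above.
print(np.exp(np.euler_gamma))  # ~1.7811
```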
LRU Caching with Dependent Competing Requests
TLDR
This work derives the asymptotic miss ratios of multiple flows for a large class of truncated heavy-tailed data item popularity distributions with time dependency, and significantly improves the accuracy of numerical computations when the index of a Zipf distribution is close to one.
Adaptive Caching Networks With Optimality Guarantees
TLDR
This work proposes a distributed, adaptive algorithm that performs stochastic gradient ascent on a concave relaxation of the expected caching gain, and constructs a probabilistic content placement within a $1-1/e$ factor from the optimal, in expectation.
Consistent hashing and random trees: distributed caching protocols for relieving hot spots on the World Wide Web
TLDR
A family of caching protocols for distributed networks that can be used to decrease or eliminate the occurrence of hot spots in the network, based on a special kind of hashing called consistent hashing.
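
A minimal consistent-hashing sketch (Python; the MD5 hash and the virtual-node count are illustrative choices, not from the paper): servers are hashed onto a ring and a key maps to the first server clockwise, so adding or removing a server only remaps keys adjacent to its points.

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent-hashing ring: servers are hashed (with virtual
    nodes) onto a ring, and a key maps to the first server clockwise."""

    def __init__(self, servers, vnodes=100):
        self.ring = []          # sorted list of (point, server)
        for s in servers:
            for v in range(vnodes):
                self.ring.append((self._h(f"{s}#{v}"), s))
        self.ring.sort()
        self.points = [p for p, _ in self.ring]

    @staticmethod
    def _h(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def lookup(self, key):
        i = bisect.bisect(self.points, self._h(key)) % len(self.ring)
        return self.ring[i][1]

ring = HashRing(["cache-a", "cache-b", "cache-c"])
print(ring.lookup("object-42"))  # key -> responsible cache server
```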
AdaptSize: Orchestrating the Hot Object Memory Cache in a Content Delivery Network
TLDR
The proposed AdaptSize is the first adaptive, size-aware cache admission policy for hot object caches (HOCs) that achieves a high object hit ratio (OHR), even when object size distributions and request characteristics vary significantly over time, and it is more robust to changing request patterns than the traditional tuning approach.
On the complexity of optimal routing and content caching in heterogeneous networks
TLDR
This work investigates the problem of optimal request routing and content caching in a heterogeneous network supporting in-network content caching with the goal of minimizing average content access delay, and proves that under the congestion-insensitive model the problem can be solved optimally in polynomial time.