Competitive Algorithms for Block-Aware Caching

@inproceedings{Coester2022CompetitiveAF,
  title={Competitive Algorithms for Block-Aware Caching},
  author={Christian Coester and Roie Levin and Joseph Naor and Ohad Talmon},
  booktitle={Proceedings of the 34th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA)},
  year={2022}
}
Motivated by the design of real system storage hierarchies, we study the block-aware caching problem, a generalization of classic caching in which fetching (or evicting) pages from the same block incurs the same cost as fetching (or evicting) just one page from the block. Given a cache of size k and a sequence of requests to n pages partitioned into given blocks of size β ≤ k, the goal is to minimize the total cost of fetching pages into (or evicting pages from) the cache. This problem captures generalized… 
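The cost model from the abstract can be sketched in a few lines of code. This is an illustrative sketch only, not from the paper: the function name, the dictionary-based block assignment, and the idea of charging per distinct block touched in one operation are assumptions made for clarity.

```python
# Illustrative sketch of the block-aware caching cost model (names and
# structure are hypothetical, not taken from the paper).

def block_cost(pages_moved, block_of):
    """Cost of fetching (or evicting) a set of pages in one operation.

    Moving several pages of the same block costs the same as moving a
    single page of that block, so the cost is simply the number of
    distinct blocks touched.
    """
    return len({block_of[p] for p in pages_moved})

# Pages 0..5 partitioned into blocks of size beta = 2:
# block 0 = {0, 1}, block 1 = {2, 3}, block 2 = {4, 5}.
block_of = {p: p // 2 for p in range(6)}

# Pages 0 and 1 share a block, so this fetch touches 2 blocks.
print(block_cost([0, 1, 4], block_of))  # → 2
```

In classic caching every page is its own block (β = 1), so `block_cost` degenerates to counting pages; the generalization lies entirely in letting β > 1.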

References

SHOWING 1-10 OF 36 REFERENCES
Randomized competitive algorithms for generalized caching
TLDR
The techniques provide a unified framework for caching algorithms and are substantially simpler than those previously used, based on an extension of the primal-dual framework for online algorithms which was developed by Buchbinder and Naor.
Elastic Caching
TLDR
The algorithms are based on a configuration LP formulation of the problem, for which the main technical contribution is to maintain online a feasible fractional solution that can be converted to an integer solution using existing rounding techniques.
Page replacement for general caching problems
TLDR
This paper seeks to develop good offline page replacement policies for the general caching problem, with the hope that any insight gained here may lead to good online algorithms.
Writeback-Aware Caching
TLDR
This work presents a deterministic replacement policy called Writeback-Aware Landlord and shows that it obtains the optimal competitive ratio, and performs an experimental study on real-world traces showing that Writeback-Aware Landlord outperforms state-of-the-art cache replacement policies when writebacks are costly, thereby illustrating the practical gains of explicitly accounting for writebacks.
On-line caching as cache size varies
TLDR
It is shown that when h < k the competitiveness of the marking algorithm, a randomized paging strategy, is no more than 2(ln(k/(k−h)) − ln ln(k/(k−h)) + 1/2) when k/(k−h) ≥ e, and at most 2 otherwise, and it is shown this is roughly within a factor of two of optimal.
Online Generalized Caching with Varying Weights and Costs
We present a new extension of the generalized caching/paging problem that allows the adversary to arbitrarily change the cost or weight of the currently requested page. We present modifications of
Efficient Online Weighted Multi-Level Paging
TLDR
The writeback-aware caching problem is studied, a variant of classic paging where paging requests that modify data and requests that leave data intact are treated differently, and an O(log^2 k)-competitive randomized algorithm is given, answering an open question about the existence of a randomized poly-logarithmic competitive algorithm.
Page replacement with multi-size pages and applications to Web caching
  • S. Irani
  • Computer Science, Mathematics
    STOC '97
  • 1997
TLDR
The paging problem where the pages have varying sizes has applications to page replacement policies for caches containing World Wide Web documents; randomized online algorithms that are O(log k)-competitive are shown for both cost models.
On-Line File Caching
TLDR
A simple deterministic on-line algorithm that generalizes many well-known paging and weighted-caching strategies, including least-recently-used, first-in-first-out, flush-when-full, and the balance algorithm is given.
Learning Relaxed Belady for Content Distribution Network Caching
TLDR
A new approach for caching in CDNs that uses machine learning to approximate the Belady MIN (oracle) algorithm, using the concept of Belady boundary, and proposes a metric called good decision ratio to help us make better design decisions.