TinyLFU: A Highly Efficient Cache Admission Policy

  • Gil Einziger, Roy Friedman
  • Published 12 February 2014
  • Computer Science
  • 2014 22nd Euromicro International Conference on Parallel, Distributed, and Network-Based Processing
This paper proposes to use a frequency-based cache admission policy in order to boost the effectiveness of caches subject to skewed access distributions. Rather than deciding which object to evict, TinyLFU decides, based on the recent access history, whether it is worth admitting an accessed object into the cache at the expense of the eviction candidate. This concept is realized through a novel approximate LFU structure, also called TinyLFU, which maintains an approximate representation… 
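The admission decision described above can be sketched as follows. This is an illustrative sketch only: a real TinyLFU keeps a compact *approximate* frequency sketch with periodic aging, whereas a plain `Counter` is used here just to show the admit-versus-evict comparison; all names are made up.

```python
from collections import Counter

class TinyLFUAdmission:
    """Frequency-based admission filter (illustrative sketch)."""

    def __init__(self):
        self.freq = Counter()  # recent access history

    def record_access(self, key):
        self.freq[key] += 1

    def admit(self, candidate, victim):
        # Admit the new object only if its recent frequency exceeds
        # that of the eviction candidate already in the cache.
        return self.freq[candidate] > self.freq[victim]

filt = TinyLFUAdmission()
for k in ["a", "a", "a", "b"]:
    filt.record_access(k)
assert filt.admit("a", "b")      # "a" is hotter than "b": admit it
assert not filt.admit("b", "a")  # "b" is colder: keep "a" cached
```

The point of the filter is that a one-hit-wonder never displaces a frequently used resident item, which is where the hit-ratio gains on skewed workloads come from.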

TinyCache - An Effective Cache Admission Filter

TinyCache is introduced, a compact, table-based management policy for datastore caches that achieves hit ratios similar to the leading alternatives while operating in worst-case constant time and accessing only a fixed-size memory word per update.

Achieving high cache hit ratios for CDN memory caches with size-aware admission

This work proposes two policies that admit an object either with a probability that depends on its size or via a simple size threshold, and finds that the superior performance of these policies stems from a new statistical cache-tuning method that automatically adapts the parameters of the admission policies to the request traffic.
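The two admission rules can be sketched as follows. The exponential form `exp(-size / c)` and the parameter values are illustrative choices, not taken from the paper; `c` stands in for the parameter the statistical tuning method would adapt to the traffic.

```python
import math
import random

def admit_threshold(size, max_size=1_000_000):
    """Simple size threshold: admit only objects at most max_size bytes."""
    return size <= max_size

def admit_probabilistic(size, c=100_000, rng=random.random):
    """Admit with probability exp(-size / c): small objects are almost
    always admitted, very large ones rarely (illustrative form)."""
    return rng() < math.exp(-size / c)

assert admit_threshold(10_000)
assert not admit_threshold(2_000_000)
```

Both rules bias the cache toward small objects, trading away a few large-object hits for many more small-object hits per byte of cache space.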

LearnedCache: A Locality-Aware Collaborative Data Caching by Learning Model

  • Wenlong Ma, Yuqing Zhu, Sa Wang, Yungang Bao
  • Computer Science
    2019 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking (ISPA/BDCloud/SocialCom/SustainCom)
  • 2019
LearnedCache is presented, a highly efficient in-memory caching algorithm that significantly outperforms various replacement policies of Redis and Memcached across a variety of workloads and is useful for distributed web, file-system, database, and content-delivery services.

Lightweight Robust Size Aware Cache Management

This work extends the prevalent (size-oblivious) TinyLFU cache admission policy to handle variable-sized items and shows that the resulting algorithms yield competitive or better hit ratios and byte hit ratios than state-of-the-art size-aware algorithms such as AdaptSize, LHD, LRB, and GDSF.

CacheSack: Admission Optimization for Google Datacenter Flash Caches

Production experiments showed that CacheSack significantly outperforms the prior static admission policies, yielding a 6.5% improvement in total operational cost as well as improvements in disk reads and flash wear-out.

SHARC: improving adaptive replacement cache with shadow recency cache management

Experimental results indicate that SHARC outperforms the state-of-the-art policies of ARC, Low Inter-Reference Recency Set (LIRS), and Dynamic LIRS.

A Packet-Level Caching Algorithm for Mitigating Negative Effects Caused by Large Objects in ICN Networks

An analytical model of packet-level caching with cache admission is developed, which theoretically proves that cache admission mitigates these problems and improves the cache hit probability.

Efficient Estimation of Read Density when Caching for Big Data Processing

  • Sacheendra Talluri, A. Iosup
  • Computer Science
    IEEE INFOCOM 2019 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)
  • 2019
The Read Density family of policies is proposed: a principled approach that quantifies the utility of cached objects through a family of utility functions depending on an object's read frequency, and that promises runtime- and space-efficient computation of the metric the cache policy requires.

Limited Associativity Caching in the Data Plane

In-network caching promises to improve the performance of networked and edge applications, as it shortens the paths data need to travel by storing so-called hot items in the network switches.

A New Flexible Multi-flow LRU Cache Management Paradigm for Minimizing Misses

I-PLRU is proposed, a new insertion-based pooled LRU paradigm in which data flows can be inserted at different positions of a pooled cache; it outperforms PLRU and achieves the same miss probability as the optimal SLRU under a stationary request arrival process.

ARC: A Self-Tuning, Low Overhead Replacement Cache

The problem of cache management in a demand paging scenario with uniform page sizes is considered and a new cache management policy, namely, Adaptive Replacement Cache (ARC), is proposed that has several advantages.

Efficient randomized web-cache replacement schemes using samples from past eviction times

Interestingly, it is found that retaining a small number of samples from one eviction iteration to the next leads to an exponential improvement in performance compared to retaining no samples at all.
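The sample-retention idea can be sketched as follows. The sample size, retention count, and the use of last-access time as the eviction criterion are illustrative assumptions, not details from the paper.

```python
import random

def sampled_evict(cache, age, pool, sample_size=5, keep=2, rng=random):
    """Pick an eviction victim by drawing a few random keys, merging in
    samples retained from the previous eviction, evicting the oldest,
    and keeping the next-best candidates for the next round (sketch).

    cache: dict of cached items; age: key -> last-access time;
    pool: list of keys retained from earlier iterations (mutated).
    """
    candidates = [k for k in pool if k in cache]           # reuse old samples
    fresh = rng.sample(list(cache), min(sample_size, len(cache)))
    candidates = list(dict.fromkeys(candidates + fresh))   # dedupe
    candidates.sort(key=lambda k: age[k])                  # oldest first
    victim = candidates[0]
    pool[:] = candidates[1:1 + keep]                       # retain samples
    return victim

cache = {"a": 1, "b": 1, "c": 1}
age = {"a": 1, "b": 2, "c": 3}
pool = []
assert sampled_evict(cache, age, pool) == "a"  # oldest key is evicted
assert pool == ["b", "c"]                      # runners-up retained
```

Retaining the runners-up means each eviction starts from candidates already known to be old, which is the mechanism behind the improvement the paper reports.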

Exploitation of different types of locality for Web caches

It is argued that cache replacement algorithms exist that combine these characteristics and achieve high performance at low cost, and Window-LFU is described, a policy that combines LFU and LRU and achieves better performance than LFU at lower cost.
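One way to combine frequency and recency as Window-LFU does is to count frequencies only over the last W requests, so stale counts age out. This is a minimal sketch under that reading; the window size and eviction details are illustrative.

```python
from collections import Counter, deque

class WindowLFU:
    """Evict by frequency counted only over the last W requests,
    so frequency counts age out and recency is captured (sketch)."""

    def __init__(self, window=1000):
        self.window = deque(maxlen=window)  # last W requested keys
        self.freq = Counter()

    def record(self, key):
        if len(self.window) == self.window.maxlen:
            old = self.window[0]            # key about to fall out
            self.freq[old] -= 1
            if self.freq[old] == 0:
                del self.freq[old]
        self.window.append(key)
        self.freq[key] += 1

    def eviction_victim(self, cached_keys):
        # Evict the cached key with the lowest in-window frequency.
        return min(cached_keys, key=lambda k: self.freq[k])

w = WindowLFU(window=4)
for k in ["x", "x", "y", "z"]:
    w.record(k)
assert w.eviction_victim({"x", "y"}) == "y"  # "x" seen twice, "y" once
```

Bounding the counts to a window is what keeps the cost below full LFU: no global frequency table over the entire request history needs to be maintained.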

Replacement Policies for a Distributed Object Caching Service

This paper investigates replacement policies for an object caching service, and examines the behavior of the entire system, rather than looking at a single cache at a time.

Effective caching of Web objects using Zipf's law

  • D. Serpanos, George Karakostas, M. Wolf
  • Computer Science
    2000 IEEE International Conference on Multimedia and Expo. ICME2000. Proceedings. Latest Advances in the Fast Changing World of Multimedia (Cat. No.00TH8532)
  • 2000
This paper provides an analysis using Chernoff's bound, calculates an upper bound on the number of initial requests that must be processed to obtain popularity measurements with high confidence, and reports a measured Zipf distribution that converges to the correct one.

LIRS: an efficient low inter-reference recency set replacement policy to improve buffer cache performance

LIRS effectively addresses the limits of LRU by using recency to evaluate Inter-Reference Recency (IRR) when making replacement decisions; it significantly outperforms LRU and outperforms other existing replacement algorithms in most cases.

Performance evaluation of Web proxy cache replacement policies

CAR: Clock with Adaptive Replacement

A simple and elegant new algorithm, namely, CLOCK with Adaptive Replacement (CAR), that has several advantages over CLOCK: it is scan-resistant, self-tuning and it adaptively and dynamically captures the "recency" and "frequency" features of a workload.

Adaptive Web Proxy Caching Algorithms

This paper analyzes the distribution of current web content and re-evaluates various proxy cache replacement algorithms including LFU, LRU and several GreedyDual variants and proposes two new web caching algorithms: a local policy that maintains a list of popular URLs and a global policy that partitions the cache into distinct regions.

Data caching as a cloud service

The challenges of devising a useful shared data cache service as a part of the cloud platform are discussed, thus allowing result-based caching to be seamlessly integrated with existing database-driven code.