Corpus ID: 6259428

2Q: A Low Overhead High Performance Buffer Management Replacement Algorithm

@inproceedings{Johnson19942QAL,
  title={2Q: A Low Overhead High Performance Buffer Management Replacement Algorithm},
  author={Theodore Johnson and Dennis Shasha},
  booktitle={VLDB},
  year={1994}
}
In a path-breaking paper last year Pat and Betty O'Neil and Gerhard Weikum proposed a self-tuning improvement to the Least Recently Used (LRU) buffer management algorithm [15]. Their improvement is called LRU/k and advocates giving priority to buffer pages based on the kth most recent access. (The standard LRU algorithm is denoted LRU/1 according to this terminology.) If P1's kth most recent access is more recent than P2's, then P1 will be replaced after P2. Intuitively, LRU/k for k > 1 is… 
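The LRU/k policy described above can be illustrated with a minimal Python sketch. The class name, the logical access clock, and the tie-breaking rule for pages with fewer than k recorded accesses are illustrative assumptions, not details from the paper:

```python
from collections import defaultdict, deque

class LRUK:
    """Minimal LRU-K sketch: on a miss with a full buffer, evict the
    resident page whose k-th most recent access is least recent.
    Pages accessed fewer than k times are treated as having an
    infinitely old k-th access (an assumed tie-breaking rule)."""

    def __init__(self, capacity, k=2):
        self.capacity = capacity
        self.k = k
        self.clock = 0                     # logical access counter
        self.history = defaultdict(deque)  # page -> last k access times
        self.resident = set()

    def access(self, page):
        self.clock += 1
        h = self.history[page]
        h.append(self.clock)
        if len(h) > self.k:
            h.popleft()                    # keep only the last k accesses
        if page in self.resident:
            return None                    # buffer hit
        victim = None
        if len(self.resident) >= self.capacity:
            # k-th most recent access time; 0 (very old) if < k accesses
            def kth_access(p):
                hp = self.history[p]
                return hp[0] if len(hp) == self.k else 0
            victim = min(self.resident, key=kth_access)
            self.resident.remove(victim)
        self.resident.add(page)
        return victim                      # evicted page, or None
```

For example, with a 2-page buffer and the reference string a, b, a, a, c, the arrival of c evicts b rather than a, because b has no second-most-recent access while a does; plain LRU/1 would have evicted a.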


A low-overhead high-performance unified buffer management scheme that exploits sequential and looping references

TLDR
A Unified Buffer Management (UBM) scheme is presented that exploits reference regularities yet is simple to deploy; it improves the hit ratios and reduces the elapsed times of the LRU scheme.

DIG: Degree of inter-reference gap for a dynamic buffer cache management

LFU-K: An Effective Buffer Management Replacement Algorithm

TLDR
The LFU-2 algorithm provides significant improvement over conventional buffering algorithms for shared-nothing database systems, and a theoretical-probability model is described for the formal description of the LFU-K algorithm.

Efficient Pre-fetch and Pre-release Based Buffer Cache Management for Web Applications

TLDR
This paper proposes an improved LRU buffer cache management scheme using pre-fetching and pre-releasing based on spatial locality that is as simple as the LRU scheme and retains its characteristics.

CAR: Clock with Adaptive Replacement

TLDR
The algorithm CAR is inspired by the Adaptive Replacement Cache (ARC) algorithm and inherits virtually all of ARC's advantages, including its high performance, but does not serialize cache hits behind a single global lock.

Cost-Based Buffer Management Algorithm for Flash Database Systems

TLDR
An adaptive replacement policy (CBLRU) is proposed that assigns each page a weighted value combining its I/O cost and the influence of its stay in the buffer, and is shown to be very efficient for buffer replacement.

Application Buffer-Cache Management for Performance: Running the World's Largest MRTG

TLDR
A method and tools are presented to expose the readahead and buffer-cache behaviors that are otherwise hidden from the user, along with two approaches to overcome the resulting performance bottleneck.

LRFU: A Spectrum of Policies that Subsumes the Least Recently Used and Least Frequently Used Policies

TLDR
Experimental results from trace-driven simulations show that the performance of the LRFU is at least competitive with that of previously known policies for the workloads the authors considered.

BP-Wrapper: A System Framework Making Any Replacement Algorithms (Almost) Lock Contention Free

TLDR
A system framework, called BP-Wrapper, that (almost) eliminates lock contention for any replacement algorithm without requiring changes to the algorithm; it uses batching and prefetching techniques to reduce lock contention while retaining a high hit ratio.

BROOM: buffer replacement using online optimization by mining

TLDR
Simulation results are presented to show that BROOM sometimes has the best hit rates, but seldom the worst, over a wide range of system configurations and reference patterns.
...

References

Showing 1–10 of 22 references

Analysis of the generalized clock buffer replacement scheme for database transaction processing

TLDR
An approximate analysis for the GCLOCK policy under the Independent Reference Model (IRM) that applies to many database transaction processing workloads and outlines how the model can be extended to capture the effect of page invalidation in a multinode system.

The LRU-K page replacement algorithm for database disk buffering

TLDR
The LRU-K algorithm surpasses conventional buffering algorithms in discriminating between frequently and infrequently referenced pages, and adapts in real time to changing patterns of access.

Data cache management using frequency-based replacement

TLDR
A replacement algorithm based on the concept of maintaining reference counts in which locality has been “factored out” is described, which can offer up to 34% performance improvement over LRU replacement.

An approximate analysis of the LRU and FIFO buffer replacement schemes

TLDR
This paper develops approximate analytical models for predicting the buffer hit probability under the Least Recently Used (LRU) and First In First Out (FIFO) buffer replacement policies under the independent reference model and shows that if multiple independent reference streams on mutually disjoint sets of data compete for the same buffer, it is better to partition the buffer using an optimal allocation policy.

Flexible buffer allocation based on marginal gains

TLDR
A unified approach for buffer allocation in which both of these considerations are taken into account, based on the notion of marginal gains, which specify the expected reduction in page faults when allocating extra buffers to a query.

Optimal buffer allocation in a multi-query environment

TLDR
A global optimization strategy using simulated annealing is developed which minimizes the average response time over all queries under the constraint that the total memory consumption rate has to be less than the buffer size.

Amortized efficiency of list update and paging rules

TLDR
This article shows that move-to-front is within a constant factor of optimum among a wide class of list maintenance rules, and analyzes the amortized complexity of LRU, showing that its efficiency differs from that of the off-line paging rule by a factor that depends on the size of fast memory.

A Study of Buffer Management Policies for Data Management Systems.

TLDR
For the application and job mix in question it turns out that anticipatory fetching does not pay, and that DS in general behaves somewhat better than LRU.

Predictive Load Control for Flexible Buffer Allocation

TLDR
The goal is to design an adaptable buffer allocation algorithm that will automatically optimize itself for the specific query workload; results show that the proposed load control algorithm meets this goal.

Sequentiality and prefetching in database systems

TLDR
It is found that anticipatory fetching of data can lead to significant improvements in system operation and is shown how to determine optimal block sizes.