Corpus ID: 15334408

A Dynamically Partitionable Compressed Cache

@inproceedings{Chen2003ADP,
  title={A Dynamically Partitionable Compressed Cache},
  author={David Chen and Enoch Peserico and L. Rudolph},
  year={2003}
}
The effective size of an L2 cache can be increased by using a dictionary-based compression scheme. Naive application of this idea performs poorly since the data values in a cache greatly vary in their “compressibility.” The novelty of this paper is a scheme that dynamically partitions the cache into sections of different compressibilities. While compression is often researched in the context of a large stream, in this work it is applied repeatedly on smaller cache-line sized blocks so as to… 
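The core idea in the abstract, replacing frequent word values in a cache-line-sized block with short dictionary indices, can be sketched as follows. This is an illustrative toy only, not the paper's actual encoding; the dictionary contents, index width, and flag-bit scheme here are assumptions chosen for the example.

```python
# Toy sketch of dictionary-based compression applied per cache line
# (illustrative; not the paper's actual scheme). A line is treated as
# 4-byte words; words found in a small shared dictionary are replaced
# by short indices, other words are kept verbatim behind a flag bit.

DICT = [0x00000000, 0xFFFFFFFF, 0x00000001]  # hypothetical frequent values

def compress_line(words):
    """Encode one cache line's words; return (encoded, bits_used)."""
    encoded = []
    bits = 0
    for w in words:
        if w in DICT:
            encoded.append(("idx", DICT.index(w)))
            bits += 1 + 2          # 1 flag bit + 2-bit dictionary index
        else:
            encoded.append(("raw", w))
            bits += 1 + 32         # 1 flag bit + full 32-bit word
    return encoded, bits

def decompress_line(encoded):
    return [DICT[v] if kind == "idx" else v for kind, v in encoded]

line = [0, 0, 0xDEADBEEF, 0xFFFFFFFF]   # one 4-word "cache line"
enc, bits = compress_line(line)
assert decompress_line(enc) == line
print(bits, "bits vs", 32 * len(line), "uncompressed")
```

Note how the achieved size depends entirely on how many of the line's words hit the dictionary; lines with no frequent values compress poorly, which is exactly why the paper partitions the cache by compressibility rather than applying one scheme uniformly.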


Adaptive cache compression for high-performance processors
  • A. Alameldeen, D. Wood
  • Computer Science
    Proceedings. 31st Annual International Symposium on Computer Architecture, 2004.
  • 2004
TLDR
An adaptive policy that dynamically adapts to the costs and benefits of cache compression is developed and it is shown that compression can improve performance for memory-intensive commercial workloads by up to 17%.
Restrictive compression techniques to increase level 1 cache capacity
TLDR
The techniques in this paper increase the average L1 data cache capacity by about 50%, compared to the conventional cache, with no or minimal impact on the cache access time.
Cache Compression through Noise Prediction
  • Computer Science
  • 2006
TLDR
This paper predicts and fetches only the to-be-referenced data into the L1 data cache on a cache miss, and utilizes the freed cache space to store words from multiple cache blocks in a single physical cache block (a technique the authors call invalidation-based cache compression).
Dynamic Dictionary-Based Data Compression for Level-1 Caches
TLDR
This paper proposes the first dynamic dictionary-based compression mechanism for L1 data caches, which solves the problem of keeping the compressed contents of the cache and the dictionary entries consistent, using a timekeeping decay technique.
Dynamic Cache Compression Technique in Chip Multiprocessors
TLDR
A dynamic compression policy is developed that adapts to the costs and benefits of cache compression: a block is compressed only when compression is beneficial, and stored uncompressed otherwise.
Frequent Pattern Compression: A Significance-Based Compression Scheme for L2 Caches
TLDR
This work proposes and evaluates a simple significance-based compression scheme that has a low compression and decompression overhead and provides comparable compression ratios to more complex schemes that have higher cache hit latencies.
A Unified Compressed Cache Hierarchy Using Simple Frequent Pattern Compression and Partial Cache Line Prefetching
TLDR
A novel compressed cache hierarchy that uses a unified compression algorithm in both L1 data cache and L2 cache, called Simple Frequent Pattern Compression (S-FPC), which increases the average L1 cache capacity, reduces the data cache miss rate, and speeds up program execution by 13%.
Increasing cache capacity through word filtering
TLDR
This paper uses a prediction mechanism to fetch only the to-be-referenced data into the L1 data cache on a cache miss, and uses the cache space thus made available to store words from multiple cache blocks in a single physical cache block, increasing the number of useful words held in the cache.
Compaction-free compressed cache for high performance multi-core system
TLDR
This paper proposes a compaction-free compressed cache architecture which completely eliminates the time spent executing compaction, and demonstrates that, compared with a conventional cache, it improves system performance by 16% and reduces energy by 16%.
Last-level cache deduplication
TLDR
This work proposes cache deduplication that effectively increases last-level cache capacity and detects duplicate data blocks and stores only one copy of the data in a way that can be accessed through multiple physical addresses.

References

SHOWING 1-10 OF 28 REFERENCES
Frequent value compression in data caches
  • Jun Yang, Youtao Zhang, R. Gupta
  • Computer Science
    Proceedings 33rd Annual IEEE/ACM International Symposium on Microarchitecture. MICRO-33 2000
  • 2000
TLDR
The design and evaluation of the compression cache (CC) is presented: a first-level cache designed so that each cache line can hold either one uncompressed line or two cache lines that have each been compressed to at most half their original length.
The Case for Compressed Caching in Virtual Memory Systems
TLDR
This study shows that technology trends favor compressed virtual memory--it is attractive now, offering reduction of paging costs of several tens of percent, and it will be increasingly attractive as CPU speeds increase faster than disk speeds.
Cache-Memory Interfaces in Compressed Memory Systems
TLDR
A number of cache/memory hierarchy design issues in systems with compressed random access memories (C-RAMs) in which compression and decompression occur automatically to and from main memory are considered, using trace-driven analysis to evaluate alternatives.
The Compression Cache: Using On-line Compression to Extend Physical Memory
TLDR
Measurements using Sprite on a DECstation 5000/200 workstation with a local disk indicate that some memory-intensive applications running with a compression cache can run two to three times faster than on an unmodified system.
On-line data compression in a log-structured file system
TLDR
On-line data compression is integrated into the low levels of a log-structured file system (Rosenblum's Sprite LFS); the results indicate that hardware compression devices would not only remove the performance degradation but might well increase the effective disk transfer rate beyond that obtainable from a system without compression.
Creating a wider bus using caching techniques
  • D. Citron, L. Rudolph
  • Computer Science
    Proceedings of 1995 1st IEEE Symposium on High Performance Computer Architecture
  • 1995
TLDR
Simulations have shown that over 90% of all information transferred can be sent in a single cycle when a 32-bit processor is connected by a 16-bit-wide bus to a 32-bit memory module.
IBM Memory Expansion Technology (MXT)
TLDR
This architecture is the first of its kind to employ real-time main-memory content compression at a performance competitive with the best the market has to offer.
Frequent Value Locality and Value-Centric Data Cache Design
TLDR
A new data cache structure, the frequent value cache (FVC), is proposed, which employs a value-centric approach to caching data locations for exploiting the frequentvalue locality phenomenon.
Extending the reach of microprocessors: column and curious caching
TLDR
This thesis motivates column and curious caching by high-performance communication, evaluates these adaptive mechanisms for communication and other uses, and proposes various implementations designed for different constraints, demonstrating how these simple mechanisms can enable substantial performance improvements and support a wide range of additional functionality.
Design and performance of a main memory hardware data compressor
  • M. Kjelsø, M. Gooch, S. Jones
  • Computer Science
    Proceedings of EUROMICRO 96. 22nd Euromicro Conference. Beyond 2000: Hardware and Software Design Strategies
  • 1996
TLDR
It is demonstrated that paging due to insufficient memory resources can reduce system performance several fold, and it is argued that hardware memory compression can eliminate this paging hence providing a substantial performance improvement.