Corpus ID: 8053056

Performance Analysis of On-Chip Cache and Main Memory Compression Systems for High-End Parallel Computers

@inproceedings{YIM2004PerformanceAO,
  title={Performance Analysis of On-Chip Cache and Main Memory Compression Systems for High-End Parallel Computers},
  author={Keun Soo YIM and Jihong Kim and K. Koh},
  booktitle={PDPTA},
  year={2004}
}
Cache and memory compression systems have been developed to improve the memory system performance of high-performance parallel computers. Cache compression systems can reduce the on-chip cache miss rate and off-chip memory traffic by storing and transferring cache lines in compressed form, while memory compression systems can expand main memory capacity by storing memory pages in compressed form. However, these systems have not been quantitatively evaluated under identical conditions, making it…
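The abstract's contrast between line-granularity cache compression and page-granularity memory compression can be illustrated with a small, self-contained sketch. The compressor (zlib), block sizes, and synthetic data below are assumptions chosen for illustration only; they are not the evaluation setup used in the paper.

import zlib

LINE_SIZE = 64        # assumed cache-line size in bytes
PAGE_SIZE = 4096      # assumed virtual-memory page size in bytes

# Synthetic page with a simple repeating byte pattern (an assumption,
# standing in for real workload data).
memory = bytes((i % 7 if i % 16 else 0) for i in range(PAGE_SIZE))

def ratio(block: bytes) -> float:
    """Return compressed size / original size for one block."""
    return len(zlib.compress(block)) / len(block)

line_ratios = [ratio(memory[o:o + LINE_SIZE])
               for o in range(0, PAGE_SIZE, LINE_SIZE)]

print(f"mean per-line ratio: {sum(line_ratios) / len(line_ratios):.2f}")
print(f"whole-page ratio:    {ratio(memory):.2f}")

Compressing at cache-line granularity pays a fixed per-block overhead that page-granularity compression amortizes, which is one practical difference between the two classes of systems the paper compares.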
A Space-Efficient On-Chip Compressed Cache Organization for High Performance Computing
TLDR
This paper presents a fine-grained compressed cache line management scheme which addresses the fragmentation problem while avoiding an increase in metadata size, such as the tag field and the VM page table.
C-Pack: A High-Performance Microprocessor Cache Compression Algorithm
TLDR
This work presents a lossless compression algorithm that has been designed for fast on-line data compression, and cache compression in particular, and reduces the proposed algorithm to a register transfer level hardware design, permitting performance, power consumption, and area estimation.
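As a rough, software-only illustration of the per-word pattern matching that hardware cache compressors of this kind rely on, the sketch below tags each 32-bit word of a cache line with a toy pattern class. The tag set and encoding are invented for illustration and do not reproduce the published C-Pack format.

import struct

def classify_word(word: int) -> str:
    """Assign a 32-bit word to a toy pattern class (not C-Pack's real classes)."""
    if word == 0:
        return "zero"      # all-zero word: no payload needed
    if word < 0x100:
        return "small"     # fits in one byte, zero-extended
    return "raw"           # stored uncompressed

def compress_line(line: bytes) -> list:
    """Encode a cache line word by word as (tag, payload) pairs."""
    out = []
    for (word,) in struct.iter_unpack("<I", line):
        tag = classify_word(word)
        if tag == "zero":
            out.append((tag, b""))
        elif tag == "small":
            out.append((tag, bytes([word])))
        else:
            out.append((tag, struct.pack("<I", word)))
    return out

line = struct.pack("<16I", *([0] * 8 + [5, 7, 0xDEADBEEF] + [0] * 5))
payload = sum(len(p) for _, p in compress_line(line))
print(f"{len(line)}-byte line -> {payload} payload bytes plus tag bits")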
AN EFFICIENT ALGORITHM FOR A CACHE COMPRESSION AND DECOMPRESSION TO IMPROVE SYSTEM MEMORY PERFORMANCE
Speed is a challenging issue for any electronic component. Memory access time depends on the speed of the microprocessor, and access time is higher for off-chip memory than for on-chip…
A Space-Efficient Virtual Memory Organization for On-Chip Compressed Caches
On-chip compressed cache systems have recently been developed that reduce the cache miss count and off-chip memory traffic by storing and transferring cache lines in compressed form. In order to…
Transparent memory hierarchy compression and migration
TLDR
This dissertation presents several new operating system and architecture techniques that use elements of the virtual and physical memory system to improve the functionality, power consumption, and performance of embedded systems such as multimedia devices and wireless sensor network nodes.
A NOVEL APPROACH FOR A HIGH PERFORMANCE LOSSLESS CACHE COMPRESSION ALGORITHM
TLDR
This work proposes a lossless compression algorithm designed for high-performance, fast on-line data compression, and for cache compression in particular; it combines pairs of compressed lines into one cache line and allows parallel compression of multiple words while using a single dictionary.
Improving disk bandwidth-bound applications through main memory compression
TLDR
This paper implements and evaluates, in the Linux OS, a fully SMP-capable main memory compression subsystem that takes advantage of current multicore and multiprocessor systems to increase the performance of bandwidth-sensitive applications such as the SPECweb2005 benchmark, with promising results.
Accelerating software memory compression on the Cell/B.E.
TLDR
A software memory compression system for the Linux kernel is proposed and implemented that offloads the CPU-intensive compression task to the specialized processor units present in the Cell/B.E.
Design and Implementation of a High-Performance Microprocessor Cache Compression Algorithm
TLDR
This work presents a lossless compression algorithm that has been designed for on-line memory hierarchy compression, and cache compression in particular, and reduces the algorithm to a register transfer level hardware implementation, permitting performance, power consumption, and area estimation.
A compression layer for NAND type flash memory systems
TLDR
This paper improves the compression layer for NAND flash, which can be coordinated with the X-RL algorithm to avoid overhead, reduce the degree of internal fragmentation in the compressed data pages, and improve the compression rate.

References

Showing 1-10 of 31 references
Hardware Compressed Main Memory: Operating System Support and Performance Evaluation
TLDR
This paper describes operating system techniques that can deal with dynamically changing memory sizes, and shows that the hardware compression of memory has a negligible performance penalty compared to a standard memory for many applications and improves performance significantly.
IBM Memory Expansion Technology (MXT)
TLDR
This architecture is the first of its kind to employ real-time main-memory content compression at a performance competitive with the best the market has to offer.
Frequent value compression in data caches
TLDR
The design and evaluation of the compression cache (CC) is presented, a first-level cache designed so that each cache line can hold either one uncompressed line or two cache lines that have been compressed to at least half their lengths.
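A minimal sketch of the frame organization described above, in which one physical line frame holds either a single uncompressed line or two lines each compressed to at most half the frame, might look as follows. The use of zlib and the insertion policy are assumptions for illustration; the paper's scheme is based on frequent-value encoding, not a general-purpose compressor.

import zlib

LINE_SIZE = 64  # assumed cache-line size in bytes

class CacheFrame:
    """One line frame: holds 1 uncompressed line or up to 2 compressed lines."""

    def __init__(self):
        self.raw = None        # (tag, line) when holding one uncompressed line
        self.compressed = []   # up to two (tag, compressed_bytes) entries

    def try_insert(self, tag: int, line: bytes) -> bool:
        comp = zlib.compress(line)
        if self.raw is None and len(comp) <= LINE_SIZE // 2 and len(self.compressed) < 2:
            self.compressed.append((tag, comp))   # fits in half the frame
            return True
        if self.raw is None and not self.compressed:
            self.raw = (tag, line)                # fall back to one raw line
            return True
        return False                              # frame full; eviction needed

frame = CacheFrame()
print(frame.try_insert(0x10, bytes(LINE_SIZE)))   # all-zero line compresses -> True
print(frame.try_insert(0x20, bytes(LINE_SIZE)))   # second compressed line   -> True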
Performance evaluation of computer architectures with main memory data compression
TLDR
This paper proposes an organisation where data and code are stored in compressed form while there is competition for memory resources, and demonstrates that system performance may be improved by up to a factor of two when using software-based memory compression instead of paging.
Hardware-assisted data compression for energy minimization in systems with embedded processors
TLDR
A novel and efficient architecture is proposed for on-the-fly data compression and decompression, operating on the cache-to-memory path of a core-based system running standard benchmark programs.
The Compression Cache: Using On-line Compression to Extend Physical Memory
TLDR
Measurements using Sprite on a DECstation 5000/200 workstation with a local disk indicate that some memory-intensive applications running with a compression cache can run two to three times faster than on an unmodified system.
Design and performance of a main memory hardware data compressor
  • M. Kjelsø, M. Gooch, S. Jones
  • Proceedings of EUROMICRO 96, 22nd Euromicro Conference. Beyond 2000: Hardware and Software Design Strategies
  • 1996
TLDR
It is demonstrated that paging due to insufficient memory resources can reduce system performance several fold, and it is argued that hardware memory compression can eliminate this paging, hence providing a substantial performance improvement.
The Case for Compressed Caching in Virtual Memory Systems
TLDR
This study shows that technology trends favor compressed virtual memory: it is attractive now, offering a reduction in paging costs of several tens of percent, and it will be increasingly attractive as CPU speeds increase faster than disk speeds.
Compressed caching and modern virtual memory simulation
TLDR
This dissertation begins by outlining an approach to reducing reference traces for use in simulations of virtual memory, and uses reference traces to explore compressed caching: the insertion of a new, compressed level of RAM into the memory hierarchy.
A very fast algorithm for RAM compression
TLDR
It is shown that in many cases memory pages contain highly compressible data, with a very large amount of zero-valued elements, which suggests the replacement of slow, adaptive compression algorithms with very fast ones based on static Huffman codes.
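The observation that many pages are dominated by zero-valued elements can be illustrated with a trivially fast, non-adaptive encoder. The zero run-length scheme below is an illustrative stand-in, not the static-Huffman-based algorithm the paper proposes.

def zrl_compress(page: bytes) -> bytes:
    """Encode runs of zero bytes as (0x00, run_length); copy non-zero bytes."""
    out = bytearray()
    i = 0
    while i < len(page):
        if page[i] == 0:
            run = 1
            while i + run < len(page) and page[i + run] == 0 and run < 255:
                run += 1
            out += bytes([0, run])
            i += run
        else:
            out.append(page[i])
            i += 1
    return bytes(out)

all_zero = bytes(4096)                                   # an all-zero 4 KiB page
mixed = bytes(3000) + bytes(range(1, 97)) + bytes(1000)  # mostly-zero page
print(len(zrl_compress(all_zero)), len(zrl_compress(mixed)))

Because the encoding is fixed in advance, compression is a single linear pass with no model updates, which is the speed advantage the summary attributes to static schemes.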