ChargeCache: Reducing DRAM latency by exploiting row access locality

Abstract

DRAM latency continues to be a critical bottleneck for system performance. In this work, we develop a low-cost mechanism, called ChargeCache, that enables faster access to recently-accessed rows in DRAM, with no modifications to DRAM chips. Our mechanism is based on the key observation that a recently-accessed row has more charge and thus the following access to the same row can be performed faster. To exploit this observation, we propose to track the addresses of recently-accessed rows in a table in the memory controller. If a later DRAM request hits in that table, the memory controller uses lower timing parameters, leading to reduced DRAM latency. Row addresses are removed from the table after a specified duration to ensure rows that have leaked too much charge are not accessed with lower latency. We evaluate ChargeCache on a wide variety of workloads and show that it provides significant performance and energy benefits for both single-core and multi-core systems.
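The table-based mechanism summarized in the abstract can be illustrated with a short software sketch. The C++ code below is only an illustrative model, not the paper's implementation: the name ChargeCacheSketch, the use of a hash map for the table, and the 1 ms caching duration are assumptions made for this example; a real memory controller would use a small fixed-size hardware structure.

// A toy model of the mechanism described in the abstract: the memory controller
// tracks recently-accessed row addresses in a small table, serves hits with
// lowered timing parameters, and removes entries after a fixed duration so that
// rows that have leaked too much charge are not accessed with lower latency.
// Names, the hash-map table, and the 1 ms duration are assumptions for this sketch.
#include <cstdint>
#include <iostream>
#include <unordered_map>

struct ChargeCacheSketch {
    std::uint64_t caching_duration_ns;                       // how long a row counts as recently accessed
    std::unordered_map<std::uint64_t, std::uint64_t> table;  // row address -> expiry time (ns)

    explicit ChargeCacheSketch(std::uint64_t duration_ns)
        : caching_duration_ns(duration_ns) {}

    // Record a row access: the row is "recently accessed" until its expiry time.
    void insert(std::uint64_t row_addr, std::uint64_t now_ns) {
        table[row_addr] = now_ns + caching_duration_ns;
    }

    // Check a new DRAM request: a hit means lowered timing parameters may be used.
    bool hit(std::uint64_t row_addr, std::uint64_t now_ns) {
        auto it = table.find(row_addr);
        if (it == table.end()) return false;
        if (now_ns >= it->second) {   // entry expired: fall back to default timings
            table.erase(it);
            return false;
        }
        return true;
    }
};

int main() {
    ChargeCacheSketch cc(1'000'000);                 // assumed 1 ms caching duration
    cc.insert(0x2A, 0);                              // row 0x2A accessed at t = 0
    std::cout << cc.hit(0x2A, 500'000) << '\n';      // 1: hit, reduced-latency access possible
    std::cout << cc.hit(0x2A, 2'000'000) << '\n';    // 0: expired, use default DRAM timings
}

A hit on a recently-inserted row address is the case in which the controller could apply the lowered timing parameters; an expired or missing entry falls back to the default DRAM timings, matching the expiry behavior described in the abstract.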

DOI: 10.1109/HPCA.2016.7446096

Citations per Year (2016–2017): Citation Velocity of 20, averaging 20 citations per year over the last 2 years.

Cite this paper

@article{Hassan2016ChargeCacheRD,
  title={ChargeCache: Reducing DRAM latency by exploiting row access locality},
  author={Hasan Hassan and Gennady Pekhimenko and Nandita Vijaykumar and Vivek Seshadri and Donghyuk Lee and Oguz Ergin and Onur Mutlu},
  journal={2016 IEEE International Symposium on High Performance Computer Architecture (HPCA)},
  year={2016},
  pages={581-593}
}