A partitioned on-chip virtual cache for fast processors

Abstract

In this paper, we propose a new virtual cache architecture that reduces memory latency by combining the merits of a direct-mapped cache and a set-associative cache. The entire cache memory is divided into n banks, and the operating system assigns one of the banks to a process when it is created. Each process then runs on its assigned bank, which behaves like a direct-mapped cache. If a cache miss occurs in the active home bank, the data is fetched either from the other banks or from main memory, as in a set-associative cache. A victim for cache replacement is selected from the lines belonging to the process that is most remote from being scheduled. Trace-driven simulations confirm that the new scheme removes almost as many conflict misses as a set-associative cache, while its access time remains close to that of a direct-mapped cache.
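The abstract describes the lookup and replacement policy only at a high level; the Python sketch below illustrates one possible reading of it. All parameters (4 banks, 64 sets per bank, 32-byte lines) and all names (PartitionedCache, lookup, sched_distance) are illustrative assumptions, not taken from the paper.

# Minimal sketch of the partitioned-cache lookup described in the abstract.
# Parameters and names are assumptions for illustration only.
LINE_BITS = 5          # 32-byte lines
SET_BITS = 6           # 64 sets per bank
N_SETS = 1 << SET_BITS
N_BANKS = 4

class PartitionedCache:
    def __init__(self):
        # banks[b][s] holds the tag cached in set s of bank b (None = empty)
        self.banks = [[None] * N_SETS for _ in range(N_BANKS)]

    def lookup(self, vaddr, home_bank, sched_distance):
        """sched_distance[b]: how far the process owning bank b is from
        being scheduled next (larger = more remote); used to pick a victim."""
        set_idx = (vaddr >> LINE_BITS) % N_SETS
        tag = vaddr >> (LINE_BITS + SET_BITS)

        # 1. Direct-mapped probe of the process's home bank (fast path).
        if self.banks[home_bank][set_idx] == tag:
            return "home-bank hit"

        # 2. On a home-bank miss, probe the same set in the other banks,
        #    as in a set-associative search.
        for b in range(N_BANKS):
            if b != home_bank and self.banks[b][set_idx] == tag:
                return f"hit in bank {b}"

        # 3. Miss everywhere: fetch from memory and replace the line owned
        #    by the process most remote from being scheduled.
        victim = max(range(N_BANKS), key=lambda b: sched_distance[b])
        self.banks[victim][set_idx] = tag
        return "miss (fetched from main memory)"

# Usage sketch: a process running on home bank 0, with banks 1-3 owned by
# processes that are 3, 1 and 2 scheduling steps away, respectively.
cache = PartitionedCache()
dist = [0, 3, 1, 2]
print(cache.lookup(0x1F40, home_bank=0, sched_distance=dist))  # miss, fill
print(cache.lookup(0x1F40, home_bank=0, sched_distance=dist))  # hit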

DOI: 10.1016/S1383-7621(96)00123-3

Cite this paper

@article{Kim1997APO,
  title   = {A partitioned on-chip virtual cache for fast processors},
  author  = {Dongwook Kim and Joonwon Lee and Seungkyu Park},
  journal = {Journal of Systems Architecture},
  year    = {1997},
  volume  = {43},
  pages   = {519-531}
}