TCAM-based search engines are widely used in regular expression matching across multiple packets. However, the use of a priority encoder increases the energy consumption of pattern updates and search operations. This work proposes a promising memory technology, called Priority-Decision in Memory (PDM), which eliminates the need for priority encoders.
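As a rough illustration of the role the priority encoder plays, here is a minimal software model of a TCAM lookup (a conceptual sketch with made-up entries, not the PDM design from the paper): in hardware all entries are compared in parallel, and the priority encoder selects the lowest-index match.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Conceptual software model of a TCAM lookup (not the paper's PDM design).
// Each entry stores a value and a care-mask; bits where mask = 0 are "don't care".
struct TcamEntry {
    uint32_t value;
    uint32_t mask;   // 1 = bit must match, 0 = don't care
};

// A TCAM compares the key against every entry in parallel; here the match
// lines are modeled with a loop, and the priority encoder is modeled by
// returning the lowest-index (highest-priority) matching entry, or -1 if none.
int tcam_lookup(const std::vector<TcamEntry>& table, uint32_t key) {
    for (std::size_t i = 0; i < table.size(); ++i) {
        if ((key & table[i].mask) == (table[i].value & table[i].mask)) {
            return static_cast<int>(i);   // priority encoder: first match wins
        }
    }
    return -1;
}

int main() {
    // Entry 0 matches 0x10-0x1F (upper 28 bits fixed); entry 1 is a wildcard.
    std::vector<TcamEntry> table = {
        {0x00000010u, 0xFFFFFFF0u},
        {0x00000000u, 0x00000000u},
    };
    std::cout << tcam_lookup(table, 0x15) << "\n";  // 0: the specific rule wins
    std::cout << tcam_lookup(table, 0x25) << "\n";  // 1: falls through to the wildcard
}
```

Because priority is tied to physical position in this organization, inserting a higher-priority pattern generally requires shifting existing entries, which contributes to the update and search energy costs the abstract refers to.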
Faced with increasingly large multicore chip designs, architects need fast and accurate simulations to explore design spaces within a limited simulation time budget. In multithreaded applications, threads cannot run simultaneously. Sampling is commonly used to reduce simulation time, but conventional sampling barely detects the instantaneous …
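For context, the sketch below models the conventional interval-sampling baseline that the abstract critiques (all parameters and the placeholder performance model are made up): only a short window per interval is simulated in detail, so short-lived behavior between samples is easy to miss.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Minimal sketch of conventional interval sampling for simulation (assumed
// parameters; this is the baseline, not the paper's technique): simulate a
// short window at the start of every interval in detail and extrapolate the
// metric from those windows alone.
double simulate_detailed_window(uint64_t start_instr) {
    // Placeholder detailed model: a synthetic CPI that drifts with program
    // position, standing in for the phase behavior sampling tries to capture.
    return 1.0 + 0.5 * static_cast<double>((start_instr / 10'000'000) % 3);
}

int main() {
    const uint64_t total_instrs = 100'000'000;   // length of the workload
    const uint64_t interval     = 10'000'000;    // mostly fast-forwarded region

    std::vector<double> samples;
    for (uint64_t pos = 0; pos < total_instrs; pos += interval) {
        samples.push_back(simulate_detailed_window(pos));
        // The rest of each interval would be fast-forwarded functionally,
        // which is where the time savings (and the blind spots) come from.
    }

    double sum = 0.0;
    for (double s : samples) sum += s;
    std::cout << "estimated CPI = " << sum / samples.size() << "\n";
}
```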
To alleviate the high energy dissipation of unnecessary snooping accesses, snoop filters have been designed to reduce snoop lookups. These filters suffer from decreasing filtering efficiency, and thus usually rely on partial or whole filter resets triggered by detecting block evictions. Unfortunately, the reset conditions occur infrequently or unevenly (called …
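A minimal sketch of one common snoop-filter organization, assuming a Bloom-filter-style inclusion filter rather than the paper's design: it can answer "definitely not cached" cheaply so the tag lookup can be skipped, but it cannot remove entries on block eviction, which is why filtering efficiency decays and periodic resets are needed.

```cpp
#include <bitset>
#include <cstdint>
#include <iostream>

// Minimal sketch of a Bloom-filter-style snoop filter (an assumed design, not
// the paper's): it over-approximates the set of block addresses held in the
// local cache so that snoops reported "absent" can skip the tag lookup.
class SnoopFilter {
    static constexpr std::size_t kBits = 1024;
    std::bitset<kBits> bits_;

    static std::size_t h1(uint64_t a) { return (a * 0x9E3779B97F4A7C15ull) % kBits; }
    static std::size_t h2(uint64_t a) { return ((a ^ (a >> 17)) * 0xC2B2AE3D27D4EB4Full) % kBits; }

public:
    // Called when the local cache allocates a block.
    void insert(uint64_t block_addr) { bits_.set(h1(block_addr)); bits_.set(h2(block_addr)); }

    // Called on an incoming snoop: false means definitely not cached, so the
    // energy-hungry tag lookup can be filtered; true may be a false positive.
    bool may_contain(uint64_t block_addr) const {
        return bits_.test(h1(block_addr)) && bits_.test(h2(block_addr));
    }

    // Individual entries cannot be removed on block eviction, so the filter
    // gradually fills with stale bits; resetting restores filtering efficiency.
    void reset() { bits_.reset(); }
};

int main() {
    SnoopFilter f;
    f.insert(0x1000);
    std::cout << f.may_contain(0x1000) << " " << f.may_contain(0x2000) << "\n";  // 1 0
}
```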
The traditional LRU replacement policy is susceptible to memory-intensive workloads with large amounts of non-reused data, such as thrashing and scan applications. For such workloads, the majority of cache blocks do not receive any hits while residing in the cache. Cache performance can be improved by reducing the interference from non-reused data.
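The toy LRU model below (assumed sizes and access pattern) illustrates the problem: a small working set that is reused every round is still evicted by a stream of never-reused scan blocks, so the hit rate collapses.

```cpp
#include <cstdint>
#include <iostream>
#include <list>
#include <unordered_map>

// Minimal sketch of a single LRU cache set (assumed parameters), showing how a
// streaming scan of never-reused blocks evicts the small reused working set.
class LruSet {
    std::size_t ways_;
    std::list<uint64_t> order_;                       // front = MRU, back = LRU
    std::unordered_map<uint64_t, std::list<uint64_t>::iterator> pos_;

public:
    explicit LruSet(std::size_t ways) : ways_(ways) {}

    bool access(uint64_t block) {                     // returns true on a hit
        auto it = pos_.find(block);
        if (it != pos_.end()) {
            order_.splice(order_.begin(), order_, it->second);  // promote to MRU
            return true;
        }
        if (order_.size() == ways_) {                 // evict the LRU victim
            pos_.erase(order_.back());
            order_.pop_back();
        }
        order_.push_front(block);
        pos_[block] = order_.begin();
        return false;
    }
};

int main() {
    LruSet set(4);
    int hits = 0, refs = 0;
    for (int round = 0; round < 10; ++round) {
        for (uint64_t b = 0; b < 2; ++b)     { hits += set.access(b); ++refs; }  // reused working set
        for (uint64_t b = 100; b < 110; ++b) { hits += set.access(b); ++refs; }  // scan, never reused
    }
    std::cout << "hit rate = " << static_cast<double>(hits) / refs << "\n";
}
```

A scan-resistant policy that, for instance, inserts likely-non-reused blocks near the LRU position would keep the two reused blocks resident across rounds.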
Over the last decade, a variety of hardware prefetching techniques have been proposed to improve system performance. However, untimely prefetching may pollute caches, resulting in significant performance degradation. In this work, we introduce Adaptive Granularity and coordinated Prefetching (AGP), which consists of a coarse-grained and fine-grained …
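As a hedged sketch of the general idea of switching prefetch granularity (a made-up heuristic, not the AGP mechanism itself), the prefetcher below issues a single next block when its recent accuracy is low and a coarser multi-block region once its prefetches have been proving useful, limiting pollution from untimely or useless prefetches.

```cpp
#include <cstdint>
#include <iostream>
#include <unordered_set>
#include <vector>

// Minimal sketch (assumed heuristic, not the paper's AGP design) of a sequential
// prefetcher that switches between a fine granularity (one block ahead) and a
// coarse granularity (a small region) based on how many past prefetches were used.
class GranularityPrefetcher {
    std::unordered_set<uint64_t> outstanding_;  // blocks prefetched but not yet used
    int useful_ = 0, issued_ = 0;

public:
    // On a demand access, credit any prefetch it consumed, then issue new ones.
    std::vector<uint64_t> on_access(uint64_t block) {
        if (outstanding_.erase(block)) ++useful_;

        // Go coarse only when most recent prefetches proved useful; otherwise
        // stay fine-grained to limit cache pollution from wasted prefetches.
        bool accurate = issued_ >= 8 && useful_ * 2 > issued_;
        uint64_t degree = accurate ? 4 : 1;

        std::vector<uint64_t> prefetches;
        for (uint64_t d = 1; d <= degree; ++d) {
            uint64_t target = block + d;
            if (outstanding_.insert(target).second) {
                prefetches.push_back(target);
                ++issued_;
            }
        }
        return prefetches;
    }
};

int main() {
    GranularityPrefetcher pf;
    for (uint64_t b = 0; b < 32; ++b) {
        auto issued = pf.on_access(b);
        std::cout << "access " << b << " -> issued " << issued.size() << " prefetch(es)\n";
    }
}
```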