Runahead
Runahead is a technique that allows a microprocessor to speculatively pre-process instructions during cache miss cycles instead of stalling. The pre-processed instructions are used to generate prefetches for the instruction and data streams, so that when normal execution resumes after the miss, many accesses that would otherwise miss have already been serviced.
Source: Wikipedia
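The idea behind runahead can be illustrated with a toy simulation. This is a minimal sketch, not any real microarchitecture: it assumes a fixed miss latency, a cache modeled as a set of addresses, and one prefetch per stall cycle. The names (`run`, `MISS_LATENCY`, `HIT_LATENCY`) are illustrative.

```python
# Toy model of runahead execution (illustrative assumptions throughout).
MISS_LATENCY = 100   # cycles to service a cache miss
HIT_LATENCY = 1      # cycles for a cache hit

def run(trace, runahead=False):
    """Execute a trace of load addresses; return total cycles.

    Without runahead, every miss stalls the pipeline for MISS_LATENCY.
    With runahead, the core keeps pre-executing later loads during the
    stall, turning their future misses into prefetches (later hits).
    """
    cache = set()
    cycles = 0
    for i, addr in enumerate(trace):
        if addr in cache:
            cycles += HIT_LATENCY
        else:
            cycles += MISS_LATENCY
            cache.add(addr)
            if runahead:
                # The checkpoint/restore is implicit here: pre-executed
                # loads are discarded; only their prefetch effect (cache
                # fills) survives. Assume one prefetch per stall cycle.
                for future in trace[i + 1 : i + 1 + MISS_LATENCY]:
                    cache.add(future)
    return cycles

trace = list(range(32))            # streaming loads, all initially cold
print(run(trace, runahead=False))  # prints 3200: every load misses
print(run(trace, runahead=True))   # prints 131: first miss prefetches the rest
```

With runahead only the first load pays the full miss latency; the remaining 31 loads hit on prefetched lines, which is the latency-tolerance effect the papers below exploit and refine.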
Related topics (14 relations): Application checkpointing, Branch predictor, CPU cache, Computer data storage, …
Papers overview
Semantic Scholar uses AI to extract papers important to this topic.
Semantic prefetching using forecast slices — L. Peled, U. Weiser, Yoav Etsion. arXiv.org, 2020. Corpus ID: 218613915.
Modern prefetchers identify memory access patterns in order to predict future accesses. However, many applications exhibit…
The runahead network-on-chip — Zimo Li, Joshua San Miguel, Natalie D. Enright Jerger. International Symposium on High-Performance…, 2016. Corpus ID: 3237453.
With increasing core counts and higher memory demands from applications, it is imperative that networks-on-chip (NoCs) provide…
NCOR: An FPGA-Friendly Nonblocking Data Cache for Soft Processors with Runahead Execution — Kaveh Aasaraai, Andreas Moshovos. International Journal of Reconfigurable Computing, 2012. Corpus ID: 7946090.
Soft processors often use data caches to reduce the gap between processor and main memory speeds. To achieve high efficiency…
Efficient Runahead Threads — Tanausú Ramírez, Alex Pajuelo, O. J. Santana, O. Mutlu, M. Valero. International Conference on Parallel…, 2010. Corpus ID: 14345539.
Runahead Threads (RaT) is a promising solution that enables a thread to speculatively run ahead and prefetch data instead of…
Runahead execution vs. conventional data prefetching in the IBM POWER6 microprocessor — Harold W. Cain, P. Nagpurkar. IEEE International Symposium on Performance…, 2010. Corpus ID: 10160759.
After many years of prefetching research, most commercially available systems support only two types of prefetching: software…
An Efficient Non-blocking Data Cache for Soft Processors — Kaveh Aasaraai, Andreas Moshovos. International Conference on Reconfigurable…, 2010. Corpus ID: 7596833.
Soft processors often use data caches to reduce the gap between processor and main memory speeds. To achieve high efficiency…
Combining thread level speculation helper threads and runahead execution — Polychronis Xekalakis, Nikolas Ioannou, Marcelo H. Cintra. International Conference on Supercomputing, 2009. Corpus ID: 222675.
With the current trend toward multicore architectures, improved execution performance can no longer be obtained via traditional…
Efficient runahead execution processors — Y. Patt, O. Mutlu. 2006. Corpus ID: 6456341.
High-performance processors tolerate latency using out-of-order execution. Unfortunately, today's processors are facing memory…
A Case for MLP-Aware Cache Replacement — Moinuddin K. Qureshi, Daniel N. Lynch, O. Mutlu. 2006. Corpus ID: 13426376.
Performance loss due to long-latency memory accesses can be reduced by servicing multiple memory accesses concurrently. The…
Inexpensive throughput enhancement in small-scale embedded microprocessors with block multithreading: extensions, characterization, and tradeoffs — J. Haskins, K. Hirst, Kevin Skadron. Conference Proceedings of the IEEE International…, 2001. Corpus ID: 18063142.
This paper examines differential multithreading (DMT) as an attractive organization for coping with pipeline stalls in small…