Predicting Cache Contention for Multithread Applications at Compile Time


The shared cache in multicore processors is an important hardware resource that must be used effectively to achieve high performance for parallel applications. It is critical to coordinate accesses by multiple threads to data residing in the shared cache, both to reduce cache contention and to improve the cache hit rates of each thread's shared and private data. Threads scheduled poorly with regard to their memory access patterns can interfere severely with one another in the cache, greatly increasing the number of cache misses and degrading overall application performance. To evaluate the cache contention behavior of multithreaded applications, we propose a new method that predicts, at compile time, the number of cache misses that would occur due to cache sharing and/or contention. The method uses compiler analysis techniques to generate a memory access trace for each thread under the assumption of no cache sharing. A small number of combinations of the threads' memory access traces are then selected at random from among all possible combinations and used as input to predict the threads' cache misses. We show that our method predicts the number of cache misses quickly without losing accuracy. The approach is generic and can predict the impact of cache contention and/or sharing for parallel applications on both homogeneous and heterogeneous systems. We evaluated the method with a simulation case study and two scientific benchmarks; the results show that it accurately predicts the impact of cache contention and sharing at compile time.
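The prediction scheme the abstract describes can be illustrated with a toy sketch: per-thread address traces (here hand-written rather than compiler-generated) are merged in a few randomly sampled interleavings, each interleaving is replayed against a small shared-cache model, and the miss counts are averaged. All names (`simulate_lru`, `random_interleaving`, `predict_misses`) and the fully associative LRU cache model are illustrative assumptions, not the paper's actual implementation.

```python
import random
from collections import OrderedDict

# Illustrative sketch only; cache geometry and all function names are
# assumptions, not taken from the paper.
CACHE_LINES = 4      # toy shared-cache capacity, in cache lines
LINE_SIZE = 64       # bytes per cache line

def simulate_lru(trace, cache_lines=CACHE_LINES):
    """Count misses of a byte-address trace on a fully associative LRU cache."""
    cache = OrderedDict()
    misses = 0
    for addr in trace:
        line = addr // LINE_SIZE
        if line in cache:
            cache.move_to_end(line)        # hit: refresh LRU position
        else:
            misses += 1
            cache[line] = True
            if len(cache) > cache_lines:
                cache.popitem(last=False)  # evict least recently used line
    return misses

def random_interleaving(traces, rng):
    """Randomly merge per-thread traces, preserving each thread's own order."""
    cursors = [0] * len(traces)
    merged = []
    live = [i for i, t in enumerate(traces) if t]
    while live:
        i = rng.choice(live)               # pick a thread to advance next
        merged.append(traces[i][cursors[i]])
        cursors[i] += 1
        if cursors[i] == len(traces[i]):
            live.remove(i)
    return merged

def predict_misses(traces, samples=10, seed=0):
    """Average shared-cache misses over a few sampled interleavings."""
    rng = random.Random(seed)
    totals = [simulate_lru(random_interleaving(traces, rng))
              for _ in range(samples)]
    return sum(totals) / len(totals)

# Two threads each streaming twice over their own 8-line array:
# together they overflow the 4-line shared cache and contend.
t0 = [i * LINE_SIZE for i in range(8)] * 2
t1 = [1024 + i * LINE_SIZE for i in range(8)] * 2
print(predict_misses([t0, t1]))
```

Sampling only a handful of interleavings, rather than enumerating all of them, mirrors the abstract's point that a small random subset of trace combinations suffices for a fast yet accurate estimate.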

DOI: 10.1109/IPDPSW.2014.73



@article{Tolubaeva2014PredictingCC,
  title   = {Predicting Cache Contention for Multithread Applications at Compile Time},
  author  = {Munara Tolubaeva and Yonghong Yan and Barbara M. Chapman},
  journal = {2014 IEEE International Parallel \& Distributed Processing Symposium Workshops},
  year    = {2014},
  pages   = {624-631}
}