Evaluation methodology underpins all innovation in experimental computer science. It requires relevant workloads, appropriate experimental design, and rigorous analysis. Unfortunately, methodology is not keeping pace with the changes in our field. The rise of managed languages such as Java, C#, and Ruby in the past decade and the imminent rise of commodity …
  • Stephen M. Blackburn, Robin Garner, Chris Hoffmann, Asjad M. Khan, Kathryn S. McKinley, Rotem Bentzur +14 others
  • 2006
Since benchmarks drive computer science research and industry product development, which ones we use and how we evaluate them are key questions for the community. Despite complex run-time tradeoffs due to dynamic compilation and garbage collection required for Java programs, many evaluations still use methodologies developed for C, C++, and Fortran. SPEC, …
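The run-time tradeoffs the abstract points to (JIT compilation warming up code, garbage collection perturbing timings) are why a C-style "run once and time it" methodology misleads for Java. A minimal Java sketch of the warm-up-then-measure idea, with all names hypothetical and not taken from the paper:

```java
// Hypothetical sketch: separate JIT warm-up iterations from timed
// (steady-state) iterations, so dynamic compilation cost is excluded
// from the reported measurements.
public class SteadyState {

    // Runs the workload `warmups` times untimed, then `timed` times
    // with per-iteration timing; returns the timed samples in ns.
    static long[] measure(Runnable workload, int warmups, int timed) {
        for (int i = 0; i < warmups; i++) {
            workload.run(); // let the JIT compile hot methods
        }
        long[] samples = new long[timed];
        for (int i = 0; i < timed; i++) {
            long start = System.nanoTime();
            workload.run();
            samples[i] = System.nanoTime() - start;
        }
        return samples;
    }

    public static void main(String[] args) {
        long[] t = measure(() -> {
            long sum = 0;
            for (int i = 0; i < 100_000; i++) sum += i;
        }, 10, 5);
        System.out.println("timed iterations: " + t.length);
    }
}
```

A fuller harness would also report variance across JVM invocations and control GC heap size, since both can dominate single-run numbers.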