Automated detection of performance regressions: the mono experience

@inproceedings{Kalibera2005AutomatedDO,
  title={Automated detection of performance regressions: the mono experience},
  author={Tomas Kalibera and Lubom{\'i}r Bulej and Petr Tuma},
  booktitle={13th IEEE International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems},
  year={2005},
  pages={183--190}
}
  • T. Kalibera, L. Bulej, P. Tuma
  • Published 27 September 2005
  • Computer Science
  • 13th IEEE International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems
Engineering a large software project involves tracking the impact of development and maintenance changes on the software performance. An approach for tracking the impact is regression benchmarking, which involves automated benchmarking and evaluation of performance at regular intervals. Regression benchmarking must tackle the nondeterminism inherent to contemporary computer systems and execution environments and the impact of the nondeterminism on the results. On the example of a fully… 

Mono Regression Benchmarking

The method for detecting performance regressions is described, which defines benchmark precision using the width of the confidence interval for the mean benchmark response time and bases the detection of changes on the non-overlapping of these confidence intervals.
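The non-overlap test described above can be sketched in a few lines. This is an illustrative sketch, not the paper's exact procedure: the helper names and the normal-approximation confidence interval are assumptions.

```python
import math
import random
import statistics

def mean_ci(samples, z=1.96):
    """95% confidence interval for the mean (normal approximation)."""
    m = statistics.mean(samples)
    half = z * statistics.stdev(samples) / math.sqrt(len(samples))
    return (m - half, m + half)

def regression_detected(old_times, new_times):
    """Flag a change when the two confidence intervals do not overlap."""
    lo_old, hi_old = mean_ci(old_times)
    lo_new, hi_new = mean_ci(new_times)
    return hi_old < lo_new or hi_new < lo_old

# Example: the new version is roughly 20% slower, with small noise.
random.seed(0)
old = [10.0 + random.gauss(0, 0.2) for _ in range(50)]
new = [12.0 + random.gauss(0, 0.2) for _ in range(50)]
print(regression_detected(old, new))  # non-overlapping intervals -> True
```

With tight intervals around clearly separated means, the test flags a change; two runs of the same version produce overlapping intervals and no flag.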

Mining Performance Regression Testing Repositories for Automated Performance Analysis

An automated approach to detect potential performance regressions in a performance regression test is presented, and case studies show that the approach scales well to large industrial systems, and detects performance problems that are often overlooked by performance analysts.

Perphecy: Performance Regression Test Selection Made Simple but Effective

This paper presents an approach to performance regression detection that is simple and general, and that works surprisingly well for real applications.

PReT: A Tool for Automatic Phase-Based Regression Testing

The usefulness of PReT is shown in correctly identifying two real world performance bugs in Cassandra database server and is able to characterize the performance tests being run for the software with higher accuracy than a purely resource utilization based characterization technique.

Automated benchmarking and analysis tool

The result of project BEEN, a generic tool for automated benchmarking in a heterogeneous distributed environment, is presented; it automates all steps of a benchmark experiment, from software building and deployment through measurement and load monitoring to the evaluation of results.

Measuring and Predicting Computer Software Performance: Tools and Approaches

Two performance prediction techniques are introduced: one to predict when code changes will cause performance changes during software development, and another to predict performance metrics on unavailable platforms using benchmark-based statistical models.

Automated root cause isolation in performance regression testing

The presented approach uses load tests to observe and analyze the performance of a system, such as the response times of methods, in order to detect performance regressions, i.e., deteriorations in performance between versions.

Automated analysis of load testing results

This dissertation proposes automated approaches to detect functional and performance problems in a load test by mining the recorded load testing data (execution logs and performance metrics) and outputs high precision results that help analysts detect load testing problems.

SPL: Unit Testing Performance

Stochastic Performance Logic is presented, a formalism for expressing performance requirements, together with interpretations that facilitate performance evaluation in the unit test context to demonstrate the ability to reflect typical developer concerns related to performance.
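As a rough illustration of the unit-test setting, a relative performance assertion in the spirit of (but not identical to) Stochastic Performance Logic might look like the sketch below; `slower_at_most` and the chosen factor are hypothetical.

```python
import statistics
import timeit

def slower_at_most(fn_a, fn_b, factor, runs=100):
    """Hypothetical performance-logic-style predicate: does fn_a's
    mean run time stay within `factor` times fn_b's mean run time?"""
    t_a = [timeit.timeit(fn_a, number=1) for _ in range(runs)]
    t_b = [timeit.timeit(fn_b, number=1) for _ in range(runs)]
    return statistics.mean(t_a) <= factor * statistics.mean(t_b)

# A unit test can then assert a relative performance requirement,
# e.g. that the larger workload stays within 1000x of the smaller one:
assert slower_at_most(lambda: sum(range(10000)),
                      lambda: sum(range(100)), 1000)
```

Expressing requirements relative to another method, rather than as absolute times, keeps such tests portable across machines.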

Mining Performance Regression Inducing Code Changes in Evolving Software

A novel recommendation system, coined PerfImpact, is proposed for automatically identifying code changes that may be responsible for performance regressions, using a combination of search-based input profiling and change impact analysis techniques.

References

SHOWING 1-10 OF 18 REFERENCES

Quality Assurance in Performance: Evaluating Mono Benchmark Results

Regression benchmarking provides means for automated monitoring of performance, yielding a list of software modifications potentially associated with performance changes; three methods are presented that help narrow down this list.

Mono Regression Benchmarking

The method for detecting performance regressions is described, which defines benchmark precision using the width of the confidence interval for the mean benchmark response time and bases the detection of changes on the non-overlapping of these confidence intervals.

Repeated results analysis for middleware regression benchmarking

Regression benchmarking with simple middleware benchmarks

  • L. Bulej, T. Kalibera, P. Tuma
  • Computer Science
    IEEE International Conference on Performance, Computing, and Communications, 2004
  • 2004
The paper shows why the existing benchmarks do not give results sufficient for regression benchmarking, and proposes techniques for detecting performance regressions using simple benchmarks.

Benchmark Precision and Random Initial State

A method for quantitatively assessing the influence of nondeterminism on a benchmark is suggested, as well as an approach that provides a plausible estimate of result precision in the face of the nondeterminism.
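One way to read this: estimate precision from the spread across whole independent benchmark executions (each with a fresh random initial state) rather than from samples within a single run. A minimal sketch, with an illustrative function and parameters rather than the paper's exact method:

```python
import math
import statistics

def precision_half_width(run_benchmark, executions=10, samples=30, z=1.96):
    # Repeat the whole benchmark several times and derive the
    # confidence half-width from the spread of per-execution means,
    # which captures run-to-run nondeterminism.
    means = [statistics.mean(run_benchmark() for _ in range(samples))
             for _ in range(executions)]
    return z * statistics.stdev(means) / math.sqrt(executions)

# Example: a noisy benchmark whose response time jitters around 10 ms.
import random
random.seed(1)
print(precision_half_width(lambda: 10 + random.gauss(0, 1)))
```

A perfectly deterministic benchmark yields a half-width of zero; the more the per-execution means scatter, the wider the reported precision.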

CORBA benchmarking: a course with hidden obstacles

This work points out common causes of imprecision related to the gathering of timing information and the effects of warm-up, randomization, cross talk and delayed or hidden functionality in the performance of CORBA middleware.

Extreme Programming Installed

This book describes what makes XP work, day to day and month to month, what the XP practices are, and how to install them in your project.

MCLUST: Software for Model-Based Clustering, Density Estimation and Discriminant Analysis

Abstract: MCLUST is a software package for model-based clustering, density estimation and discriminant analysis interfaced to the S-PLUS commercial software. It implements parameterized Gaussian…

Model-Based Clustering, Discriminant Analysis, and Density Estimation

This work reviews a general methodology for model-based clustering that provides a principled statistical approach to important practical questions that arise in cluster analysis, such as how many clusters are there, which clustering method should be used, and how should outliers be handled.

Scimark – c

  • http://rotor.cs.cornell.edu/SciMark/,
  • 2004