Automated detection of performance regressions: the Mono experience

@inproceedings{Kalibera2005AutomatedDO,
  title={Automated detection of performance regressions: the {Mono} experience},
  author={Tom{\'a}{\v s} Kalibera and Lubom{\'i}r Bulej and Petr T{\r u}ma},
  booktitle={13th IEEE International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems},
  year={2005},
  pages={183--190}
}
  • Published 27 September 2005
Engineering a large software project involves tracking the impact of development and maintenance changes on the software performance. An approach for tracking the impact is regression benchmarking, which involves automated benchmarking and evaluation of performance at regular intervals. Regression benchmarking must tackle the nondeterminism inherent to contemporary computer systems and execution environments and the impact of the nondeterminism on the results. On the example of a fully…
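The abstract describes regression benchmarking: periodic, automated benchmark runs whose results are compared across builds while accounting for measurement nondeterminism. As a minimal illustration (not the paper's actual statistical machinery), one common way to tolerate run-to-run noise is to flag a regression only when the mean slowdown is large relative to its standard error; the `detect_regression` helper and its `threshold` parameter below are hypothetical names for this sketch:

```python
import statistics

def detect_regression(baseline, candidate, threshold=3.0):
    """Flag a performance regression when the candidate build's mean
    run time exceeds the baseline's by more than `threshold` standard
    errors of the difference (a simple Welch-style comparison)."""
    mean_b = statistics.mean(baseline)
    mean_c = statistics.mean(candidate)
    # Standard error of the difference of means, from sample variances.
    se = (statistics.variance(baseline) / len(baseline)
          + statistics.variance(candidate) / len(candidate)) ** 0.5
    return (mean_c - mean_b) / se > threshold

baseline = [10.1, 10.3, 9.9, 10.2, 10.0]    # seconds, older build
candidate = [11.4, 11.6, 11.3, 11.5, 11.4]  # seconds, newer build
print(detect_regression(baseline, candidate))  # a clear slowdown
```

A real regression benchmarking pipeline, as the paper discusses, has to go further: repeated runs across machine restarts, warm-up handling, and more robust statistics than a single mean comparison.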
Mono Regression Benchmarking
Regression benchmarking is a methodology for detecting performance changes in software by periodic benchmarking. Detecting performance regressions in particular helps to improve software quality, …
Mining Performance Regression Testing Repositories for Automated Performance Analysis
An automated approach to detect potential performance regressions in a performance regression test is presented; case studies show that the approach scales well to large industrial systems and detects performance problems that are often overlooked by performance analysts.
Perphecy: Performance Regression Test Selection Made Simple but Effective
This paper presents an approach to performance regression detection that is simple and general, and that works surprisingly well for real applications.
PReT: A Tool for Automatic Phase-Based Regression Testing
PReT is shown to correctly identify two real-world performance bugs in the Cassandra database server, and characterizes the performance tests being run with higher accuracy than a purely resource-utilization-based characterization technique.
Automated benchmarking and analysis tool
The result of project BEEN is presented: a generic tool for automated benchmarking in a heterogeneous distributed environment that automates all steps of a benchmark experiment, from software building and deployment through measurement and load monitoring to the evaluation of results.
Measuring and Predicting Computer Software Performance: Tools and Approaches
Two performance prediction techniques are introduced: one to predict when code changes will cause performance changes during software development, and another to predict performance metrics on unavailable platforms using benchmark-based statistical models.
Automated root cause isolation in performance regression testing
The presented approach uses load tests to observe and analyze the performance of a system, such as the response times of methods, in order to detect performance regressions (deteriorations in performance) between versions.
Automated analysis of load testing results
This dissertation proposes automated approaches to detect functional and performance problems in a load test by mining the recorded load testing data (execution logs and performance metrics), and outputs high-precision results that help analysts detect load testing problems.
SPL: Unit Testing Performance
Unit testing is an attractive quality management tool in the software development process; however, practical obstacles make it difficult to use unit tests for performance testing. We present…
Mining Performance Regression Inducing Code Changes in Evolving Software
During software evolution, the source code of a system frequently changes due to bug fixes or new feature requests. Some of these changes may accidentally degrade the performance of a newly released…
References

Showing 1–10 of 18 references
Quality Assurance in Performance: Evaluating Mono Benchmark Results
Regression benchmarking provides a means for automated monitoring of performance, yielding a list of software modifications potentially associated with performance changes; the paper presents three methods that help narrow down the list of modifications possibly associated with a performance change.
Mono Regression Benchmarking
Regression benchmarking is a methodology for detecting performance changes in software by periodic benchmarking. Detecting performance regressions in particular helps to improve software quality, …
Repeated results analysis for middleware regression benchmarking
The paper outlines the concept of regression benchmarking as a variant of regression testing focused on detecting performance regressions, and proposes novel techniques for the repeated analysis of results for the purpose of detecting performance regressions.
Regression benchmarking with simple middleware benchmarks
  • L. Bulej, T. Kalibera, P. Tuma
  • IEEE International Conference on Performance, Computing, and Communications, 2004
The paper shows why the existing benchmarks do not give results sufficient for regression benchmarking, and proposes techniques for detecting performance regressions using simple benchmarks.
Benchmark Precision and Random Initial State
A method for quantitatively assessing the influence of nondeterminism on a benchmark is suggested, as well as an approach that provides a plausible estimate of result precision in the face of the nondeterminism.
CORBA benchmarking: a course with hidden obstacles
This work points out common causes of imprecision related to the gathering of timing information and the effects of warm-up, randomization, cross talk and delayed or hidden functionality in the performance of CORBA middleware.
Extreme Programming Installed
From the book's preface: How much would you pay for a software development team that would do what you want? Wait, don't answer yet—what if they could also tell you how much it would cost, so that you…
MCLUST: Software for Model-Based Clustering, Density Estimation and Discriminant Analysis
MCLUST is a software package for model-based clustering, density estimation and discriminant analysis interfaced to the S-PLUS commercial software. It implements parameterized Gaussian…
Enhanced Model-Based Clustering, Density Estimation, and Discriminant Analysis Software: MCLUST
MCLUST is a software package for model-based clustering, density estimation and discriminant analysis interfaced to the S-PLUS commercial software and the R language. It implements parameterized…
Model-Based Clustering, Discriminant Analysis, and Density Estimation
This work reviews a general methodology for model-based clustering that provides a principled statistical approach to important practical questions that arise in cluster analysis, such as how many clusters there are, which clustering method should be used, and how outliers should be handled.