CARAMEL: Detecting and Fixing Performance Problems That Have Non-Intrusive Fixes

@article{Nistor2015CARAMELDA,
  title={CARAMEL: Detecting and Fixing Performance Problems That Have Non-Intrusive Fixes},
  author={Adrian Nistor and Po-Chun Chang and Cosmin Radoi and Shan Lu},
  journal={2015 IEEE/ACM 37th IEEE International Conference on Software Engineering},
  year={2015},
  volume={1},
  pages={902-912}
}
Performance bugs are programming errors that slow down program execution. While existing techniques can detect various types of performance bugs, a crucial and practical aspect of performance bugs has not received the attention it deserves: how likely are developers to fix a performance bug? In practice, fixing a performance bug can have both benefits and drawbacks, and developers fix a performance bug only when the benefits outweigh the drawbacks. Unfortunately, for many performance bugs, the… 
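To make the idea of a non-intrusive fix concrete, the sketch below shows the kind of one-line "conditional break" fix that CARAMEL targets: a loop whose later iterations do no useful work once the result is known. The class and method names are invented for this illustration; they are not taken from the paper.

import java.util.List;

public class ContainsExample {
    // Before: the loop keeps scanning even after the element has been
    // found, so all remaining iterations are wasted computation.
    static boolean containsSlow(List<String> items, String target) {
        boolean found = false;
        for (String item : items) {
            if (item.equals(target)) {
                found = true;   // result is final here, yet the loop continues
            }
        }
        return found;
    }

    // After: the non-intrusive fix adds a single break statement, which
    // skips the wasted iterations without changing observable behavior.
    static boolean containsFixed(List<String> items, String target) {
        boolean found = false;
        for (String item : items) {
            if (item.equals(target)) {
                found = true;
                break;
            }
        }
        return found;
    }
}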
CP-Detector: Using Configuration-related Performance Properties to Expose Performance Bugs
TLDR
This paper argues that the performance expectation of a configuration can serve as a strong oracle for performance bug detection, and designs and evaluates CP-DETECTOR, an automated performance testing framework for detecting real-world configuration-related performance bugs.
A Survey Based on Performance-Bug Detection
TLDR
Finds little evidence that performance-bug fixes are more likely than other fixes to introduce new functional bugs, and concludes that developers need better techniques for testing performance, better test beds, and better profiling techniques to find performance bugs.
Mining Fix Patterns for FindBugs Violations
TLDR
To automatically identify patterns in violations and their fixes, this paper proposes an approach that utilizes convolutional neural networks to learn features and clustering to regroup similar instances, and evaluates the usefulness of the identified fix patterns by applying them to unfixed violations.
Performance problems you can fix: a dynamic analysis of memoization opportunities
TLDR
This paper presents MemoizeIt, a dynamic analysis that identifies methods that repeatedly perform the same computation and could therefore benefit from caching their results (memoization); the suggested optimizations lead to statistically significant speedups by factors between 1.04x and 12.93x.
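As a rough illustration of the memoization opportunity MemoizeIt looks for, the sketch below caches the result of a repeatedly invoked, side-effect-free computation. The method names and the cache layout are assumptions of this sketch, not part of the MemoizeIt tool itself.

import java.util.HashMap;
import java.util.Map;

public class MemoExample {
    private final Map<Integer, Double> cache = new HashMap<>();

    // Expensive, side-effect-free computation that callers often invoke
    // repeatedly with the same argument.
    private double expensive(int n) {
        double result = 0;
        for (int i = 1; i <= n; i++) {
            result += Math.sqrt(i);
        }
        return result;
    }

    // Memoized wrapper: repeated calls with the same input reuse the
    // previously computed result instead of recomputing it.
    public double memoized(int n) {
        return cache.computeIfAbsent(n, this::expensive);
    }
}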
PerfBlower: Quickly Detecting Memory-Related Performance Problems via Amplification
TLDR
PerfBlower provides a novel specification language, ISL, to describe a general class of performance problems that have observable symptoms, builds an automated test oracle via virtual amplification, and demonstrates that ISL is expressive enough to describe various memory-related performance problems.
Fixing Resource Leaks in Android Apps with Light-Weight Static Analysis and Low-Overhead Instrumentation
TLDR
This paper presents a light-weight approach to fixing the resource leak bugs that are widespread in Android apps, while guaranteeing that the generated patches do not interrupt normal execution of the original program.
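The general shape of such a fix is to ensure the leaked resource is released on every execution path. The sketch below shows that pattern with a plain java.io stream and try-with-resources as an analogue; the actual tool targets Android resources and inserts release operations via instrumentation, and the method and file names here are made up for illustration.

import java.io.FileInputStream;
import java.io.IOException;

public class LeakFixExample {
    // Leaky version: if read() throws, the stream is never closed.
    static int firstByteLeaky(String path) throws IOException {
        FileInputStream in = new FileInputStream(path);
        int b = in.read();
        in.close();
        return b;
    }

    // Fixed version: try-with-resources guarantees the stream is closed
    // on every path, without otherwise changing the method's behavior.
    static int firstByteFixed(String path) throws IOException {
        try (FileInputStream in = new FileInputStream(path)) {
            return in.read();
        }
    }
}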
An Empirical Design and Code Metrics for Prediction of Software Defects
TLDR
This work builds several defect-prediction models using machine learning algorithms (C4.5 decision trees, Naive Bayes, Bayesian networks, and logistic regression) for comparison with the proposed novel algorithm, and shows that the attributes used to predict the studied metrics are similar to those used for functional errors.
TANDEM: A Taxonomy and a Dataset of Real-World Performance Bugs
TLDR
This paper proposes a taxonomy of performance bugs based on a thorough systematic review of the related literature, divided into three main categories (the effects, causes, and contexts of bugs), and provides a complete collection of fully documented real-world performance bugs.
Towards Automated Performance Bug Identification in Python
TLDR
The empirical results show that a C4.5 model using lines of code changed and a file's age and size as explanatory variables can be used to predict performance bugs, and that reducing the number of changes delivered in a commit can decrease the chance of introducing performance bugs.
On Automatic Detection of Performance Bugs
TLDR
The empirical results show that a C4.5 model using lines of code changed and a file's age and size as explanatory variables can be used to predict performance bugs, and that reducing the number of changes delivered in a commit can decrease the chance of introducing performance bugs.
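The two entries above both build C4.5 models over simple change metrics. As a rough sketch of how such a model could be trained, the snippet below uses Weka's J48 implementation of C4.5 on a hypothetical ARFF dataset; Weka, the file name, and the attribute layout are assumptions of this sketch and are not prescribed by either paper.

import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class PerfBugPredictor {
    public static void main(String[] args) throws Exception {
        // Hypothetical dataset: one row per commit/file with attributes
        // such as linesChanged, fileAgeDays, fileSizeLoc and a nominal
        // class {performance-bug, clean}. The file name is made up.
        DataSource source = new DataSource("commit-metrics.arff");
        Instances data = source.getDataSet();
        data.setClassIndex(data.numAttributes() - 1); // last column is the class

        J48 tree = new J48();          // Weka's implementation of C4.5
        tree.buildClassifier(data);
        System.out.println(tree);      // prints the learned decision tree
    }
}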

References

Showing 1-10 of 62 references
Toddler: Detecting performance problems via similar memory-access patterns
TLDR
This paper presents Toddler, a novel automated oracle for performance bugs, which enables testing for performance bugs to use the well-established and automated process of testing for functional bugs; Toddler is implemented for Java.
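As a hypothetical example of the kind of code Toddler's memory-access-pattern oracle flags, the method below rescans the same list from the start on every outer iteration, producing the repetitive access pattern described in the paper. The names and the suggested fix are invented for this sketch.

import java.util.List;

public class NestedScanExample {
    // The contains() call performs a linear scan of 'sorted' on every
    // outer iteration, so the same memory locations are read in the same
    // order over and over -- the signal a Toddler-style oracle looks for.
    static int countPresent(List<Integer> items, List<Integer> sorted) {
        int count = 0;
        for (Integer item : items) {
            if (sorted.contains(item)) {
                count++;
            }
        }
        return count;
    }
    // A typical fix replaces the repeated scans with a single HashSet
    // built up front, turning each lookup into constant time.
}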
Understanding and detecting real-world performance bugs
Developers frequently use inefficient code sequences that could be fixed by simple patches. These inefficient code sequences can cause significant performance degradation and resource waste, referred to as performance bugs.
Performance debugging in the large via mining millions of stack traces
TLDR
To enable performance debugging in the large in practice, this paper proposes StackMine, a novel approach that mines call-stack traces to help performance analysts effectively discover highly impactful performance bugs (e.g., bugs impacting many users with long response delays).
Generating Fixes from Object Behavior Anomalies
TLDR
The new PACHIKA tool leverages differences in program behavior to generate program fixes directly: it automatically summarizes executions into object behavior models, determines differences between passing and failing runs, generates possible fixes, and assesses them via the regression test suite.
Statistical debugging for real-world performance problems
TLDR
This paper conducts an empirical study to understand how performance problems are observed and reported by real-world users, and shows that statistical debugging is a natural fit for diagnosing performance problems, which are often observed through comparison-based approaches and reported together with both good and bad inputs.
Finding latent performance bugs in systems implementations
TLDR
This work presents techniques that can automatically pinpoint latent performance bugs in systems implementations, in the spirit of recent advances in model checking by systematic state space exploration, by automating the process of conducting random simulations, identifying performance anomalies, and analyzing anomalous executions to pinpoint the circumstances leading to performance degradation.
Performance regression testing of concurrent classes
TLDR
This paper presents SpeedGun, an automatic performance regression testing technique for thread-safe classes that generates multi-threaded performance tests and compares two versions of a class with each other.
A systematic study of automated program repair: Fixing 55 out of 105 bugs for $8 each
TLDR
This paper evaluates GenProg, which uses genetic programming to repair defects in off-the-shelf C programs, and proposes novel algorithmic improvements that allow it to scale to large programs and find repairs 68% more often.
Performance regression testing target prioritization via performance risk analysis
TLDR
This paper proposes performance risk analysis (PRA), a new lightweight, white-box approach that improves performance regression testing efficiency via testing target prioritization: the analysis results are used to test high-risk commits first while delaying or skipping testing of low-risk commits.
The road not taken: Estimating path execution frequency statically
TLDR
This work presents a descriptive statistical model of path frequency based on features that can be readily obtained from a program's source code, and demonstrates its robustness by measuring its performance as a static branch predictor, finding it to be more accurate than previous approaches on average.