Leveraging Models to Reduce Test Cases in Software Repositories

@article{Gharachorlu2021LeveragingMT,
  title={Leveraging Models to Reduce Test Cases in Software Repositories},
  author={Golnaz Gharachorlu and Nick Sumner},
  journal={2021 IEEE/ACM 18th International Conference on Mining Software Repositories (MSR)},
  year={2021},
  pages={230-241}
}
  • Golnaz Gharachorlu, Nick Sumner
  • Published 22 March 2021
  • Computer Science
  • 2021 IEEE/ACM 18th International Conference on Mining Software Repositories (MSR)
Given a failing test case, test case reduction yields a smaller test case that reproduces the failure. This process can be time-consuming due to repeated trial and error with smaller test cases. Current techniques speed up reduction by exploring only syntactically valid candidates, but they still spend significant effort on semantically invalid candidates. In this paper, we propose a model-guided approach to speed up test case reduction. The approach trains a model of semantic properties driven…
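As an illustration of the kind of loop the abstract describes, here is a minimal sketch of model-guided reduction in which a learned model screens candidates before the expensive failure oracle runs. This is not the authors' implementation; candidates, oracle, and score are hypothetical callables standing in for candidate enumeration, the failure check, and the trained model's semantic-validity estimate.

def model_guided_reduce(test, candidates, oracle, score, threshold=0.5):
    # Greedily accept smaller variants for as long as progress is made.
    changed = True
    while changed:
        changed = False
        for variant in candidates(test):
            # Skip variants the model predicts are semantically invalid,
            # avoiding an expensive oracle invocation.
            if score(variant) < threshold:
                continue
            if oracle(variant):  # the failure still reproduces
                test = variant
                changed = True
                break
    return test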


References

SHOWING 1-10 OF 27 REFERENCES
Avoiding the Familiar to Speed Up Test Case Reduction
TLDR
This work explores the possibility that good test case reduction can be achieved without revisiting previously considered candidates, yielding an O(n) algorithm, and shows that on a suite of large fuzzer-generated test cases for compilers, the O(n) approach yields reduced test cases of similar size while decreasing reduction time by 65% on average.
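A minimal sketch of the one-pass, no-revisiting idea summarized above, assuming only a hypothetical still_fails oracle that returns True when a candidate still reproduces the failure; this is an illustration, not the paper's algorithm.

def one_pass_reduce(units, still_fails):
    # `units` is the failing input split into atomic pieces (e.g. lines or tokens).
    kept = []
    for i, unit in enumerate(units):
        # Tentatively drop the current unit; keep the drop if the failure persists.
        candidate = kept + units[i + 1:]
        if still_fails(candidate):
            continue          # removed for good and never revisited
        kept.append(unit)     # needed, and likewise never reconsidered
    return kept

Each unit is queried at most once, which is what yields the linear number of oracle invocations.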
Pardis: Priority Aware Test Case Reduction
TLDR
Pardis, a technique for priority aware test case reduction that avoids priority inversion, is proposed; it reduces test cases 1.3x to 7.8x faster and with 46% to 80% fewer queries.
Test-case reduction for C compiler bugs
TLDR
It is concluded that effective program reduction requires more than straightforward delta debugging, so three new, domain-specific test-case reducers are designed and implemented based on a novel framework in which a generic fixpoint computation invokes modular transformations that perform reduction operations.
Modernizing hierarchical delta debugging
TLDR
It is argued that using extended context-free grammars with HDD is beneficial in several ways, and the experimental evaluation of the modernized HDD implementation, called Picireny, supports the outlined ideas: the reduced outputs are significantly smaller on the investigated test cases than those produced by the reference HDD implementation using standard context-free grammars.
Practical Improvements to the Minimizing Delta Debugging Algorithm
TLDR
This paper investigates how the well-known minimizing Delta Debugging algorithm performs nowadays, especially (but not exclusively) focusing on its parallelization potential, and presents new improvement ideas along with algorithm variants given formally and in pseudo-code.
Automatically reducing tree-structured test inputs
TLDR
GTR, an effective and efficient technique to reduce arbitrary test inputs that can be represented as a tree, such as program code, PDF files, and XML documents, is presented; it automatically specializes the tree transformations applied by the algorithm based on examples of input trees.
Finding and understanding bugs in C compilers
TLDR
Csmith, a randomized test-case generation tool, was created and used over three years to find compiler bugs, and a collection of qualitative and quantitative results about the bugs it found is presented.
Simplifying and Isolating Failure-Inducing Input
TLDR
The delta debugging algorithm generalizes and simplifies the failing test case to a minimal test case that still produces the failure, and isolates the difference between a passing and a failing test case.
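For reference, a compact sketch of the classic ddmin algorithm after Zeller and Hildebrandt; still_fails is a hypothetical oracle returning True when a candidate still reproduces the failure.

def split(seq, n):
    # Split seq into at most n roughly equal, contiguous chunks.
    k, m = divmod(len(seq), n)
    chunks, start = [], 0
    for i in range(n):
        size = k + (1 if i < m else 0)
        chunks.append(seq[start:start + size])
        start += size
    return [c for c in chunks if c]

def ddmin(inp, still_fails):
    # `inp` is a list of atomic units (e.g. lines or characters).
    n = 2  # current granularity
    while len(inp) >= 2:
        chunks = split(inp, n)
        reduced = False
        for i, chunk in enumerate(chunks):
            # Try each chunk on its own, then its complement, as a smaller candidate.
            if still_fails(chunk):
                inp, n, reduced = chunk, 2, True
                break
            complement = [x for j, c in enumerate(chunks) if j != i for x in c]
            if still_fails(complement):
                inp, n, reduced = complement, max(n - 1, 2), True
                break
        if not reduced:
            if n >= len(inp):
                break                     # granularity cannot be refined further
            n = min(n * 2, len(inp))      # increase granularity and retry
    return inp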
HDDr: a recursive variant of the hierarchical Delta debugging algorithm
TLDR
This paper builds on and improves the hierarchical minimization algorithm and experiments with a recursive variant called HDDr, which can give minimal results in 29–65% less time than the baseline hierarchical algorithm.
Perses: Syntax-Guided Program Reduction
TLDR
The key insight is to exploit, in a general manner, the formal syntax of the programs under reduction and ensure that each reduction step considers only smaller, syntactically valid variants to avoid futile efforts on syntactically invalid variants.
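A minimal sketch of syntax-guided filtering in the spirit described above, not the Perses implementation; smaller_variants, try_parse, and oracle are hypothetical helpers for candidate enumeration, a grammar-based parse check, and the failure check.

def syntax_guided_step(test, smaller_variants, try_parse, oracle):
    for variant in smaller_variants(test):
        if not try_parse(variant):   # discard syntactically invalid variants up front
            continue
        if oracle(variant):          # the failure still reproduces on a valid variant
            return variant
    return test                      # no smaller failing variant found this step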