Automatic test improvement with DSpot: a study with ten mature open-source projects

@article{Danglot2019AutomaticTI,
  title={Automatic test improvement with DSpot: a study with ten mature open-source projects},
  author={Benjamin Danglot and Oscar Luis Vera-P{\'e}rez and Beno{\^i}t Baudry and Martin Monperrus},
  journal={Empirical Software Engineering},
  year={2019},
  pages={1--33}
}
In the literature, there is a rather clear segregation between tests manually written by developers and tests generated automatically. In this paper, we explore a third solution: automatically improving existing test cases written by developers. We present the concept, design, and implementation of a system called DSpot, which takes developer-written test cases as input (JUnit tests in Java) and synthesizes improved versions of them as output. Those test improvements are given back to developers…
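To make the idea of test amplification concrete, the sketch below shows, in plain Java, what the two ingredients described above can look like: an original developer-written test, and an amplified variant with a mutated input plus newly generated assertions. This is purely illustrative; the `BoundedStack` class, the tests, and the `check` helper are hypothetical and do not come from DSpot, whose actual transformations are more sophisticated and operate on real JUnit tests.

```java
// Illustrative sketch of test amplification, assuming a hypothetical
// BoundedStack class under test. Not DSpot's actual code or output.
import java.util.ArrayDeque;
import java.util.Deque;

public class AmplificationSketch {

    // Hypothetical class under test: a stack that rejects pushes when full.
    static final class BoundedStack {
        private final Deque<Integer> items = new ArrayDeque<>();
        private final int capacity;

        BoundedStack(int capacity) { this.capacity = capacity; }

        boolean push(int value) {
            if (items.size() >= capacity) return false; // reject when full
            items.push(value);
            return true;
        }

        int size() { return items.size(); }
    }

    // Original, developer-written test: one input, one assertion.
    static void originalTest() {
        BoundedStack stack = new BoundedStack(2);
        check(stack.push(1), "push on non-full stack succeeds");
    }

    // Amplified variant: the input is mutated (pushing past capacity),
    // and extra assertions capture behavior the original test never checked.
    static void amplifiedTest() {
        BoundedStack stack = new BoundedStack(2);
        check(stack.push(1), "first push succeeds");
        check(stack.push(2), "second push succeeds");
        check(!stack.push(3), "push on a full stack is rejected"); // new input
        check(stack.size() == 2, "size stays at capacity");        // new assertion
    }

    // Minimal stand-in for a JUnit assertion.
    static void check(boolean condition, String message) {
        if (!condition) throw new AssertionError(message);
    }

    public static void main(String[] args) {
        originalTest();
        amplifiedTest();
        System.out.println("both tests pass");
    }
}
```

The amplified test strengthens the original one on both axes the abstract mentions: it explores a new input (exceeding capacity) and adds assertions that pin down the observed behavior.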
Developer-centric test amplification
TLDR
This paper conducts 16 semi-structured interviews with software developers, supported by prototypical designs of a developer-centric test amplification approach and a corresponding test exploration tool, and extends the test amplification tool DSpot to generate test cases that are easier to understand.
Automatic Unit Test Amplification For DevOps
TLDR
This thesis addresses the lack of tools that assist developers in regression testing through test suite amplification, and proposes a new approach based on both test input transformation and assertion generation to amplify the test suite.
Small-Amp: Test Amplification in a Dynamically Typed Language
TLDR
This work proposes to exploit profiling information — readily obtainable by executing the associated test suite — to infer the necessary type information, creating special test inputs with corresponding assertions, and concludes that test amplification is feasible for dynamically typed languages.
Removing Redundant Statements in Amplified Test Cases
TLDR
This work designs and evaluates a static analysis approach that is implemented as part of the test amplification tool DSpot and shows that it works well: while being rudimentary, it is able to remove a significant portion of the redundant statements in the amplified test cases.
An approach and benchmark to detect behavioral changes of commits in continuous integration
TLDR
The approach DCI (Detecting behavioral changes in CI) works by generating variations of the existing test cases through assertion amplification and a search-based exploration of the input space and is able to generate test methods that detect behavioral changes.
AmPyfier: Test Amplification in Python
TLDR
AmPyfier is presented, a proof-of-concept tool that brings test amplification to the dynamically typed, interpreted language Python, demonstrating that test amplification is feasible for one of the most popular programming languages in use today.
Production Monitoring to Improve Test Suites
TLDR
An approach called PANKTI is devised that monitors applications as they execute in production and then automatically generates unit tests from the collected production data; the evaluation shows that the generated tests indeed improve the quality of the test suite of the application under consideration.
Suggestions on Test Suite Improvements with Automatic Infection and Propagation Analysis
TLDR
Reneri, a tool that observes the program under test and the test suite in order to determine runtime differences between test runs on the original and the transformed method, is developed; these differences are processed by Reneri to suggest possible improvements to the developers.
A comparative study of test code clones and production code clones
A platform for diversity-driven test amplification
TLDR
This paper introduces a new approach that uses the seed tests to search for existing redundant implementations of the software under test and leverages them as oracles in the generation and evaluation of new tests.
...
...

References

SHOWING 1-10 OF 37 REFERENCES
Does Automated Unit Test Generation Really Help Software Testers? A Controlled Empirical Study
TLDR
It is found that, on the one hand, tool support leads to clear improvements in commonly applied quality metrics such as code coverage (up to a 300% increase); on the other hand, there was no measurable improvement in the number of bugs actually found by developers.
Leveraging existing tests in automated test generation for web applications
TLDR
This paper proposes to mine the human knowledge present in human-written test suites — in the form of input values, event sequences, and assertions — combine that inferred knowledge with the power of automated crawling, and extend the test suite to cover the uncovered and unchecked portions of the web application under test.
Augmenting Automatically Generated Unit-Test Suites with Regression Oracle Checking
TLDR
Results show that an automatically generated test suite's fault-detection capability can be effectively improved after being augmented by Orstra, and the augmented test suite has an improved capability of guarding against regression faults.
DART: directed automated random testing
TLDR
DART is a new tool for automatically testing software that combines three main techniques: automated extraction of the interface of a program with its external environment using static source-code parsing, automatic generation of a test driver for random testing through that interface, and dynamic analysis of how the program behaves under random testing with automatic generation of new test inputs to direct the execution systematically along alternative program paths.
Generating Effective Integration Test Cases from Unit Ones
TLDR
A new technique to generate integration test cases that leverages existing unit test cases is presented and it is shown that the generated test cases can find interesting faults, compared to test suites generated with state of the art approaches.
Isomorphic regression testing: executing uncovered branches without test augmentation
TLDR
This paper presents the first implementation of isomorphic regression testing through an approach named ISON, which creates program variants by negating branch conditions, and detects a number of faults not detected by a popular automated test generation tool under the scenario of regression testing.
Whole Test Suite Generation
TLDR
This work proposes a novel paradigm in which whole test suites are evolved with the aim of covering all coverage goals at the same time while keeping the total size as small as possible, and implements this approach in the EvoSuite tool.
A novel co-evolutionary approach to automatic software bug fixing
  • A. Arcuri, X. Yao
  • Computer Science
    2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence)
  • 2008
TLDR
This paper proposes an evolutionary approach to automate the task of fixing bugs based on co-evolution, in which programs and test cases co-evolve, influencing each other with the aim of fixing the bugs of the programs.
...
...