The Emerging Field of Test Amplification: A Survey

@article{Danglot2017TheEF,
  title={The Emerging Field of Test Amplification: A Survey},
  author={Benjamin Danglot and Oscar Luis Vera-P{\'e}rez and Zhongxing Yu and Martin Monperrus and Beno{\^i}t Baudry},
  journal={ArXiv},
  year={2017},
  volume={abs/1705.10692}
}

Citations

Automatic test improvement with DSpot: a study with ten mature open-source projects
TLDR
This paper presents the concept, design, and implementation of DSpot, a system that takes developer-written test cases as input (JUnit tests in Java) and synthesizes improved versions of them as output, and shows that DSpot can automatically improve unit tests in real-world, large-scale Java software.
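To make that kind of transformation concrete, below is a minimal hand-written sketch of input and assertion amplification on a JUnit 4 test; the Counter class, the test names, and the amplified variant are hypothetical illustrations, not actual DSpot output.

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;
    import org.junit.Test;

    // Hypothetical class under test, included only to keep the sketch self-contained.
    class Counter {
        private int value;
        public void increment() { value++; }
        public int get() { return value; }
        public boolean isPositive() { return value > 0; }
    }

    public class CounterAmplifiedTest {

        // Developer-written seed test, as it might look before amplification.
        @Test
        public void testIncrement() {
            Counter c = new Counter();
            c.increment();
            assertEquals(1, c.get());
        }

        // Amplified variant: the input is mutated (an extra increment() call) and
        // new assertions are synthesized from values observed during execution.
        @Test
        public void testIncrementAmplified() {
            Counter c = new Counter();
            c.increment();
            c.increment();               // input amplification: duplicated call
            assertEquals(2, c.get());    // assertion amplification: regenerated oracle
            assertTrue(c.isPositive());  // assertion amplification: extra observation
        }
    }
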
How Developers Engineer Test Cases: An Observational Study
TLDR
NATHIC, an approach to generate names for amplified test cases based on the methods they additionally cover compared to the existing test suite, is presented, and it is shown that the test names generated by NATHIC are valued similarly to names written by experts.
Can We Increase the Test-coverage in Libraries using Dependent Projects’ Test-suites?
TLDR
The potential of using tests from dependent projects to increase the code coverage of base packages is explored, and it is argued that a tool that generates tests for a base package from the tests in its dependent projects would help to strengthen the base package's test suite.
Practical Amplification of Condition/Decision Test Coverage by Combinatorial Testing
  • A. Andrzejak, Thomas Bach
  • Computer Science
    2018 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW)
  • 2018
TLDR
An approach that combines combinatorial testing and input space modeling to further increase C/D coverage is introduced, and it is demonstrated that it is possible to generate from integration tests new suites of unit tests with high C/D coverage but only a few test cases.
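As a rough illustration of condition/decision (C/D) coverage from only a few combinatorially chosen test cases, the standalone Java sketch below exercises an invented decision a && (b || c) with three hand-picked input combinations that make every condition and the overall decision take both truth values; it is not based on the paper's tooling.

    import java.util.List;

    public class ConditionDecisionSketch {

        // Example decision with three conditions; invented for illustration only.
        static boolean decision(boolean a, boolean b, boolean c) {
            return a && (b || c);
        }

        public static void main(String[] args) {
            // Three of the eight possible combinations suffice here: each condition
            // evaluates to both true and false across the set, and the decision
            // takes both outcomes, i.e. condition/decision coverage is reached.
            List<boolean[]> inputs = List.of(
                new boolean[] { true,  true,  false },  // decision true, via b
                new boolean[] { true,  false, true  },  // decision true, via c
                new boolean[] { false, false, false }   // decision false
            );

            for (boolean[] in : inputs) {
                System.out.printf("a=%-5b b=%-5b c=%-5b -> decision=%b%n",
                        in[0], in[1], in[2], decision(in[0], in[1], in[2]));
            }
        }
    }
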
Small-Amp: Test Amplification in a Dynamically Typed Language
TLDR
This work proposes to exploit profiling information, readily obtainable by executing the associated test suite, to infer the type information needed to create special test inputs with corresponding assertions, concluding that test amplification is feasible for dynamically typed languages.
Enhancing POI Testing Through the Use of Additional Information
TLDR
This paper presents a method to improve POI testing by including additional context information for a certain type of POIs, which enables new comparison modes and a categorization of unexpected behaviours.
A Method for Finding Missing Unit Tests
  • Daniel Gaston, J. Clause
  • Computer Science
    2020 IEEE International Conference on Software Maintenance and Evolution (ICSME)
  • 2020
TLDR
This work finds code that is missing tests by identifying code entities that are not tested in the same way as similar entities, and shows how a code entity with a missing test should be tested by leveraging the tests written for those similar entities.
AmPyfier: Test Amplification in Python
TLDR
AmPyfier, a proof-of-concept tool that brings test amplification to the dynamically typed, interpreted language Python, is presented, and it is demonstrated that test amplification is feasible for one of the most popular programming languages in use today.
Type Profiling to the Rescue: Test Amplification in Python and Smalltalk
TLDR
The AnSyMo research group has created two proof-of-concept tools for languages without a static type system, AmPyfier and Small-Amp; this tool demonstration paper explains how they rely on profiling libraries present in the respective ecosystems to infer the type information needed to enable full-blown test amplification.
Deviation Testing: A Test Case Generation Technique for GraphQL APIs
TLDR
This work proposes a simple but expressive technique called deviation testing that automatically searches for anomalies in the way a schema is served and demonstrates the feasibility of this approach using an implementation of GraphQL for Pharo and VisualWorks.
...
...

References

Shadow of a Doubt: Testing for Divergences between Software Versions
TLDR
A symbolic execution-based technique designed to generate test inputs that cover the new program behaviours introduced by a patch is evaluated on the Coreutils patches from the CoREBench suite of regression bugs, showing that it is able to generate test inputs that exercise newly added behaviours and expose some of the regression bugs.
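Setting the symbolic execution machinery aside, the core idea of hunting for divergences between versions can be pictured as running the pre-patch and post-patch implementations on the same inputs and flagging any input on which they disagree; the two versions below are hypothetical stand-ins, not code from the paper or from Coreutils.

    public class DivergenceSketch {

        // Hypothetical pre-patch behaviour: clamps negative values to zero.
        static int oldVersion(int x) {
            return x < 0 ? 0 : x;
        }

        // Hypothetical post-patch behaviour: the patch also caps values at 100.
        static int newVersion(int x) {
            if (x < 0) return 0;
            return Math.min(x, 100);
        }

        public static void main(String[] args) {
            // Drive both versions with the same inputs and report divergences,
            // i.e. inputs that exercise behaviour introduced by the patch.
            for (int x = -5; x <= 150; x += 5) {
                int before = oldVersion(x);
                int after = newVersion(x);
                if (before != after) {
                    System.out.printf("divergence at x=%d: old=%d new=%d%n",
                            x, before, after);
                }
            }
        }
    }
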
Leveraging existing tests in automated test generation for web applications
TLDR
This paper proposes to mine the human knowledge present in human-written test suites, in the form of input values, event sequences, and assertions, and to combine that inferred knowledge with the power of automated crawling to extend the test suite to uncovered/unchecked portions of the web application under test.
Mutation-oriented test data augmentation for GUI software fault localization
KATCH: high-coverage testing of software patches
TLDR
The results show that KATCH can automatically synthesise inputs that significantly increase the patch coverage achieved by the existing manual test suites, and find bugs at the moment they are introduced.
Cross-checking oracles from intrinsic software redundancy
TLDR
An experimental evaluation shows that cross-checking oracles, used in combination with automatic test generation techniques, can be very effective in revealing faults, and that they can even improve good hand-written test suites.
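To give a flavour of a cross-checking oracle derived from intrinsic redundancy, the sketch below uses two java.util call sequences that should be equivalent (add versus addAll of a singleton list) as mutual oracles; it is a simplification written for this summary, not the technique's actual implementation.

    import java.util.ArrayList;
    import java.util.List;

    public class CrossCheckOracleSketch {

        public static void main(String[] args) {
            String input = "amplify";

            // Two intrinsically redundant ways of achieving the same effect:
            // adding one element directly, versus adding it wrapped in a
            // singleton list.
            List<String> viaAdd = new ArrayList<>();
            viaAdd.add(input);

            List<String> viaAddAll = new ArrayList<>();
            viaAddAll.addAll(List.of(input));

            // Cross-checking oracle: the two redundant executions must agree.
            // A disagreement would signal a fault without any hand-written
            // assertion about the expected value itself.
            if (!viaAdd.equals(viaAddAll)) {
                throw new AssertionError(
                        "redundant executions diverged: " + viaAdd + " vs " + viaAddAll);
            }
            System.out.println("redundant executions agree: " + viaAdd);
        }
    }
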
Applying aggressive propagation-based strategies for testing changes
TLDR
This paper presents a new and more efficient approach for propagation-based testing of changes that can reach much longer propagation distances and can focus the testing more precisely on those behaviors of the change that can actually affect the output.
Test-Suite Augmentation for Evolving Software
TLDR
The results show that the proposed MATRIX technique is practical and more effective than existing test-suite augmentation approaches in identifying test cases with high fault-detection capabilities.
Predictive testing: amplifying the effectiveness of software testing
TLDR
A novel technique is proposed that leverages the results of unit testing to hoist assertions located deep inside the body of a unit function to the beginning of the unit function, which enables predictive testing to encounter assertions more often in test executions and thereby significantly amplifies the effectiveness of testing.
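The hoisting step can be pictured with a small before/after example: an assertion buried deep in a unit function is hoisted to the function entry so that test executions encounter it far more often; the function and its asserted condition are hypothetical, chosen so that the condition's operands are not modified before the original assertion point.

    public class AssertionHoistingSketch {

        // Original form: the assertion sits deep in the function and is only
        // reached when the early-return branch is not taken.
        static int originalProcess(int[] data, int threshold) {
            if (data.length == 0) {
                return 0;
            }
            int sum = 0;
            for (int d : data) {
                sum += d;
            }
            assert threshold > 0 : "threshold must be positive";  // deep assertion
            return sum / threshold;
        }

        // Hoisted form: threshold is never modified before the original assertion,
        // so the same check can be moved to the function entry, where any test
        // that merely calls the function already exercises it.
        static int hoistedProcess(int[] data, int threshold) {
            assert threshold > 0 : "threshold must be positive";  // hoisted assertion
            if (data.length == 0) {
                return 0;
            }
            int sum = 0;
            for (int d : data) {
                sum += d;
            }
            return sum / threshold;
        }

        public static void main(String[] args) {
            // Run with -ea: the original form silently returns 0 for this call,
            // while the hoisted form reports the invalid threshold immediately.
            System.out.println(originalProcess(new int[0], -1));
            System.out.println(hoistedProcess(new int[0], -1));
        }
    }
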
A Survey on Automatic Test Data Generation
TLDR
This article presents a survey on automatic test data generation techniques that can be found in current literature, and the focus of this article is program-based generation, where the generation starts from the actual programs.
Eclat: Automatic Generation and Classification of Test Inputs
TLDR
A technique is presented that selects, from a large set of test inputs, a small subset likely to reveal faults in the software under test; it is framed as an error-detection technique and is implemented in the Eclat tool.
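A drastically simplified way to picture that select-and-classify step is to generate many candidate inputs, run the unit under test on each, and keep only the small subset whose execution looks anomalous; the unit under test and the exception-based classification below are invented for illustration, and Eclat's actual classification, which relies on operational models learned from passing runs, is omitted.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    public class InputClassificationSketch {

        // Hypothetical unit under test with a seeded defect: it cannot handle 0.
        static int unitUnderTest(int x) {
            return 100 / x;  // throws ArithmeticException when x == 0
        }

        public static void main(String[] args) {
            Random random = new Random(42);
            List<Integer> candidates = new ArrayList<>();
            for (int i = 0; i < 1000; i++) {
                candidates.add(random.nextInt(21) - 10);  // inputs in [-10, 10]
            }

            // Keep only the inputs whose execution looks anomalous; these are the
            // candidates most likely to reveal a fault.
            List<Integer> suspicious = new ArrayList<>();
            for (int input : candidates) {
                try {
                    unitUnderTest(input);
                } catch (RuntimeException e) {
                    suspicious.add(input);
                }
            }
            System.out.println("candidates: " + candidates.size()
                    + ", selected as likely fault-revealing: " + suspicious.size());
        }
    }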
...
...