The Integration of Machine Learning into Automated Test Generation: A Systematic Literature Review

Afonso Fontes and Gregory Gay
Context: Machine learning (ML) may enable effective automated test generation. Objectives: We characterize emerging research, examining testing practices, researcher goals, ML techniques applied, evaluation, and challenges. Methods: We perform a systematic literature review on a sample of 97 publications. Results: ML generates input for system, GUI, unit, performance, and combinatorial testing or improves the performance of existing generation methods. ML is also used to generate test verdicts… 



Using machine learning to generate test oracles: a systematic literature review

Emerging research in this area is characterized through a systematic literature review examining oracle types, researcher goals, the ML techniques applied, how the generation process was assessed, and the open research challenges in this field.

Machine learning-assisted performance testing

This research uses model-free reinforcement learning to build a self-adaptive, autonomous stress-testing framework that learns the optimal policy for stress test case generation without requiring a model of the system under test.
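The core idea above, learning a stress policy without a model of the system under test, can be sketched with tabular Q-learning (one model-free method). Everything here is an illustrative assumption: respond() is a hypothetical simulator standing in for executing the real system, and the states, actions, and reward are simplified stand-ins, not the paper's actual design.

```python
import random

ACTIONS = [-10, 0, 10, 25]            # hypothetical workload adjustments, req/s
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2  # learning rate, discount, exploration

def respond(load):
    """Stand-in for the SUT: latency (seconds) grows with load."""
    return 1.0 + (load / 50.0) ** 2

def bucket(latency):
    """Discretize latency so the tabular policy has a small state space."""
    return min(int(latency), 10)

random.seed(0)
q = {}                                # (state, action) -> value estimate
load = 50
state = bucket(respond(load))
peak = respond(load)                  # highest latency observed so far

for _ in range(2000):
    # Epsilon-greedy action selection: mostly exploit, sometimes explore.
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
    load = max(0, min(200, load + action))
    latency = respond(load)
    peak = max(peak, latency)
    nxt = bucket(latency)
    reward = latency                  # reward higher stress; no SUT model needed
    best_next = max(q.get((nxt, a), 0.0) for a in ACTIONS)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)
    state = nxt
```

Because the reward is simply the observed latency, the agent is pushed toward workloads that stress the system, which mirrors the framework's goal of finding stressful test cases autonomously.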

Generating Test Input with Deep Reinforcement Learning

This paper reformulates the software under test (SUT) as a reinforcement learning environment and presents GunPowder, a novel framework for search-based software testing (SBST), shedding light on the future integration of deep neural networks and SBST.

Supervised learning over test executions as a test oracle

The paper aims to solve the test oracle problem in a scalable and accurate way by applying supervised learning to test execution traces, training a neural network (NN) to distinguish the runtime patterns of passing and failing executions of a given program.
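The approach above can be illustrated with a deliberately tiny stand-in: a logistic-regression classifier (substituting for the paper's neural network) trained on synthetic trace summaries. The two numeric features and the data-generation rule are invented for the sketch; real trace encodings would be far richer.

```python
import math
import random

random.seed(1)

def make_trace(failing):
    """Synthetic trace summary: failing runs get larger feature values."""
    base = 5.0 if failing else 1.0
    features = [base + random.random(), base + random.random()]
    label = 1.0 if failing else 0.0
    return features, label

data = [make_trace(i % 2 == 0) for i in range(200)]

# Train a two-feature logistic regression with plain SGD on the log-loss.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(300):
    for x, y in data:
        z = w[0] * x[0] + w[1] * x[1] + b
        p = 1.0 / (1.0 + math.exp(-z))
        g = p - y                     # gradient of log-loss w.r.t. z
        w = [w[0] - lr * g * x[0], w[1] - lr * g * x[1]]
        b -= lr * g

def verdict(trace):
    """The learned oracle: classify a trace summary as pass or fail."""
    z = w[0] * trace[0] + w[1] * trace[1] + b
    return "fail" if z > 0 else "pass"
```

Once trained, `verdict` can label new executions without a human-written expected output, which is the essence of using supervised learning over executions as an oracle.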

Automated Performance Testing Based on Active Deep Learning

ACTA is an automated test generation method for black-box performance testing based on active learning: it does not require a large set of historical test data to learn the performance characteristics of the system under test, and it dynamically chooses which tests to execute using uncertainty sampling.
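Uncertainty sampling, the selection strategy named above, can be sketched as a loop that always executes the candidate test the current model is least sure about. The SUT stand-in, the SLA threshold, and the midpoint-based "model" below are all assumptions made for illustration, not ACTA's actual components.

```python
def violates_sla(workload):
    """Hypothetical performance test: the SLA is breached above 70 req/s."""
    return workload > 70

pool = list(range(0, 101, 5))          # candidate workloads, req/s
labeled = {0: violates_sla(0), 100: violates_sla(100)}  # seed observations

for _ in range(6):
    ok = max(w for w, v in labeled.items() if not v)   # highest passing load
    bad = min(w for w, v in labeled.items() if v)      # lowest failing load
    mid = (ok + bad) / 2               # the model is most uncertain here
    candidates = [w for w in pool if w not in labeled]
    if not candidates:
        break
    # Execute only the test closest to the point of maximum uncertainty.
    pick = min(candidates, key=lambda w: abs(w - mid))
    labeled[pick] = violates_sla(pick)

ok = max(w for w, v in labeled.items() if not v)
bad = min(w for w, v in labeled.items() if v)
```

After only eight executions the loop has bracketed the breaking point between 70 and 75 req/s, illustrating how uncertainty sampling avoids running a large historical suite.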

Machine Learning to Guide Performance Testing: An Autonomous Test Framework

This work-in-progress paper proposes a self-adaptive, learning-based test framework that learns how to apply stress testing, one aspect of performance testing, to various software systems in order to find their performance breaking points.

Artificial neural networks as multi-networks automated test oracle

Multi-network oracles based on artificial neural networks are introduced to handle the input-to-output mapping automatically; the results show the proposed oracle has better quality and applicability than the previous model.

Automatically learning usage behavior and generating event sequences for black-box testing of reactive systems

This work combines functional testing inputs that are automatically generated from a model with manually applied robustness test cases to train a long short-term memory (LSTM) network, which outperforms random testing in both fault coverage and execution time.

A classifier-based test oracle for embedded software

A new black-box approach is proposed for constructing automated oracles that can be applied to embedded software and other programs with low observability; it requires only the program's input values and the corresponding pass/fail outcomes as the training set.
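The training setup described above, input values paired with pass/fail outcomes and nothing else, can be sketched with a simple 1-nearest-neighbor classifier standing in for the paper's learned oracle. The two-dimensional inputs and the rule generating the synthetic labels are invented for this sketch.

```python
# Synthetic training set: inputs (x, y) with an outcome observed per input.
# The "x + y > 10 means fail" rule is a stand-in for real observed verdicts.
training = [((x, y), "fail" if x + y > 10 else "pass")
            for x in range(8) for y in range(8)]

def predict(inp):
    """Oracle verdict for a new input: copy the label of the nearest
    previously observed input (1-nearest-neighbor)."""
    nearest = min(training,
                  key=lambda item: (item[0][0] - inp[0]) ** 2
                                 + (item[0][1] - inp[1]) ** 2)
    return nearest[1]
```

No internal program state is ever inspected, which is what makes such an oracle usable for low-observability embedded targets.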

Quickly Generating Diverse Valid Test Inputs with Reinforcement Learning

This paper formalizes the problem of guiding random input generators towards producing a diverse set of valid inputs and proposes a solution based on reinforcement learning (RL), using a tabular, on-policy RL approach to guide the generator.
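The combination named above, a tabular on-policy RL method steering a random generator toward valid and diverse inputs, can be sketched on a toy grammar. The balanced-bracket validity check, the state encoding, and the novelty-based reward are illustrative choices, not the paper's exact formulation.

```python
import random

random.seed(2)
LENGTH, EPSILON, ALPHA = 6, 0.25, 0.3
q = {}          # (state, action) -> value, state = (position, nesting depth)
seen = set()

def is_valid(s):
    """Validity check for the toy grammar: balanced bracket strings."""
    depth = 0
    for ch in s:
        depth += 1 if ch == '(' else -1
        if depth < 0:
            return False
    return depth == 0

def generate():
    """One episode: build a string char by char, guided by the learned table."""
    s, depth, path = "", 0, []
    for pos in range(LENGTH):
        state = (pos, depth)
        if random.random() < EPSILON:
            a = random.choice("()")
        else:
            a = max("()", key=lambda c: q.get((state, c), 0.0))
        path.append((state, a))
        s += a
        depth += 1 if a == '(' else -1
    return s, path

valid_unique = set()
for _ in range(3000):
    s, path = generate()
    # Diversity pressure: reward only valid inputs not generated before.
    reward = 1.0 if is_valid(s) and s not in seen else 0.0
    seen.add(s)
    if is_valid(s):
        valid_unique.add(s)
    for state, a in path:             # on-policy update along the episode taken
        old = q.get((state, a), 0.0)
        q[(state, a)] = old + ALPHA * (reward - old)
```

There are exactly five balanced bracket strings of length six, so `valid_unique` can hold at most five entries; rewarding only unseen valid strings pushes the guided generator to cover that space rather than repeat one success.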