Modeling readability to improve unit tests

@inproceedings{Daka2015ModelingRT,
  title={Modeling readability to improve unit tests},
  author={Ermira Daka and Jos{\'e} Campos and Gordon Fraser and Jonathan Dorn and Westley Weimer},
  booktitle={Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering},
  year={2015}
}
Writing good unit tests can be tedious and error-prone, but even once they are written, the job is not done: developers need to reason about unit tests throughout software development and evolution in order to diagnose test failures, maintain the tests, and understand code written by other developers. Unreadable tests are more difficult to maintain and lose some of their value to developers. To overcome this problem, we propose a domain-specific model of unit test readability based on human…
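
The abstract describes learning a readability model from human judgments and using it to steer automated unit test generation. As a rough illustration only, the sketch below scores a test's source text with a hand-weighted linear model over a few surface features; the feature names and weights are assumptions made for this example, not the learned model from the paper.

# Illustrative sketch only: a linear readability model over surface features of a
# test's source text. Feature names and weights here are assumptions, not the
# paper's learned model (which is fit to human readability ratings).
import re

def test_features(src):
    """Extract a few surface features that plausibly influence test readability."""
    lines = [line for line in src.splitlines() if line.strip()]
    identifiers = re.findall(r"[A-Za-z_]\w*", src)
    return {
        "lines": len(lines),                                    # overall test length
        "max_line_length": max((len(line) for line in lines), default=0),
        "identifiers": len(identifiers),                        # identifier density
        "assertions": src.count("assert"),                      # rough assertion count
        "string_literals": src.count('"') // 2,                 # literal clutter
    }

# Hand-picked weights for illustration; in the paper these come from training on
# human annotations. Negative weights penalize longer, denser tests.
WEIGHTS = {"lines": -0.05, "max_line_length": -0.01, "identifiers": -0.02,
           "assertions": -0.03, "string_literals": -0.04}
BIAS = 1.0

def readability_score(src):
    """Higher scores mean (predicted) more readable; such a score can serve as a
    secondary optimization target alongside coverage during test generation."""
    features = test_features(src)
    return BIAS + sum(WEIGHTS[name] * value for name, value in features.items())

example_test = """
@Test
public void testPushIncreasesSize() {
  Stack<Integer> stack = new Stack<>();
  stack.push(3);
  assertEquals(1, stack.size());
}
"""
print(round(readability_score(example_test), 3))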

Citations

Improving Readability of Automatically Generated Unit Tests
TLDR
This work proposes a domain-specific model of unit test readability based on human judgments and uses it to guide automated unit test generation, so that test cases with improved readability can be generated automatically, with the overall objective of reducing the effort developers need to understand these test cases.
Improving readability in automatic unit test generation
TLDR
A domain-specific model of unit test readability, based on human judgements, is used to augment automated unit test generation to produce test suites with both high coverage and improved readability.
An Empirical Investigation on the Readability of Manual and Generated Test Cases
TLDR
It is suggested that developers tend to neglect the readability of test cases and that automatically generated test cases are generally even less readable than manually written ones.
How Do Automatically Generated Unit Tests Influence Software Maintenance?
TLDR
An empirical study in which participants were presented with an automatically generated or manually written failing test, and were asked to identify and fix the cause of the failure, found developers to be equally effective with manually written and automatically generated tests.
Manually Written or Generated Tests?: A Study with Developers and Maintenance Tasks
TLDR
An empirical study with 20 real developers indicates that automatically generated tests can be a great help for identifying faults during maintenance and that developers may integrate generated test suites into the project at any stage.
The impact of test case summaries on bug fixing performance: an empirical investigation
TLDR
An approach which automatically generates test case summaries of the portion of code exercised by each individual test, thereby improving understandability, is proposed, which can complement the current techniques around automated unit test generation or search-based techniques designed to generate a possibly minimal set of test cases.
Automatically Documenting Unit Test Cases
TLDR
A novel approach - UnitTestScribe - that combines static analysis, natural language processing, backward slicing, and code summarization techniques to automatically generate natural language documentation of unit test cases is proposed.
Generating Readable Unit Tests for Guava
TLDR
This work integrates a further optimization target based on a model of test readability learned from human annotation data that produces more readable unit tests without loss of coverage.

References

SHOWING 1-10 OF 59 REFERENCES
Scaling up automated test generation: Automatically generating maintainable regression unit tests for programs
TLDR
An automatic technique for generating maintainable regression unit tests is presented; the generated tests achieved good coverage and mutation kill scores, were readable by the product's developers, and required few edits as the system under test evolved.
Learning a Metric for Code Readability
TLDR
An automated readability measure is constructed and shown to be 80 percent effective, and better than a human on average, at predicting readability judgments; it also correlates strongly with three measures of software quality.
xUnit Test Patterns: Refactoring Test Code
TLDR
xUnit Test Patterns is the definitive guide to writing automated tests using xUnit, the most popular unit testing framework in use today, and describes 68 proven patterns for making tests easier to write, understand, and maintain.
ReAssert: Suggesting Repairs for Broken Unit Tests
TLDR
This work presents ReAssert, a novel technique and tool that suggests repairs to a failing test's code so that the test passes, and shows that it can repair many common test failures.
An empirical study about the effectiveness of debugging when random test cases are used
TLDR
This empirical study investigates whether, despite the lack of readability of automatically generated test cases, subjects can still take advantage of them during debugging, and whether this has an impact on the accuracy and efficiency of debugging.
A simpler model of software readability
TLDR
This work presents a simple, intuitive theory of readability, based on size and code entropy, and shows how this theory leads to a much sparser, yet statistically significant, model of the mean readability scores produced in Buse's studies.
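
To make the size-plus-entropy idea in this entry concrete, the sketch below computes Shannon entropy over a naive token distribution together with the snippet's token count. The tokenizer and the pairing of the two values are assumptions for illustration, not the paper's exact formulation.

# Illustrative sketch: approximate the "size + token entropy" readability idea
# with Shannon entropy over a naive token distribution.
import math
import re
from collections import Counter

TOKEN_RE = re.compile(r"[A-Za-z_]\w*|\d+|\S")

def token_entropy(src):
    """Shannon entropy (bits per token) of the snippet's token distribution."""
    tokens = TOKEN_RE.findall(src)
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    total = len(tokens)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def size_and_entropy(src):
    """Return (token count, entropy); by this theory, smaller and lower-entropy
    snippets tend to receive higher mean readability scores."""
    return len(TOKEN_RE.findall(src)), token_entropy(src)

print(size_and_entropy("int x = a + b; return x;"))
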
Refactoring test code
TLDR
It is found that refactoring test code differs from refactoring production code in two ways: a distinct set of bad smells is involved, and improving test code requires additional test-specific refactorings.
Supporting Test Suite Evolution through Test Case Adaptation
TLDR
This paper proposes an approach for automatically repairing and generating test cases during software evolution, using information available in existing test cases, and defines a set of heuristics to repair test cases invalidated by changes in the software.
Efficient unit test case minimization
TLDR
A combination of static slicing and delta debugging is presented that automatically minimizes the sequence of failure-inducing method calls and improves on the state of the art by being far more efficient.
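
As an illustration of the delta-debugging half of this technique, the sketch below is a simplified ddmin that drops chunks of a failure-inducing call sequence while a caller-supplied predicate still reports the failure. The static-slicing step and the paper's actual implementation are not reproduced; the predicate and toy sequence are made up for the example.

# Simplified delta debugging (ddmin) over a sequence of method calls.
# This sketch only tries complements (dropping one chunk at a time).
def split(seq, n):
    """Split seq into n chunks of (almost) equal size."""
    k, m = divmod(len(seq), n)
    chunks, start = [], 0
    for i in range(n):
        end = start + k + (1 if i < m else 0)
        chunks.append(seq[start:end])
        start = end
    return chunks

def ddmin(calls, still_fails):
    """Shrink `calls` while `still_fails(subsequence)` keeps returning True."""
    assert still_fails(calls), "the full sequence must reproduce the failure"
    n = 2
    while len(calls) >= 2:
        chunks = split(calls, n)
        reduced = False
        for i in range(n):
            # Keep everything except chunk i and re-run the failure check.
            complement = [c for j, chunk in enumerate(chunks) if j != i for c in chunk]
            if still_fails(complement):
                calls, n, reduced = complement, max(n - 1, 2), True
                break
        if not reduced:
            if n >= len(calls):
                break          # already at single-call granularity
            n = min(n * 2, len(calls))
    return calls

# Toy example: the failure only needs calls "b" and "e" to be present.
failing_sequence = list("abcdef")
print(ddmin(failing_sequence, lambda seq: "b" in seq and "e" in seq))
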
Mutation-Driven Generation of Unit Tests and Oracles
TLDR
The μtest prototype generates test suites that find significantly more seeded defects than the original manually written test suites, and is optimized toward finding defects modeled by mutation operators rather than covering code.
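
To make the mutation idea concrete, here is a small self-contained sketch: one mutation operator is applied to a toy function, and a unit test acts as the oracle that either kills the mutant or lets it survive. This is an assumption-level illustration, not the μtest tool itself, which works on Java code with a richer set of operators.

# Illustrative sketch of mutation analysis: apply one mutation operator to a toy
# function and check whether an existing test "kills" the resulting mutant.
ORIGINAL_SOURCE = "def add(a, b):\n    return a + b\n"

def load_function(source):
    """Execute the source text and return the defined `add` function."""
    namespace = {}
    exec(source, namespace)
    return namespace["add"]

def test_add(add):
    """The unit test acting as the oracle: True iff the test passes."""
    return add(2, 3) == 5

original = load_function(ORIGINAL_SOURCE)
mutant = load_function(ORIGINAL_SOURCE.replace("a + b", "a - b"))  # '+' -> '-' operator mutation

print("original passes test:", test_add(original))   # True
print("mutant killed by test:", not test_add(mutant))  # True: the assertion detects the mutation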