Modeling readability to improve unit tests
@inproceedings{Daka2015ModelingRT,
  title     = {Modeling readability to improve unit tests},
  author    = {Ermira Daka and Jos{\'e} Campos and Gordon Fraser and Jonathan Dorn and Westley Weimer},
  booktitle = {Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering},
  year      = {2015}
}
Writing good unit tests can be tedious and error-prone, but even once they are written, the job is not done: developers need to reason about unit tests throughout software development and evolution in order to diagnose test failures, maintain the tests, and understand code written by other developers. Unreadable tests are more difficult to maintain and lose some of their value to developers. To overcome this problem, we propose a domain-specific model of unit test readability based on human…
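The learned model itself is not reproduced on this page. As a rough sketch of the idea only, a readability score could be a linear combination of surface features of a test's source, used to pick among candidate tests that satisfy the same coverage goal. The class name, feature set, and weights below are hypothetical, not the paper's learned model.

```java
import java.util.Comparator;
import java.util.List;

/** Hypothetical readability score in the spirit of the paper's learned
 *  model: a linear combination of surface features of a test's source.
 *  Features and weights here are illustrative, not the published ones. */
public class TestReadability {

    /** Higher is more readable (all features penalize, so scores are <= 0). */
    public static double score(String testSource) {
        String[] lines = testSource.split("\n", -1);
        double avgLineLength = testSource.length() / (double) lines.length;
        long digitChars = testSource.chars().filter(Character::isDigit).count();
        return -0.02 * avgLineLength   // long lines hurt readability
             - 0.05 * digitChars       // "magic numbers" hurt readability
             - 0.10 * lines.length;    // longer tests hurt readability
    }

    /** Among candidates covering the same goal, keep the most readable one. */
    public static String pickMostReadable(List<String> candidates) {
        return candidates.stream()
                .max(Comparator.comparingDouble(TestReadability::score))
                .orElseThrow(IllegalArgumentException::new);
    }
}
```

A generator would call `pickMostReadable` as a tie-breaker after its primary coverage objective is met, which is the general shape of readability-guided generation the abstract describes.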
81 Citations
Improving Readability of Automatically Generated Unit Tests
- Computer Science, PPIG
- 2015
This work proposes a domain-specific model of unit test readability based on human judgments and uses it to guide automated unit test generation, producing test cases with improved readability and reducing the effort developers need to understand them.
Improving readability in automatic unit test generation
- Computer Science
- 2018
A domain-specific model of unit test readability, based on human judgements, is used to augment automated unit test generation to produce test suites with both high coverage and improved readability.
An Empirical Investigation on the Readability of Manual and Generated Test Cases
- Computer Science, 2018 IEEE/ACM 26th International Conference on Program Comprehension (ICPC)
- 2018
It is suggested that developers tend to neglect the readability of test cases and that automatically generated test cases are generally even less readable than manually written ones.
How Do Automatically Generated Unit Tests Influence Software Maintenance?
- Computer Science, 2018 IEEE 11th International Conference on Software Testing, Verification and Validation (ICST)
- 2018
An empirical study in which participants were presented with an automatically generated or a manually written failing test, and asked to identify and fix the cause of the failure, found developers to be equally effective with both kinds of test.
Manually Written or Generated Tests?: A Study with Developers and Maintenance Tasks
- Computer Science, SBES
- 2020
An empirical study with 20 real developers indicates that automatically generated tests can be a great help for identifying faults during maintenance, and that developers may integrate generated test suites into the project at any stage.
The impact of test case summaries on bug fixing performance: an empirical investigation
- Computer Science, ICSE 2016
- 2016
An approach is proposed that automatically generates a summary of the portion of code exercised by each individual test, thereby improving understandability; such summaries can complement current automated unit test generation and search-based techniques designed to generate a possibly minimal set of test cases. A sketch of the summary idea follows.
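The authors' summarizer analyzes the code each test exercises; as a purely hypothetical illustration of the template idea (not the authors' tool), a toy generator might render the covered methods, obtained from some coverage tracer assumed to exist, into a comment:

```java
import java.util.List;

/** Hypothetical template-based test summary; a toy stand-in for the
 *  paper's summarizer, which analyzes the code covered by each test. */
public class TestSummarizer {

    /** coveredMethods is assumed to come from a coverage tracer. */
    public static String summarize(String testName, List<String> coveredMethods) {
        StringBuilder sb = new StringBuilder("/** ").append(testName)
                .append(" exercises:\n");
        for (String method : coveredMethods) {
            sb.append(" *   - ").append(method).append('\n');
        }
        return sb.append(" */").toString();
    }

    public static void main(String[] args) {
        System.out.println(summarize("testPushIncreasesSize",
                List.of("Stack.push(Object)", "Stack.size()")));
    }
}
```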
Automatically Documenting Unit Test Cases
- Computer Science, 2016 IEEE International Conference on Software Testing, Verification and Validation (ICST)
- 2016
A novel approach, UnitTestScribe, is proposed that combines static analysis, natural language processing, backward slicing, and code summarization techniques to automatically generate natural-language documentation of unit test cases.
Generating Readable Unit Tests for Guava
- Computer Science, SSBSE
- 2015
This work integrates a further optimization target, based on a model of test readability learned from human annotation data, and produces more readable unit tests without loss of coverage.
References
Showing 10 of 59 references
Scaling up automated test generation: Automatically generating maintainable regression unit tests for programs
- Computer Science, 2011 26th IEEE/ACM International Conference on Automated Software Engineering (ASE 2011)
- 2011
An automatic technique for generating maintainable regression unit tests for programs; the generated tests achieved good coverage and mutation kill scores, were readable by the product's developers, and required few edits as the system under test evolved.
Learning a Metric for Code Readability
- Computer Science, IEEE Transactions on Software Engineering
- 2010
An automated readability measure is constructed and shown to be 80 percent effective, and better than a human on average, at predicting readability judgments; it also correlates strongly with three measures of software quality.
xUnit Test Patterns: Refactoring Test Code
- Computer Science
- 2007
xUnit Test Patterns is the definitive guide to writing automated tests using xUnit, the most popular unit testing framework in use today, and describes 68 proven patterns for making tests easier to write, understand, and maintain.
ReAssert: Suggesting Repairs for Broken Unit Tests
- Computer Science, 2009 IEEE/ACM International Conference on Automated Software Engineering
- 2009
This work presents ReAssert, a novel technique and tool that suggests repairs to a failing test's code that cause the test to pass, and shows that it can repair many common test failures.
An empirical study about the effectiveness of debugging when random test cases are used
- Computer Science, 2012 34th International Conference on Software Engineering (ICSE)
- 2012
The empirical study investigates whether, despite the limited readability of automatically generated test cases, subjects can still take advantage of them during debugging, and whether this affects the accuracy and efficiency of debugging.
A simpler model of software readability
- Computer Science, MSR '11
- 2011
This work presents a simple, intuitive theory of readability, based on size and code entropy, and shows how this theory leads to a much sparser, yet statistically significant, model of the mean readability scores produced in Buse's studies. A minimal sketch of the entropy computation follows.
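"Code entropy" can be read as Shannon entropy over a snippet's symbol distribution. Below is a minimal sketch at the character level; the paper's exact feature definitions may differ.

```java
import java.util.HashMap;
import java.util.Map;

/** Minimal sketch of the size-plus-entropy view of readability:
 *  Shannon entropy over the character distribution of a snippet. */
public class CodeEntropy {

    public static double shannonEntropy(String code) {
        Map<Character, Integer> counts = new HashMap<>();
        for (char c : code.toCharArray()) {
            counts.merge(c, 1, Integer::sum);
        }
        double entropy = 0.0;
        for (int count : counts.values()) {
            double p = count / (double) code.length();
            entropy -= p * (Math.log(p) / Math.log(2)); // bits per character
        }
        return entropy;
    }

    public static void main(String[] args) {
        String snippet = "int x = add(2, 3);";
        System.out.printf("size=%d entropy=%.3f bits/char%n",
                snippet.length(), shannonEntropy(snippet));
    }
}
```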
Refactoring test code
- Computer Science
- 2001
It is found that refactoring test code differs from refactoring production code in two ways: a distinct set of bad smells is involved, and improving test code involves additional test-specific refactorings.
Supporting Test Suite Evolution through Test Case Adaptation
- Computer Science, 2012 IEEE Fifth International Conference on Software Testing, Verification and Validation
- 2012
This paper proposes an approach for automatically repairing and generating test cases during software evolution, using information available in existing test cases, and defines a set of heuristics to repair test cases invalidated by changes in the software.
Efficient unit test case minimization
- Computer Science, ASE '07
- 2007
A combination of static slicing and delta debugging is presented that automatically minimizes the sequence of failure-inducing method calls, improving on the state of the art by being far more efficient. A simplified sketch of the minimization loop follows.
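The paper's own algorithm combines slicing with delta debugging; the sketch below shows only a classic ddmin-style loop over a failing call sequence, testing complements of removed chunks, and assumes a `stillFails` oracle that is not implemented here.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

/** Simplified delta-debugging (ddmin-style) reduction of a failing
 *  method-call sequence; tests complements only. The stillFails oracle,
 *  which replays a candidate subsequence against a fresh instance of the
 *  class under test, is assumed and not implemented here. */
public class CallSequenceMinimizer {

    public static <T> List<T> minimize(List<T> calls, Predicate<List<T>> stillFails) {
        List<T> current = new ArrayList<>(calls);
        int n = 2; // number of chunks to split the sequence into
        while (current.size() >= 2) {
            int chunkSize = (int) Math.ceil(current.size() / (double) n);
            boolean reduced = false;
            for (int start = 0; start < current.size(); start += chunkSize) {
                int end = Math.min(start + chunkSize, current.size());
                // Complement: the sequence with this chunk removed.
                List<T> complement = new ArrayList<>(current.subList(0, start));
                complement.addAll(current.subList(end, current.size()));
                if (!complement.isEmpty() && stillFails.test(complement)) {
                    current = complement;        // failure persists without the chunk
                    n = Math.max(n - 1, 2);      // coarsen granularity again
                    reduced = true;
                    break;
                }
            }
            if (!reduced) {
                if (n >= current.size()) break;  // cannot split finer: done
                n = Math.min(n * 2, current.size());
            }
        }
        return current;
    }
}
```

A real harness would implement `stillFails` by re-executing the candidate calls and checking that the original failure signature reproduces.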
Mutation-Driven Generation of Unit Tests and Oracles
- Computer Science, IEEE Transactions on Software Engineering
- 2012
The μtest prototype generates test suites that find significantly more seeded defects than the original manually written test suites, and is optimized toward finding defects modeled by mutation operators rather than covering code.