Does Code Review Promote Conformance? A Study of OpenStack Patches

@inproceedings{Sriiesaranusorn2021DoesCR,
  title={Does Code Review Promote Conformance? A Study of OpenStack Patches},
  author={Panyawut Sri-iesaranusorn and Raula Gaikovina Kula and Takashi Ishio},
  booktitle={2021 IEEE/ACM 18th International Conference on Mining Software Repositories (MSR)},
  year={2021},
  pages={444-448}
}
Code review plays a crucial role in software quality by allowing reviewers to discuss and critique new patches before they are integrated into the project code. Yet it is unclear to what extent coding patterns (i.e., repetitive code) change between the moment a patch is first submitted and the moment the decision is made (i.e., during the review process). In this study, we revisit coding patterns in code reviews, aiming to analyze whether or not the coding pattern changes during the…
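To make the notion of coding patterns as repetitive code concrete, here is a minimal, purely illustrative sketch (it is not the authors' methodology) that compares which token n-grams are repeated in a patch at submission time versus at decision time; the code snippets are toy examples.

```python
# Illustrative sketch only: compare repeated token n-grams between two
# versions of a patch (at submission vs. at the review decision).
import re
from collections import Counter

def token_ngrams(source: str, n: int = 3) -> Counter:
    """Count token n-grams in a piece of source code."""
    tokens = re.findall(r"[A-Za-z_]\w*|\S", source)
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def repeated_patterns(source: str, n: int = 3) -> set:
    """N-grams that occur more than once, i.e. 'repetitive code'."""
    return {g for g, c in token_ngrams(source, n).items() if c > 1}

submitted = "x = foo(a); y = foo(b); z = foo(c)"
merged    = "x = foo(a)\ny = bar(b)\nz = baz(c)"

before, after = repeated_patterns(submitted), repeated_patterns(merged)
print("patterns dropped during review:", before - after)
print("patterns introduced during review:", after - before)
```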

References

Impact of Coding Style Checker on Code Review - A Case Study on the OpenStack Projects
TLDR
In a case study using an OpenStack code review dataset, it is found that the patch authors have repeatedly introduced the same type of MDIs, while they do not repeat ADIs, which suggests that the introduction of code style checkers might promote the patch authors' effective learning of potential issues.
Does code review really remove coding convention violations?
TLDR
The investigation results highlight that one can speed up the code review process by adopting tools for code convention violation detection and show that convention violations accumulate as code size increases despite changes being reviewed.
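OpenStack gates patches with flake8-based style tooling; a minimal sketch of the kind of automated convention-violation detection this summary refers to, using the pycodestyle library (the file path below is hypothetical), could look like:

```python
# A minimal sketch of automated convention-violation detection with
# pycodestyle (OpenStack's own gate runs flake8 with the "hacking" plugin).
import pycodestyle

def count_violations(path: str) -> int:
    """Return the number of PEP 8 convention violations in a file."""
    style = pycodestyle.StyleGuide(quiet=True)
    report = style.check_files([path])
    return report.total_errors

if __name__ == "__main__":
    # Hypothetical patch file; substitute any file from the change set.
    print(count_violations("nova/compute/manager.py"))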
Expectations, outcomes, and challenges of modern code review
TLDR
This study reveals that while finding defects remains the main motivation for review, reviews are less about defects than expected and instead provide additional benefits such as knowledge transfer, increased team awareness, and creation of alternative solutions to problems.
Will They Like This? Evaluating Code Contributions with Language Models
TLDR
It is found that rejected change sets do contain code significantly less similar to the project than accepted ones; furthermore, less similar change sets are more likely to be subject to thorough review.
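The cited work scores change sets with n-gram language models; as a rough sketch of the idea, one can measure how "similar to the project" a change is via the cross-entropy of its token bigrams under a model trained on project code (toy corpus and change strings below, with Laplace smoothing as a simplification):

```python
# Sketch: lower cross-entropy under a project-trained bigram model means the
# change looks more like the project's existing code.
import math
import re
from collections import Counter

def tokens(code: str):
    return re.findall(r"[A-Za-z_]\w*|\S", code)

def cross_entropy(corpus: str, change: str) -> float:
    """Average negative log2 probability of the change's bigrams under the corpus."""
    corp = tokens(corpus)
    bigrams = Counter(zip(corp, corp[1:]))
    unigrams = Counter(corp)
    vocab = len(set(corp)) + 1
    chg = tokens(change)
    pairs = list(zip(chg, chg[1:]))
    total = 0.0
    for a, b in pairs:
        # Laplace-smoothed conditional probability P(b | a)
        p = (bigrams[(a, b)] + 1) / (unigrams[a] + vocab)
        total -= math.log2(p)
    return total / max(len(pairs), 1)

project  = "def add(a, b):\n    return a + b\n"
accepted = "def sub(a, b):\n    return a - b\n"
rejected = "while True: pass  # spin"
print(cross_entropy(project, accepted) < cross_entropy(project, rejected))  # likely True
```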
Mining Co-change Information to Understand When Build Changes Are Necessary
TLDR
This paper builds random forest classifiers using language-agnostic and language-specific code change characteristics to explain, based on historical trends, when code-accompanying build changes are necessary, and indicates that these classifiers do so accurately.
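A minimal sketch of such a classifier setup, using scikit-learn with hypothetical change-level features (the original study mines its features from version-control history):

```python
# Sketch of a build co-change classifier with hypothetical features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical features per code change:
# [files_changed, lines_added, lines_deleted, touches_build_related_dir (0/1)]
X = np.array([
    [1,  10,  2, 0],
    [5, 120, 40, 1],
    [2,  15,  5, 0],
    [8, 300, 90, 1],
    [1,   3,  1, 0],
    [6, 200, 60, 1],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = a build file co-change was necessary

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=3)
print("cross-validated accuracy:", scores.mean())
```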
Improving code completion with program history
TLDR
A benchmarking procedure for measuring the accuracy of a code completion engine is defined and applied to several completion algorithms on a dataset consisting of the history of several systems, with the goal of improving the results offered by code completion tools.
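A rough sketch of that kind of history-based benchmark: hide the next token at each position of a held-out line and check whether a toy frequency-based completer, standing in for a real engine, recovers it (the snippets below are invented examples):

```python
# Sketch of a completion benchmark: train a toy "most frequent next token"
# completer on historical lines, then score it on a held-out line.
import re
from collections import Counter, defaultdict

def toks(line: str):
    return re.findall(r"[A-Za-z_]\w*|\S", line)

history = [
    "self.assertEqual(result, expected)",
    "self.assertTrue(flag)",
    "self.assertEqual(count, 0)",
]
following = defaultdict(Counter)
for line in history:
    t = toks(line)
    for a, b in zip(t, t[1:]):
        following[a][b] += 1

def complete(prev_token):
    c = following.get(prev_token)
    return c.most_common(1)[0][0] if c else None

benchmark = ["self.assertEqual(total, 3)"]
hits = total = 0
for line in benchmark:
    t = toks(line)
    for a, b in zip(t, t[1:]):
        total += 1
        hits += (complete(a) == b)
print(f"top-1 accuracy: {hits}/{total}")
```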
Natural Software Revisited
TLDR
It is found that much of the apparent "naturalness" of source code is due to the presence of language-specific syntax, especially separators such as semicolons and brackets, in Java.
Cloned Buggy Code Detection in Practice Using Normalized Compression Distance
TLDR
This study developed a tool to detect clones of a faulty code fragment for a software company, since existing code clone detection tools do not fit the requirements of the company.
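Normalized compression distance (NCD) itself is easy to sketch with a general-purpose compressor; the fragments below are toy examples, and a lower score suggests a likely clone:

```python
# Normalized compression distance: (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
# approximated here with zlib; values near 0 indicate near-duplicate fragments.
import zlib

def ncd(x: bytes, y: bytes) -> float:
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

buggy = b"if (ptr != NULL) { free(ptr); ptr = NULL; }"
clone = b"if (p != NULL) { free(p); p = NULL; }"
other = b"for (int i = 0; i < n; i++) sum += a[i];"

print(ncd(buggy, clone))  # smaller: likely a clone of the buggy fragment
print(ncd(buggy, other))  # larger: unrelated code
```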
Syntax errors just aren't natural: improving error reporting with language models
TLDR
A methodology and tool are presented that exploit the naturalness of software source code to detect syntax errors alongside the parser and can effectively augment the syntax error locations produced by the native compiler.
Reviewing proposed changes in a pull request. 2020.