Defect Prediction Using Stylistic Metrics

Rafed Muhammad Yasir, Moumita Asad, A. Kabir

Defect prediction is one of the most popular research topics due to its potential to minimize software quality assurance effort. Existing approaches have examined defect prediction from various perspectives, such as complexity and developer metrics. However, none of these consider programming style for defect prediction. This paper aims at analyzing the impact of stylistic metrics on both within-project and cross-project defect prediction. For prediction, 4 widely used machine learning…
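As a rough illustration of the setup the abstract describes (training a standard classifier on per-file metrics to predict defectiveness), here is a minimal sketch using scikit-learn. The features, labels, and model choice are invented placeholders for illustration, not the paper's actual stylistic metrics, datasets, or classifiers:

```python
# Sketch of within-project defect prediction: fit a classifier on
# per-file metric vectors and evaluate on a held-out split.
# All data below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))  # 5 hypothetical metrics per file
# Synthetic defect labels loosely correlated with the first metric.
y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("F1 on held-out files:", f1_score(y_te, clf.predict(X_te)))
```

In the cross-project variant studied by the paper, the training split would instead come from a different project than the test split.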

Personalized defect prediction

This paper proposes personalized defect prediction, building a separate prediction model for each developer to predict software defects, and applies this approach to classify defects at the file-change level.

Progress on approaches to software defect prediction

The authors survey almost 70 representative defect prediction papers from recent years, most of which were published in prominent software engineering journals and top conferences, and identify some practical guidelines for both software engineering researchers and practitioners in future software defect prediction.

An investigation on the feasibility of cross-project defect prediction

This paper investigates defect predictions in the cross-project context focusing on the selection of training data and proposes an approach to automatically select suitable training data for projects without historical data.

File-Level Defect Prediction: Unsupervised vs. Supervised Models

This work compares the effectiveness of unsupervised and supervised prediction models for effort-aware file-level defect prediction and suggests that not only LOC but also the number of files that need to be inspected should be considered when evaluating effort-aware defect prediction models.

Online Defect Prediction for Imbalanced Data

This first study of applying change classification in practice identifies two issues in the prediction process, both of which contribute to low prediction performance, and applies and adapts online change classification, resampling, and updatable classification techniques to improve classification performance.

Software Analytics in Practice: A Defect Prediction Model Using Code Smells

The results of the experiments show that code smell metrics are a good indicator of the defect proneness of a software product and should be used to train a defect prediction model to guide the software maintenance team.

An analysis of developer metrics for fault prediction

This study of the effects of developer features on software reliability concludes that developer metrics are good predictors of faults and that human factors must be considered when improving software reliability.

Using Object-Oriented Design Metrics to Predict Software Defects

This study was made possible through the creation of a new metric calculation tool, Ckjm, which calculates metrics that have been recommended as good quality indicators and that have empirically proven their usability in quality or defect prediction.

The Impact of Correlated Metrics on the Interpretation of Defect Models

It is found that correlated metrics have the largest impact on the consistency, the level of discrepancy, and the direction of the ranking of metrics, especially for ANOVA techniques, and that removing all correlated metrics improves the consistency of the produced rankings regardless of the ordering of metrics.

A Comprehensive Investigation of the Role of Imbalanced Learning for Software Defect Prediction

Imbalanced learning should only be considered for moderately or highly imbalanced SDP data sets, and the appropriate combination of imbalanced-learning method and classifier needs to be carefully chosen to ameliorate the imbalanced learning problem for SDP.
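One of the simplest imbalanced-learning remedies evaluated in studies like the two above is resampling. As a minimal sketch, random oversampling duplicates minority-class (defective) examples until the classes are balanced; the data here are invented placeholders, not from either study:

```python
# Sketch of random oversampling on an imbalanced defect dataset:
# 90 clean files (label 0) vs. 10 defective files (label 1).
# All samples are synthetic placeholders.
import random

random.seed(0)
samples = [([i, i * 2], 0) for i in range(90)] + [([i, i * 3], 1) for i in range(10)]

majority = [s for s in samples if s[1] == 0]
minority = [s for s in samples if s[1] == 1]

# Duplicate minority examples at random until both classes have equal counts.
balanced = majority + [random.choice(minority) for _ in range(len(majority))]
print(sum(1 for _, label in balanced if label == 1))  # → 90
```

A classifier trained on `balanced` no longer sees a 9:1 skew, though duplication can encourage overfitting to the repeated minority examples, which is one reason the choice of resampling method and classifier matters.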