An empirical study on the effectiveness of data resampling approaches for cross-project software defect prediction

Cross-project defect prediction (CPDP), where data from different software projects are used to predict defects, has been proposed as a way to provide data for software projects that lack historical data. Evaluations of CPDP models using the Nearest Neighbour (NN) Filter approach have shown promising results in recent studies. A key challenge with defect-prediction datasets is class imbalance, that is, highly skewed datasets in which non-buggy modules dominate the buggy modules. In the past, data resampling approaches have been applied to within-project defect prediction models to help alleviate the negative effects of class imbalance in the datasets. To address the class imbalance issue in CPDP, the authors assess the impact of data resampling approaches on CPDP models after the NN Filter is applied. The impact on prediction performance of five oversampling approaches (MAHAKIL, SMOTE, Borderline-SMOTE, Random Oversampling and ADASYN) and three undersampling approaches (Random Undersampling, Tomek Links and One-Sided Selection) is investigated, and the results are compared to approaches without data resampling. The authors examined six defect prediction models on 34 datasets extracted from the PROMISE repository. The authors' results show that data resampling has a significant positive effect on CPDP performance, suggesting that software quality teams and researchers should consider applying data resampling approaches for improved recall (pd) and g-measure prediction performance. However, if the goal is to improve precision and reduce the false alarm rate (pf), then data resampling approaches should be avoided.
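To make the abstract's terminology concrete, the sketch below illustrates one of the studied resampling approaches (Random Oversampling, which duplicates minority-class instances until the classes balance) and the evaluation metrics the study reports: recall (pd), false alarm rate (pf), and the g-measure, commonly defined in the defect-prediction literature as the harmonic mean of pd and 1 - pf. This is an illustrative sketch, not the authors' implementation; function names are invented for this example.

```python
import random

def random_oversample(X, y, seed=0):
    """Random Oversampling sketch: duplicate randomly chosen buggy
    (minority, label 1) instances until both classes are the same size."""
    rng = random.Random(seed)
    minority = [i for i, label in enumerate(y) if label == 1]
    majority = [i for i, label in enumerate(y) if label == 0]
    # Indices of extra minority copies needed to reach balance.
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    idx = list(range(len(y))) + extra
    return [X[i] for i in idx], [y[i] for i in idx]

def pd_pf_g(tp, fn, fp, tn):
    """Compute recall (pd), false alarm rate (pf) and g-measure
    from confusion-matrix counts (buggy = positive class)."""
    pd = tp / (tp + fn)          # probability of detection (recall)
    pf = fp / (fp + tn)          # probability of false alarm
    denom = pd + (1 - pf)
    g = 2 * pd * (1 - pf) / denom if denom else 0.0
    return pd, pf, g
```

For example, a balanced training set is obtained from 1 buggy and 9 non-buggy modules by duplicating the buggy one, and a model with 8 true positives, 2 false negatives, 3 false positives and 7 true negatives yields pd = 0.8, pf = 0.3 and a g-measure of about 0.747.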

Bibliographic Details
Main Authors: Bennin, Kwabena Ebo, Tahir, Amjed, Macdonell, Stephen G., Börstler, Jürgen
Format: Article/Letter to editor
Language: English
Subjects: class imbalance, defect prediction, software metrics, software quality
Online Access:https://research.wur.nl/en/publications/an-empirical-study-on-the-effectiveness-of-data-resampling-approa