MSR 2025
Mon 28 - Tue 29 April 2025 Ottawa, Ontario, Canada
co-located with ICSE 2025

Just-in-time defect prediction (JIT DP) leverages machine learning to identify defect-prone code commits, enabling quality assurance (QA) teams to allocate resources more efficiently by focusing on commits that are most likely to contain defects.

Although JIT defect prediction techniques have achieved notable improvements in predictive accuracy, they remain susceptible to misclassification errors, i.e., false positives and false negatives. These errors can lead to wasted QA effort or undetected defects, a particularly critical concern when QA resources are limited.

To mitigate these challenges and preserve the practical utility of JIT defect prediction tools, it is essential to estimate the reliability of the predictions, i.e., to compute confidence scores. Such scores can help practitioners identify the predictions that are most likely to be correct and prioritize them accordingly.

A simple approach to computing confidence scores is to extract, alongside each prediction, the corresponding prediction probability and use it as an indicator of confidence. However, for these probabilities to serve reliably as confidence scores, the predictive model must be well-calibrated: its prediction probabilities must accurately reflect the true likelihood of each prediction being correct.
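As a rough illustration of this approach (not the study's actual pipeline), the sketch below trains a scikit-learn classifier on synthetic commit features and uses its predicted probabilities as confidence scores for prioritization; the feature set, model choice, and 0.5 threshold are assumptions made only for the example.

```python
# Illustrative sketch: prediction probabilities as confidence scores for JIT DP.
# Data, features, and model are synthetic placeholders, not the study's models.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))   # placeholder commit metrics (lines added, files touched, ...)
y_train = (X_train[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)  # 1 = defect-inducing
X_new = rng.normal(size=(10, 4))      # incoming, unlabeled commits

model = LogisticRegression().fit(X_train, y_train)

proba = model.predict_proba(X_new)[:, 1]             # P(defect) for each commit
pred = (proba >= 0.5).astype(int)                     # predicted label
confidence = np.where(pred == 1, proba, 1 - proba)    # probability of the predicted class

# Rank commits so QA reviews the most confident defect predictions first.
review_order = np.argsort(-confidence * pred)
```

This ranking is only trustworthy if the probabilities are well-calibrated, which is precisely the property examined next.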

Miscalibration, which is common in modern machine learning models, distorts probability scores such that a model's predicted probabilities do not align with the actual likelihood of those predictions being correct. For example, a model may assign a 90% defect probability to a set of commits of which only 70% are actually defective. This mismatch leads to poor prioritization and resource allocation.

Despite its importance, model calibration has been largely overlooked in JIT defect prediction. In this study, we evaluate the calibration of several state-of-the-art JIT defect prediction techniques to determine whether and to what extent they exhibit poor calibration. Furthermore, we assess whether post-calibration methods can improve the calibration of existing JIT defect prediction models.
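As one illustration of what such a post-calibration step can look like, the sketch below applies Platt scaling (sigmoid) and isotonic regression via scikit-learn's CalibratedClassifierCV; these are offered as common examples, and this abstract does not specify which post-calibration methods the study evaluates.

```python
# Minimal sketch of two common post-calibration methods (Platt scaling and
# isotonic regression). Data and base model are synthetic placeholders.
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))                                   # placeholder commit features
y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int) # 1 = defect-inducing

# cv=5: probabilities are recalibrated on held-out folds to avoid overfitting.
platt = CalibratedClassifierCV(LogisticRegression(), method="sigmoid", cv=5).fit(X, y)
isotonic = CalibratedClassifierCV(LogisticRegression(), method="isotonic", cv=5).fit(X, y)

calibrated_proba = platt.predict_proba(X[:10])[:, 1]  # recalibrated P(defect)
```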

Our experimental analysis reveals that all evaluated JIT DP models exhibit some level of miscalibration, with Expected Calibration Error (ECE) ranging from 7% to 35%. Furthermore, post-calibration methods do not consistently improve the calibration of these JIT DP models.
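For reference, the ECE metric reported above can be sketched as follows: predictions are grouped into equal-width confidence bins, and the absolute gaps between each bin's empirical accuracy and its mean confidence are averaged, weighted by bin size. The 10-bin, equal-width setup below is a common default and is assumed here, not necessarily the study's exact configuration.

```python
# Self-contained sketch of Expected Calibration Error (ECE).
import numpy as np

def expected_calibration_error(y_true, proba, n_bins=10):
    """y_true: 0/1 labels; proba: predicted probability of the defect class."""
    y_true = np.asarray(y_true)
    proba = np.asarray(proba)
    pred = (proba >= 0.5).astype(int)
    confidence = np.where(pred == 1, proba, 1 - proba)  # prob. of the predicted class
    correct = (pred == y_true).astype(float)

    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidence > lo) & (confidence <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidence[in_bin].mean())
            ece += in_bin.mean() * gap                   # weight by fraction of samples in bin
    return ece

# Example usage; an ECE of 0.07-0.35 corresponds to the 7%-35% range reported above.
y = np.array([1, 0, 1, 1, 0, 0, 1, 0])
p = np.array([0.9, 0.2, 0.6, 0.8, 0.4, 0.3, 0.7, 0.1])
print(round(expected_calibration_error(y, p), 3))
```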