Further thoughts on precision
Background: There has been much discussion amongst automated software defect prediction researchers regarding the use of the precision and false positive rate classifier performance metrics. Aim: To demonstrate and explain why failing to report precision when using data with highly imbalanced class distributions may provide an overly optimistic view of classifier performance. Method: Well-documented examples of how class distribution affects the suitability of performance measures. Conclusions: When using data where the minority class represents less than around 5 to 10 percent of data points in total, failing to report precision may be a critical mistake. Furthermore, deriving the precision values omitted from studies can reveal valuable insight into true classifier performance.
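As a minimal illustrative sketch (not taken from the paper, and assuming only the standard confusion-matrix definitions), an omitted precision value can be reconstructed from a study's reported recall (pd), false positive rate (pf) and the class distribution of the data set:

```python
def derive_precision(recall, fpr, positives, negatives):
    """Recover precision from recall (pd), false positive rate (pf)
    and the class distribution:
        TP = recall * positives
        FP = fpr * negatives
        precision = TP / (TP + FP)
    """
    tp = recall * positives
    fp = fpr * negatives
    if tp + fp == 0:
        return float("nan")  # no predicted positives, precision undefined
    return tp / (tp + fp)


# Hypothetical example: a classifier with 70% recall and a seemingly
# modest 10% false positive rate, evaluated on 10,000 modules of which
# only 2% are defective (a highly imbalanced class distribution).
positives, negatives = 200, 9800
print(derive_precision(0.70, 0.10, positives, negatives))  # ~0.125
```

Under these assumed figures, only about 12.5% of the modules flagged as defective actually are, which is the kind of insight that reporting recall and false positive rate alone can hide.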
| Item Type | Article |
|---|---|
| Date Deposited | 15 May 2025 12:19 |
| Last Modified | 30 May 2025 23:48 |
| Download | 905746.pdf (Submitted Version) |