The accuracy paradox is the paradoxical finding that accuracy is not a good metric for evaluating classification models in predictive analytics. This is because a simple model may have a high level of accuracy but be too crude to be useful. For example, if category A is dominant, being found in 99% of cases, then predicting that every case is category A will have an accuracy of 99%. Precision and recall are better measures in such cases.[1][2] The underlying issue is the class imbalance between the positive class and the negative class.[3] Prior probabilities for these classes need to be accounted for in error analysis. Precision and recall help, but precision too can be biased by very unbalanced class priors in the test sets.
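The effect is easy to reproduce. The following sketch is illustrative only (the class labels, counts, and use of scikit-learn's metric functions are assumptions, not taken from the references): it scores a trivial "always predict the dominant class" model, which achieves 99% accuracy while having zero precision and recall on the rare class.

```python
# Minimal sketch of the accuracy paradox (illustrative data, not from the sources).
# Dominant class "A" is encoded as 0, rare class "B" as 1.
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [1] * 10 + [0] * 990   # 10 rare-class cases out of 1000
y_pred = [0] * 1000             # trivial model: always predict the dominant class

print(accuracy_score(y_true, y_pred))                    # 0.99 -- looks excellent
print(precision_score(y_true, y_pred, zero_division=0))  # 0.0  -- no true positives
print(recall_score(y_true, y_pred, zero_division=0))     # 0.0  -- misses every rare case
```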
For example, a city of 1 million people has ten terrorists. A profiling system results in the following confusion matrix:
| | Predicted Fail | Predicted Pass | Sum |
|---|---|---|---|
| Actual Fail | 10 | 0 | 10 |
| Actual Pass | 990 | 999000 | 999990 |
| Sum | 1000 | 999000 | 1000000 |
Even though the accuracy is (10 + 999000)/1000000 ≈ 99.9%, 990 out of the 1000 positive predictions are incorrect. The precision of 10/(10 + 990) = 1% reveals its poor performance. As the classes are so unbalanced, a better metric is the F1 score = 2 × (0.01 × 1)/(0.01 + 1) ≈ 2% (the recall being 10/(10 + 0) = 1).
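These figures can be checked directly from the standard definitions of accuracy, precision, recall, and F1. The short sketch below recomputes them from the four cells of the confusion matrix; the variable names are illustrative.

```python
# Recomputing the metrics from the confusion matrix above.
tp, fp = 10, 990        # flagged as "Fail": 10 terrorists, 990 innocent people
fn, tn = 0, 999000      # passed: no terrorists missed, 999000 correctly cleared

accuracy  = (tp + tn) / (tp + fp + fn + tn)                 # 0.99901 (~99.9%)
precision = tp / (tp + fp)                                  # 0.01    (1%)
recall    = tp / (tp + fn)                                  # 1.0
f1        = 2 * precision * recall / (precision + recall)   # ~0.0198 (~2%)

print(accuracy, precision, recall, f1)
```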
Original source: https://en.wikipedia.org/wiki/Accuracy_paradox