Say you are classifying 100 examples of fruit and there are 99 oranges and one lime. If your model predicted that all 100 are oranges, it would be 99% accurate. But that high accuracy veils the model’s inability to tell oranges from limes. Adjusting the prediction cutoff, the confidence level at which the model assigns a class, is one way to improve a model’s usefulness when column values have imbalanced frequencies like this.
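To make the arithmetic concrete, here is a minimal sketch of the fruit example in Python. The predicted probabilities and cutoff values are invented for illustration; only the 99-to-1 class split comes from the example above, and nothing here is specific to Squark.

```python
# Illustrative sketch: 99 oranges, 1 lime, and a model whose confidence that
# the lone lime is a lime never clears the default 0.5 cutoff.
# The probabilities and cutoffs below are made up for this example.
probs_lime = [0.05] * 99 + [0.40]        # model's confidence each fruit is a lime
y_true = ["orange"] * 99 + ["lime"]

def predict(probs, cutoff):
    """Label anything at or above the cutoff as a lime, otherwise an orange."""
    return ["lime" if p >= cutoff else "orange" for p in probs]

for cutoff in (0.50, 0.30):
    y_pred = predict(probs_lime, cutoff)
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    print(f"cutoff={cutoff:.2f}  accuracy={accuracy:.2f}  limes predicted={y_pred.count('lime')}")

# cutoff=0.50 -> accuracy 0.99, but zero limes are ever predicted
# cutoff=0.30 -> the single lime is caught
```

With the default 0.5 cutoff the model looks 99% accurate while never identifying a lime; lowering the cutoff lets the rare class through, which is the point of tuning the break point on imbalanced data.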
Classification models report a score called “F1” to account for this behavior. F1 ranges from 1 (best) to zero (worst) and reflects classification ability. It is simply a measure of how well the model performed at identifying the differences among groups.
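For readers who want a feel for what goes into the score before visiting the glossary post, F1 is conventionally computed as the harmonic mean of precision and recall. The short sketch below is illustrative rather than Squark-specific; the counts come from the orange-and-lime example.

```python
# F1 as the harmonic mean of precision (how many predicted limes were real)
# and recall (how many real limes were found). Counts are from the fruit example.
def f1(true_pos: int, false_pos: int, false_neg: int) -> float:
    precision = true_pos / (true_pos + false_pos) if (true_pos + false_pos) else 0.0
    recall = true_pos / (true_pos + false_neg) if (true_pos + false_neg) else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# The all-orange model finds no limes: 0 true positives, 1 missed lime.
print(f1(true_pos=0, false_pos=0, false_neg=1))   # 0.0 -- worst possible score
# A model that catches the lime but also mislabels one orange as a lime:
print(f1(true_pos=1, false_pos=1, false_neg=0))   # ~0.67
```

The all-orange model scores 0 on F1 even though it is 99% accurate, which is exactly the gap the score is designed to expose.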
Knowing why models classify the way they do is at the heart of explainability. Fortunately, you don’t need to understand the equations to see what F1 reveals about classification performance. If you are really curious, the Max F1 post in Squark’s AI Glossary explains the math and how the maximum F1 value is used in Squark classification models.
Judah Phillips