There Is This “F1” Thing

Why 50% probability isn’t always the prediction cut-off.

Say you are classifying 100 examples of fruit and there are 99 oranges and one lime. If your model predicted all 100 are oranges, then it is 99% accurate. But the high accuracy veils the model’s inability to detect the difference between oranges and limes. Changing the prediction-confidence cut-off is one way to improve a model’s usefulness when class frequencies are imbalanced like this.
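To see this concretely, here is a minimal sketch that scores the all-orange model, assuming scikit-learn (any metrics library would do). Accuracy comes out at 99% even though the lime is never found:

```python
from sklearn.metrics import accuracy_score

# 99 oranges and 1 lime, as in the example above
y_true = ["orange"] * 99 + ["lime"]
# A model that predicts "orange" for every example
y_pred = ["orange"] * 100

print(accuracy_score(y_true, y_pred))  # 0.99 -- yet no lime is ever detected
```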

Classification models compute a metric called “F1” to account for this behavior. F1 is a score between 0 (worst) and 1 (best) that shows classification ability. It is the harmonic mean of precision and recall, so it measures how well the model identifies the differences among groups rather than how often it happens to be right overall.
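A sketch of what F1 captures for the fruit example, again assuming scikit-learn: the F1 score for the lime class is 0 because the model never predicts a lime, while the orange class scores near 1.

```python
from sklearn.metrics import f1_score

y_true = ["orange"] * 99 + ["lime"]
y_pred = ["orange"] * 100

# F1 for the "lime" class: harmonic mean of precision and recall.
# No limes are predicted, so both precision and recall are 0.
print(f1_score(y_true, y_pred, pos_label="lime", zero_division=0))  # 0.0

# F1 for the "orange" class is high, since every orange is found
print(f1_score(y_true, y_pred, pos_label="orange"))  # ~0.995
```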

How Does It Work?

Knowing why models classify the way they do is at the heart of explainability. Fortunately, you don’t need to understand the equations to see what F1 reveals about classification performance. If you are really curious, the Max F1 post in Squark’s AI Glossary explains the math and how the maximum F1 value is used in Squark classification models.
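That post describes how Squark uses the maximum F1 value. As a generic illustration of the idea (not Squark’s actual implementation), the sketch below sweeps candidate cut-offs over hypothetical predicted probabilities and keeps the one that maximizes F1:

```python
import numpy as np
from sklearn.metrics import f1_score

# Hypothetical true labels and predicted probabilities for the positive class
y_true = np.array([0, 0, 0, 1, 0, 1, 0, 0, 1, 0])
y_prob = np.array([0.1, 0.2, 0.3, 0.8, 0.2, 0.4, 0.1, 0.3, 0.9, 0.2])

# Sweep candidate cut-offs and score each one by F1
thresholds = np.linspace(0.05, 0.95, 19)
scores = [f1_score(y_true, (y_prob >= t).astype(int), zero_division=0)
          for t in thresholds]

# The best cut-off is the one with the maximum F1
best = thresholds[int(np.argmax(scores))]
print(f"best cut-off: {best:.2f}, F1: {max(scores):.3f}")
```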

