
Understanding AUC Scores in Depth: What’s the Point? | by Maham Haroon | Sep, 2023

Exploring various metrics side by side for deeper insights

Hi there!

Today, we’re delving into a specific metric used for evaluating model performance: the AUC score. But before we get into the specifics, have you ever wondered why such unintuitive scores are sometimes necessary to assess the performance of our models?

Whether our model handles a single class or multiple classes, the underlying objective remains the same: maximizing correct predictions while minimizing incorrect ones. To explore this fundamental objective, let’s first look at the obligatory confusion matrix, encompassing True Positives, False Positives, True Negatives, and False Negatives.
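As a quick illustration, the four cells of the confusion matrix can be computed directly from a pair of label lists. The labels below are made up purely for demonstration:

```python
# Illustrative binary labels (1 = positive class, 0 = negative class).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Count each confusion-matrix cell by comparing truth and prediction.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # predicted 1, truly 1
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # predicted 1, truly 0
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # predicted 0, truly 0
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # predicted 0, truly 1

print(tp, fp, tn, fn)  # 3 1 3 1
```

In practice you would use `sklearn.metrics.confusion_matrix`, but writing the counts out by hand makes the four outcomes explicit.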

For any classification or prediction problem, each individual prediction has only two outcomes: it is either correct or incorrect.

Consequently, every metric designed to gauge the performance of a prediction or classification algorithm is built on these two measures. The simplest metric that does this is accuracy.

Accuracy

In the context of classification and prediction, accuracy is the proportion of correctly predicted instances out of the total. It’s a very simple and intuitive measure of a model’s predictive performance.

However, is accuracy really sufficient?

While accuracy is a good general measure of a model’s performance, its inadequacy becomes evident when we examine the table below, which we will reference frequently in this article. The table shows performance metrics for four models, each with significantly suboptimal results, yet all of them exhibit high accuracy. For instance, in the first and second cases there is a clear bias toward one class, resulting in dismal classification of the less common class, yet the accuracy is 90%, which is quite misleading.
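The failure mode is easy to reproduce. Assuming a hypothetical dataset with a 90/10 class imbalance, a degenerate model that always predicts the majority class still scores 90% accuracy:

```python
# Hypothetical imbalanced dataset: 90 negative samples, 10 positive.
y_true = [0] * 90 + [1] * 10

# A degenerate "model" that always predicts the majority class.
y_pred = [0] * 100

acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(acc)  # 0.9: 90% accuracy despite never detecting a single positive
```

This is exactly why metrics that look at each class separately, such as the AUC score, are needed alongside accuracy.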