Top-N accuracy

Top-N accuracy means that a prediction counts as “correct” as long as the true class appears among the model's N highest-probability classes. As an example, suppose I have a data set of images whose true labels are:

Dog
Cat
Dog
Bird
Cat
Cat
Mouse
Penguin

For each of these input images, the model will predict a corresponding class.

Input image: Dog – Predicted class: Dog ✔
Input image: Cat – Predicted class: Bird ✘
Input image: Dog – Predicted class: Dog ✔
Input image: Bird – Predicted class: Bird ✔
Input image: Cat – Predicted class: Cat ✔
Input image: Cat – Predicted class: Cat ✔
Input image: Mouse – Predicted class: Penguin ✘
Input image: Penguin – Predicted class: Dog ✘

The Top-1 accuracy for this is 5 correct out of 8, or 62.5%. Now suppose I also list every class the model predicted for each image, in descending order of probability (the further right a class appears, the less likely the model thinks the image belongs to that class):

  • Dog => [Dog, Cat, Bird, Mouse, Penguin]
  • Cat => [Bird, Mouse, Cat, Penguin, Dog]
  • Dog => [Dog, Cat, Bird, Penguin, Mouse]
  • Bird => [Bird, Cat, Mouse, Penguin, Dog]
  • Cat => [Cat, Bird, Mouse, Dog, Penguin]
  • Cat => [Cat, Mouse, Dog, Penguin, Bird]
  • Mouse => [Penguin, Mouse, Cat, Dog, Bird]
  • Penguin => [Dog, Mouse, Penguin, Cat, Bird]

If we take the top-3 accuracy, the correct class only needs to appear among the three highest-probability predictions to count. Even though the model did not get every image right at rank 1, its top-3 accuracy is 100%!
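
Here is a minimal Python sketch that reproduces both numbers from the ranked lists above. The data and the helper function `top_n_accuracy` are just illustrations of the idea, not part of any particular library.

```python
# True label for each of the 8 example images.
true_labels = ["Dog", "Cat", "Dog", "Bird", "Cat", "Cat", "Mouse", "Penguin"]

# Each inner list is the model's ranking, from most to least probable class.
ranked_predictions = [
    ["Dog", "Cat", "Bird", "Mouse", "Penguin"],
    ["Bird", "Mouse", "Cat", "Penguin", "Dog"],
    ["Dog", "Cat", "Bird", "Penguin", "Mouse"],
    ["Bird", "Cat", "Mouse", "Penguin", "Dog"],
    ["Cat", "Bird", "Mouse", "Dog", "Penguin"],
    ["Cat", "Mouse", "Dog", "Penguin", "Bird"],
    ["Penguin", "Mouse", "Cat", "Dog", "Bird"],
    ["Dog", "Mouse", "Penguin", "Cat", "Bird"],
]

def top_n_accuracy(labels, ranked, n):
    """Fraction of samples whose true label appears in the top n predicted classes."""
    hits = sum(label in preds[:n] for label, preds in zip(labels, ranked))
    return hits / len(labels)

print(top_n_accuracy(true_labels, ranked_predictions, 1))  # 0.625  (top-1: 5 of 8)
print(top_n_accuracy(true_labels, ranked_predictions, 3))  # 1.0    (top-3: 8 of 8)
```

In practice the ranked lists would come from sorting the model's per-class probabilities for each image; the counting step is exactly the same.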