metrics

#2
by mrzave - opened
  1. How do you calculate the metrics for the public scores track1 to track4? Are lower values better or worse?
  2. For public_score, is it showing the accuracy or the error rate?

Hi @mrzave ,

I appreciate your interest in the competition. The metrics are described under the Dataset tab (section Evaluation Process); you have to scroll down a bit.
I hope this helps. Let me know if anything is unclear.

Best,
Lukas

Hi @picekl

I'm sorry for not providing more complete information. I've read the Dataset page, but I'm still unsure why the metrics are shown the way they are. For example, the page defines the losses as a sum of all the costs, but the leaderboard shows them as a fraction, so should I divide the sum by the total number of samples or by the total number of costs?

In addition, it seems like a higher public_score places you at a higher rank, so I would assume that public_score is the accuracy, but the Dataset page states that it is the error rate, so I'm confused about which one it is.

Since public_score represents accuracy (please correct me if I'm wrong) and the rest of the tracks represent losses, am I right to say that a higher public_score and lower public_score_track values would net you a better result?

Hi @mrzave ,

You are right. The first metric is accuracy-based; thus, higher == better. The rest is the opposite.
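
For illustration, here is a minimal sketch with toy data (hypothetical numbers and variable names; the exact cost definitions are under the Dataset tab), assuming the leaderboard divides the summed cost by the number of samples:

```python
import numpy as np

# Toy data for illustration only (hypothetical, not from the competition).
y_true = np.array([0, 1, 2, 1])          # ground-truth class ids
y_pred = np.array([0, 2, 2, 1])          # predicted class ids
costs  = np.array([0.0, 3.0, 0.0, 0.0])  # per-sample cost of each prediction

# public_score: accuracy-based, so higher == better.
accuracy = (y_true == y_pred).mean()

# Track scores: summed cost normalized by the number of samples
# (i.e., a mean cost), so lower == better.
mean_cost = costs.sum() / len(costs)

print(f"public_score (accuracy): {accuracy:.2f}")  # 0.75
print(f"track score (mean cost): {mean_cost:.2f}")  # 0.75
```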

However, the leaderboard ranking will not matter as much as you might think.
We will provide 3 × 500€ to the best papers, i.e., the best approaches described in a technical report, with reproducible results.

Best,
Lukas

Thank you for the reply, @picekl, that clears things up.
