How to submit + Code Submission Sample


Hi everyone!

Given our previous experience organizing AI challenges and our own efforts in this domain, we know that challenges often turn into a hunt for the best numbers, sometimes with not-so-fair practices. We do not think overfitting to the leaderboard helps you as a developer or scientist, the end user, or the community overcome the real issues, e.g., getting help when bitten by a snake or after eating a poisonous mushroom.

Besides, ensembles and big models are impractical in real deployments, even if they raise the competition metric by a few percent. That is why we designed some "stricter" rules for evaluating your models.

This year, all participants must submit their inference code, which will be evaluated on our private server over a private test set. All participants must give us access to a (private) GitHub repository containing the code that will run inference over the unseen data. The test data will come with the same metadata structure as the validation metadata; you can therefore easily test your inference locally before pushing working code to the repository.
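
To illustrate, here is a minimal sketch of such a local test (assuming PyTorch; the metadata columns `observation_id` and `image_path`, the file names, and the CSV output format are placeholders for illustration, not the actual challenge schema; check the sample submission linked below for the real interface):

```python
# Minimal local inference sketch. Column names, file names, and the output
# format are placeholders; see the sample submission for the actual schema.
import pandas as pd
import torch
from PIL import Image
from torchvision import transforms

metadata = pd.read_csv("val_metadata.csv")  # same structure as the test metadata

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

model = torch.jit.load("model.pt")  # hypothetical exported model, <= 1 GB
model.eval()

predictions = []
with torch.no_grad():
    for _, row in metadata.iterrows():
        image = Image.open(row["image_path"]).convert("RGB")
        logits = model(preprocess(image).unsqueeze(0))
        predictions.append((row["observation_id"], logits.argmax(dim=1).item()))

pd.DataFrame(predictions, columns=["observation_id", "class_id"]).to_csv(
    "predictions.csv", index=False
)
```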

SHORT RECAP:
- All participants are allowed to submit up to 10 models. Please keep these separated in different branches or as commits with different tags.
- Your models must fit the memory-footprint limit (max. 1 GB). You can upload them directly to GitHub using Git LFS or just "wget" them from the code (see the sketch below).
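
For the "wget" route, a minimal sketch of fetching the weights at run time and sanity-checking the 1 GB limit (the URL and file name are placeholders):

```python
# Sketch of the "wget from the code" option: fetch the weights at inference
# time instead of committing them. URL and file name are placeholders.
import os
import urllib.request

WEIGHTS_URL = "https://example.com/my_model.pt"  # placeholder URL
WEIGHTS_PATH = "my_model.pt"

if not os.path.exists(WEIGHTS_PATH):
    urllib.request.urlretrieve(WEIGHTS_URL, WEIGHTS_PATH)

# Sanity check against the 1 GB memory-footprint limit.
size_gb = os.path.getsize(WEIGHTS_PATH) / 1024**3
assert size_gb <= 1.0, f"weights are {size_gb:.2f} GB, over the 1 GB limit"
```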

Link to Sample Submission --> https://github.com/picekl/LifeCLEF2023-SampleSubmission

Since this is the first time we are applying these rules, some problems are expected. We will try to help you with them, but we will not babysit you :]

Good luck with your efforts, and have a nice day!
Best,
Lukas, Ray & Marek

Hi Lukas, Ray and Marek,

Thank you for providing the sample submission code; it makes things much easier.

However, one question remains for us: will the inference on your servers run on a CPU or on a CUDA-enabled GPU? If the latter, it would be good to know the maximum VRAM capacity of the GPU and whether it supports mixed-precision techniques, so that we can tune our inference scripts accordingly.

Thank you for the clarification.

With kind regards
Team FHDO-BCSG

Hi @BBracke,

We will most likely run it on a CPU.
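
If it helps with tuning, here is a minimal device-agnostic sketch (assuming PyTorch; the model file is a placeholder, not part of our pipeline) that runs in full precision on CPU but uses autocast when a GPU happens to be available:

```python
# Device-agnostic sketch: runs on CPU (the likely evaluation environment)
# but uses a GPU with float16 autocast when one is available.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.jit.load("model.pt", map_location=device)  # placeholder file
model.eval()

batch = torch.randn(1, 3, 224, 224, device=device)  # dummy input for the sketch
with torch.no_grad():
    if device.type == "cuda":
        with torch.autocast(device_type="cuda", dtype=torch.float16):
            logits = model(batch)
    else:
        logits = model(batch)  # full precision on CPU
```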

We will contact you if something is not as expected or not working, so there is no need to stress about it. We want you to submit reasonably sized models; hence the limit.

Best,
Lukas
