Is Slow Code an Issue for Final Evaluation?

My code takes about 1 second per data file after some last-minute changes that improved my score but that I made without considering runtime. I’m able to submit the Docker image and get a public score on the test dataset, but the full dataset used for final evaluation probably has many more files.

Will this be an issue for the final leaderboard evaluation?

Hi @Ben_Swain,

Submissions are not penalized based on their evaluation time. However, each submission is allocated 35 minutes to complete the prediction. If a submission exceeds this time limit, it will be terminated prematurely.
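As a back-of-the-envelope check against that limit (the actual file count for final evaluation isn’t stated in this thread, so the numbers below are hypothetical), you can estimate how many files fit in the 35-minute budget at a measured per-file runtime:

```python
# Rough feasibility check: does ~1 s per file fit in the 35-minute limit?
TIME_LIMIT_S = 35 * 60     # total budget: 2100 seconds
SECONDS_PER_FILE = 1.0     # measured locally; an assumption, not a guarantee

# Upper bound on files processable with zero overhead
max_files = int(TIME_LIMIT_S // SECONDS_PER_FILE)
print(max_files)  # -> 2100

def fits_budget(n_files, per_file_s=SECONDS_PER_FILE, safety=0.8):
    """Check n_files against the budget, leaving 20% headroom
    for container startup, model loading, and I/O."""
    return n_files * per_file_s <= TIME_LIMIT_S * safety

print(fits_budget(1500))  # -> True  (1500 s <= 1680 s)
print(fits_budget(2000))  # -> False (2000 s >  1680 s)
```

If the estimate lands near the limit, it’s worth profiling the per-file step and trimming the slowest parts before the final submission.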


I noticed that my final submission did in fact fail, but since I was only expecting a small score improvement (from 0.77 to 0.78), it probably won’t make much of a difference.