Runtime of Submission Varies Dramatically


I submitted to the ObjectNav Challenge a week ago, and my method ran successfully in a reasonable amount of time. I then retrained some methods and updated my docker image with the new models. The docker image is identical except that it loads different pretrained models (the models themselves did not change in size, architecture, etc.).

Now, the time it takes for my EvalAI submissions to run has increased dramatically (from 1161.029869 sec to 119902.928339 sec on the minival phase). Last week, before my faster submission, I ran profiling code locally; running the same profiling code now, my current submission takes the same amount of time locally as the old one did.
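For reference, the local profiling described above amounts to simple wall-clock timing around the evaluation loop. Here is a minimal sketch of that idea; `run_episode` is a hypothetical stand-in for the actual agent/episode code, not part of the submission:

```python
import time

def run_episode():
    # Placeholder workload standing in for one evaluation episode
    # (an assumption for illustration, not the author's agent code).
    return sum(i * i for i in range(100_000))

def profile(n_episodes=5):
    """Time n_episodes of the workload with a monotonic wall clock."""
    start = time.perf_counter()
    for _ in range(n_episodes):
        run_episode()
    elapsed = time.perf_counter() - start
    print(f"{n_episodes} episodes in {elapsed:.3f} sec "
          f"({elapsed / n_episodes:.3f} sec/episode)")
    return elapsed
```

Timing like this measures pure compute time on the local machine, which is why it would not reflect any queueing or scaling delays on the evaluation server.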

I am resubmitting my old models to see whether I can reproduce the faster execution, but I cannot think of any reason why the changes to my submission would cause this difference in execution time, since it does not occur locally.

Is there any reason from the EvalAI side that this difference in execution time could occur on submission?

Also, is someone able to kill my job that has been running for 2.5 days on the Test-Standard phase?

Thank you for any help you can provide!

To give some additional information: yesterday I resubmitted the same image that previously ran in 1161.029869 sec (i.e., with the old models), and it ran in 3410.029608 sec on the minival phase, which is also a pretty large time variation.

Hi @bucherb, the reported evaluation time also includes a submission's wait time in the queue. Last week, due to some scaling issues, there was a large delay in submission runs that would have inflated the reported execution time for your submissions. We are working on fixing this issue and will update you once the fix is released.
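In other words, the reported number is roughly queue wait plus actual run time, so a stable run time can still produce wildly different reported totals. A rough back-of-the-envelope illustration (the accounting model is an assumption, not EvalAI's actual implementation):

```python
# reported evaluation time ≈ queue wait + actual run time (assumed model)
reported_slow = 119902.928339   # sec, reported for the new submission
run_time_local = 1161.0         # sec, roughly what local profiling suggests

implied_queue_wait = reported_slow - run_time_local
print(f"implied queue wait: {implied_queue_wait:.0f} sec "
      f"(~{implied_queue_wait / 3600:.1f} hours)")
```

Under this model, almost all of the 119902 sec would be time spent waiting in the queue rather than time spent running the agent.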
