We are fixing the submission server

Dear participants of MIT-ARClab Prize for AI Innovation in Space 2024:

Thank you for your participation in the challenge! As you may have noticed already, submissions are failing silently with empty stdout and stderr files in some cases. We are working together with the EvalAI team to solve this issue as soon as possible, and will let you know once the submission server is operating normally again.

Best regards,

Dear participants,

We have some news to share about this issue. While EvalAI is still working on fixing it, the team has shared with us a trick that you can use to bypass it.

The trick consists of adding a sleep statement at the end of your submission file, just after you write the file submission.csv with your predictions. The sleep time has to be greater than 5 minutes (300 seconds). We have updated the example submission files of the SPLID devkit with this trick; for example, if you go to the end of the submission file of the ML baseline, you'll see the sleep statement.

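The exact lines in the devkit may differ; a minimal sketch of the pattern (the function name and column names here are illustrative, not the devkit's own):

```python
import csv
import time

# The workaround: after writing submission.csv, keep the process alive
# for more than 5 minutes (300 seconds) so the EvalAI worker can
# collect the file before the container exits.
SLEEP_SECONDS = 360

def finalize_submission(rows, path="submission.csv"):
    """Write the predictions to a CSV file, then sleep before exiting."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)
    time.sleep(SLEEP_SECONDS)  # must be > 300 seconds
```

Calling a function like this as the very last step of your submission script mirrors the pattern described above: the container writes its results and then idles long enough for scoring to start.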

Note: You have to import Python's time module for this to work.

You can now see the baseline submissions finished and scored in the challenge leaderboard.

Please use this thread to ask any question about this.

Best regards,

Hi Victor,

Could you provide some information about the metric for this challenge?
If I understand correctly, this challenge asks for the solution with the highest F2 score; however, the leaderboard currently ranks submissions the other way around, with lower scores ranked higher.

You’re right, there was a bug in the challenge configuration. It should be ok now.

thank you!


I have a submission with higher performance in my list of submissions; however, when I click the checkbox to make it public, my score does not change on the leaderboard.

I think it is possible for multiple submissions per person to be listed on the leaderboard - so maybe your score from the “old” submission doesn’t change, but a new one shows up?

Maybe try changing the name of the submission? We have two submissions on the leaderboard (baseline-heuristic and baseline-ml), both from the host team.


Hello Victor!
We are still getting this issue, even with the time.sleep(360) workaround. We are able to run the container locally without any errors. Any idea what the issue might be? We are clueless, since we can't see the stderr or stdout.
Thank you!

Hi Andre,

I just submitted and it ran successfully. Does your submission show a "Failed" status in the submissions list?

Hi Victor, thank you for the reply.
The issue was that the docker image was not being built with the correct architecture. By explicitly setting the platform (docker build --platform="linux/amd64" ...) we were able to make a successful submission.
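For anyone hitting the same architecture mismatch (e.g. building the image on an Apple Silicon Mac), the fix described above looks like this (the image name is illustrative):

```shell
# Force an amd64 image even when the host is arm64, so the container
# matches the architecture of the EvalAI evaluation workers.
docker build --platform="linux/amd64" -t my-splid-submission:latest .
```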
