Submission evaluation issue

I have a Docker container that writes a submission to the correct folder locally. When I run it with the evaluation.py script, nodes are also detected correctly.

However, when submitting to EvalAI, the container runs successfully but no nodes are detected at all (neither false positives nor true positives).

Has anyone else encountered this issue? I assume something is wrong with my submission script, but I cannot seem to figure it out.

Hi,

Do you get a score on the leaderboard, or do you get a “failed” status on the submission?

I get a score, but it is all zeroes. The submission also registers as “Finished”, for what it’s worth.

[Screenshot: 2024-02-15_17-13_1]

When you try it locally and it works, what values do you set for the paths of the test set and the prediction file?

I am using “/dataset/test/” for the test set directory and writing the output to “/submission/submission.csv”.
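
The writing step itself is nothing unusual; roughly this simplified sketch (the column names are placeholders, not the real schema):

```python
import pandas as pd

# Simplified sketch of how the prediction file gets written;
# the columns shown here are placeholders, not the real schema.
predictions = pd.DataFrame({
    "ObjectID": [1, 2],
    "Prediction": ["...", "..."],
})
predictions.to_csv("/submission/submission.csv", index=False)
```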

From what I can see, the submission is able to read the test dataset and make predictions on the EvalAI server.

@victorrf

Just bumping this issue. I am still not able to get scores on my submissions.

By dumping to stderr, I can confirm that the dataframe generated by running on the competition set on the EvalAI server looks correct (see attached image).
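
The dump is just a minimal sketch along these lines (assuming the prediction file has already been written by the container):

```python
import sys
import pandas as pd

# Re-read the prediction file the container just wrote and dump the
# full frame to stderr so it shows up in the EvalAI worker logs.
predictions = pd.read_csv("/submission/submission.csv")
print(predictions.to_string(), file=sys.stderr)
```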

Thank you for the help.
[Screenshot: 2024-02-20_11-30]

Hi,

I just tried resubmitting a baseline approach when you raised this issue, and everything was OK on my end.

What do those ObjectIDs in the image correspond to?

Hi @roryestradae79, it looks like you may be using incorrect ObjectIDs in your submission. The ObjectID should correspond to the integer value of the filename (see the code snippet data_df['ObjectID'] = int(data_file.stem)). Since the evaluation script couldn’t find a match between the ObjectIDs in your submission and the test labels, it assumes that you didn’t make any predictions for those data points.
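
A minimal sketch of how that comes together (assuming the test files are CSVs named by their integer ObjectID, e.g. 123.csv; the glob pattern and the read step are illustrative, not the exact pipeline):

```python
from pathlib import Path
import pandas as pd

rows = []
# Assumes test files are CSVs named by integer ID, e.g. "123.csv".
for data_file in sorted(Path("/dataset/test/").glob("*.csv")):
    data_df = pd.read_csv(data_file)
    # Derive ObjectID from the filename so it matches the test labels.
    data_df["ObjectID"] = int(data_file.stem)  # "123.csv" -> 123
    rows.append(data_df)

submission = pd.concat(rows, ignore_index=True)
submission.to_csv("/submission/submission.csv", index=False)
```

If the ObjectID column is derived any other way (e.g. a row counter), the evaluation script will find no matching labels and score every prediction as missing, which is exactly the all-zeroes result you are seeing.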