In the evaluation script (create_json_ss.py), the argument is described as num-model-param (number of model parameters for submission), so I submitted the actual number of parameters of my model. I had assumed the submission server would calculate it, but that is not the case.
EDIT: I figured out that model complexity must be the number of parameters, as written in the evaluation script.
Yes, it is the actual number of model parameters. We do not check it at submission time, but we will verify it for the winning entries, and if it is found to be wrong, that will be grounds for disqualification.
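For anyone unsure how to obtain this number, here is a minimal sketch of counting the learned parameters of a model, assuming PyTorch (the model class in the usage comment is hypothetical):

```python
import torch.nn as nn

def count_parameters(model: nn.Module) -> int:
    # Sum the element counts of all trainable tensors in the model.
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Hypothetical usage: report this value as --num-model-param when submitting.
# model = MySuperResolutionNet()
# print(count_parameters(model))
```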
How are preprocessing or post-processing steps evaluated in this competition in terms of model complexity?
Hi,
Pre-processing and post-processing steps don't have parameters (I am assuming you are referring to image-processing steps), so you need not worry about them. However, if you use some kind of neural stack for either pre-processing or post-processing, please include those parameters in the reported parameter count.
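To make that concrete, a hedged sketch of the total to report when a neural stack is used for pre- or post-processing (reusing the count_parameters helper from the sketch above; all model names are hypothetical):

```python
# Only neural components contribute; classical image-processing steps
# (resizing, filtering, color conversion, etc.) add zero parameters.
total_params = (
    count_parameters(main_model)              # the core network
    + count_parameters(neural_preprocessor)   # e.g., a learned denoiser, if any
    + count_parameters(neural_postprocessor)  # e.g., a learned sharpener, if any
)
```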
Hi, I noticed that the evaluation metric calculated on EvalAI differs from the metric stated in the Word document on the Facebook challenge page. On EvalAI, S = number of learned model parameters × 4 (floating point) / (1024 × 1024), but on the Facebook page, S = number of learned model parameters × 32 (floating point) / (1024 × 1024). Can you clarify this?
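For concreteness (this is just the arithmetic, not an official clarification), the two formulas differ by exactly a factor of 8, the bits-per-byte ratio: 32 bits per parameter equals 4 bytes per parameter. A short sketch with a hypothetical parameter count:

```python
num_params = 1_000_000  # hypothetical parameter count

s_evalai = num_params * 4 / (1024 * 1024)     # 4 bytes/param -> size in MB
s_document = num_params * 32 / (1024 * 1024)  # 32 bits/param -> size in Mbit

# Dividing the document's value by 8 (bits per byte) recovers the EvalAI value.
assert s_document / 8 == s_evalai
```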
Also, the document states that the challenge ends on August 10th, but other places say September 15th. Can this also be clarified?
Thanks
Hi, please see the FYI post for the related questions.