About the Ego4D Long-Term Action Anticipation challenge category

Questions and discussion about the Ego4D Long-Term Action Anticipation (forecasting) challenge, including the evaluation script and submission format.

Hi, I used the provided script (tools/long_term_anticipation/evaluate_forecasting) to create a submission. However, it returns the following error:


Hi Yifeng, there was an issue with the evaluation script version that should be fixed now. Please try again and let me know if it’s still a problem.

Hi,
After the update, I am still getting an error, though a different one from before:

Hi, it looks like your uploaded file doesn’t have a prediction for a specific instance in the test set (clip_uid: 02d5e4b5-71a8-4469-b5ed-ae22055443ed, action_idx: 25). The uploaded file needs to contain one prediction per instance in the test set. From the submission guidelines:

The test set contains a list of actions for each clip_uid, starting from action_idx=0. The first prediction the model has to make is for action_idx=8, i.e. after watching the first 8 actions in the clip, predict the next Z actions. The last prediction the model has to make is for action_idx = total_actions_in_clip - Z, i.e. there must be Z = 20 actions left to predict at the end of the clip. Every action_idx between these two values (inclusive) must have a prediction entry in the submission JSON.
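As a sketch of the rule above, the required keys for one clip can be enumerated like this. The key format `{clip_uid}_{action_idx}` follows the error message printed by the evaluation script; the assumption that both endpoint indices are inclusive matches the description above.

```python
NUM_INPUT_ACTIONS = 8   # actions observed before the first prediction
Z = 20                  # number of future actions to predict each time

def required_keys(clip_uid: str, total_actions_in_clip: int) -> list[str]:
    """All prediction keys the submission JSON must contain for one clip."""
    first_idx = NUM_INPUT_ACTIONS               # first predicted instance
    last_idx = total_actions_in_clip - Z        # last predicted instance
    return [f"{clip_uid}_{i}" for i in range(first_idx, last_idx + 1)]
```

For example, a clip with 45 actions would need prediction entries for action_idx 8 through 25, which is how a key like `02d5e4b5-71a8-4469-b5ed-ae22055443ed_25` ends up required.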

The evaluation script has been updated to include an error for any instances missed. The stdout file should look like this:

ERR: predictions for some test instances are missing.
Please generate a submission that includes the following keys:
{'02d5e4b5-71a8-4469-b5ed-ae22055443ed_25'}

The evaluation will succeed if dummy predictions are provided (e.g., all zeros for verbs/nouns). Please let us know if it is not possible to generate predictions for some of these keys (e.g., if videos are missing) and we will update the script to handle them.
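A minimal sketch of padding a submission with all-zero dummy predictions, as suggested above. The entry schema used here (K candidate sequences of Z verb/noun ids each) is an assumption for illustration; check the challenge submission guidelines for the real structure.

```python
Z = 20  # future actions per prediction (from the guidelines above)
K = 5   # assumed number of candidate sequences per instance

def add_dummy_predictions(submission: dict, missing_keys: set) -> dict:
    """Insert all-zero verb/noun predictions for each missing key.

    The per-entry structure below is hypothetical: K candidate
    sequences, each a list of Z class ids, for "verb" and "noun".
    """
    for key in missing_keys:
        submission[key] = {
            "verb": [[0] * Z for _ in range(K)],
            "noun": [[0] * Z for _ in range(K)],
        }
    return submission
```

For instance, `add_dummy_predictions(existing_submission, {"02d5e4b5-71a8-4469-b5ed-ae22055443ed_25"})` would fill in the instance flagged by the evaluation script with zeros so the evaluation can run.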