======== Explanation of Eye Generation Challenge ========
This task requires generating a realistic eye-image, I, from a given semantic segmentation mask, M, of the same person, P. We have provided three JSON files in the shared data folder that map the eye-images to subjects in the Train, Val and Test datasets. Please use the provided “image/mask to identity” map to generate realistic eye-images for a given segmentation mask of the same subject. For this task, you may use all the training images/masks and try to achieve the best performance on the given test set of semantic segmentation masks. The performance metric is the L2 distance from the original eye-image.
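As a concrete illustration, the L2 metric above can be computed as follows (a minimal sketch; the exact normalization or averaging used by the official scoring script is not specified here):

```python
import numpy as np

def l2_distance(generated, original):
    """Euclidean (L2) distance between a generated eye-image and the
    original eye-image; both arrays must have the same shape."""
    diff = generated.astype(np.float64) - original.astype(np.float64)
    return float(np.sqrt((diff ** 2).sum()))
```

Lower is better: identical images give a distance of 0.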
For example, the file “OpenEDS_test_userID_mapping_to_images.json” contains the user-id as the key and all the files (images/masks) associated with that user-id from the segmentation dataset (under the key “semantic_segmenation_images”, from the folder Semantic_Segmentation_Dataset) and the generative dataset (under the key “generative_images”, from Generative_Dataset).
In order to find the <UID> for a given test sample, e.g. 000000336338.npy, look up the user-id using only the name 000000336338 (without the extension) in the provided OpenEDS_test_userID_mapping_to_images.json. This will also retrieve the lists of images and masks for the same ID in the test subset of Semantic_Segmentation_Dataset and Generative_Dataset, which you can use to generate a realistic eye-image.
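The lookup described above can be sketched in a few lines (a hypothetical helper; the JSON schema is assumed from the description above, with the key spelled “semantic_segmenation_images” exactly as in the provided files):

```python
import json

def find_uid_and_files(mapping_path, sample_name):
    """Find the user-id whose file lists contain `sample_name` (the file
    name without its extension) and return that user's segmentation and
    generative file lists.

    Assumes the JSON maps user-id -> {"semantic_segmenation_images": [...],
    "generative_images": [...]}, as described for the provided mapping files.
    """
    with open(mapping_path) as f:
        mapping = json.load(f)
    for uid, files in mapping.items():
        segs = files.get("semantic_segmenation_images", [])
        gens = files.get("generative_images", [])
        # A match in either list identifies the subject.
        if any(sample_name in name for name in segs + gens):
            return uid, segs, gens
    return None, [], []
```

For example, `find_uid_and_files("OpenEDS_test_userID_mapping_to_images.json", "000000336338")` would return the <UID> together with that subject's segmentation and generative file lists.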
======== Notification of Paper Submission ========
This is a notification that the deadline of paper submission to ICCV workshop regarding OpenEDS challenges has changed.
Please note that there is a two-tier submission process; the major difference is that papers accepted under the early deadline will appear in the ICCV workshop proceedings. See the table below for the two tiers.
|                                        | Tier 1 (early deadline)                                                         | Tier 2 (late deadline)                               |
|----------------------------------------|---------------------------------------------------------------------------------|------------------------------------------------------|
| Paper submission deadline              | (see workshop page)                                                             | (see workshop page)                                  |
| Paper acceptance notification deadline | (see workshop page)                                                             | (see workshop page)                                  |
| Camera Ready deadline                  | (see workshop page)                                                             | (see workshop page)                                  |
| Publication                            | ICCV workshop proceedings; archived on IEEE Xplore; archived on CVF open access | Archived on IEEE Xplore; archived on CVF open access |
======== Common information regarding paper submission ========
The challenge deadline remains the same: it closes on Sept 15th.
The paper submission deadlines are stated in the table above.
Specifically, (1) the first deadline is for folks who want to submit a manuscript and are interested in seeing their paper (if accepted) appear in the ICCV workshop proceedings; (2) the second deadline is for folks who are not ready to submit a paper or do not necessarily care about the paper appearing in the ICCV workshop proceedings.
Challenge participants are welcome to submit papers to meet either of the two deadlines.
For winners, we expect the authors to submit a paper, which is considered accepted; i.e., challenge-winning papers will not undergo formal review if submitted after the challenge ends.
The reason for the change: the two deadlines are the result of changes in the ICCV publication requirements, of which we were informed only recently.
======== How to submit paper ========
- Please submit your paper at: OpenEDS2019. The link is also provided on the official Workshop page under Submissions: https://research.fb.com/programs/the-2019-openeds-workshop-eye-tracking-for-vr-and-ar/
OpenEDS Challenge Team
The test JSON file provides the file names of the segmentation masks and eye-images; however, I couldn’t find any corresponding file in any of the folders.
I then tried searching for eye-images using the list in json[‘semantic_segmenation_images’] and for masks using the list in json[‘generative_images’], and all the files were found.
Can anyone check whether there is something wrong with the provided json?
Thanks for the question. We have updated the information above; take a look and let us know if you still cannot find what you need.
Sorry for my late reply. I did exactly what you described, and everything is OK for the training and validation sets.
However, I could not find any corresponding file for the TEST SET. I then tried searching for eye IMAGES using the list in json[‘semantic_segmenation_images’] and for MASKS using the list in json[‘generative_images’], and all the files were then found.
Can you have a further check on it? Thanks!
I wonder why there are no teams on the leaderboard.
It seems strange.
To the best of our knowledge, this task has never been addressed in the literature, which makes it a challenging problem. We cordially invite you to work on it and to spread the word among friends who might be interested.
For this task, the input is a semantic segmentation mask and the output is an eye-image that you need to generate; therefore, I am not surprised that you didn’t find any images corresponding to the semantic masks. The idea is that by looking at a few images of a subject one can learn the general appearance of the subject’s eye, and then, given a new semantic segmentation mask, transfer that appearance in accordance with the mask.
(1) We had no idea about the changes in deadlines as the original deadline dates at research.fb.com/programs/openeds-challenge and on the EvalAI submission interface were left unchanged.
Is the challenge submission deadline affected by the new paper submission deadlines?
(2) In the official challenge rules, it is stated:
First place Winners will be required to write a paper describing their Model and its functionality to present at […]. Papers must be written in English and sent in PDF format. Each Paper must be no longer than four (4) pages, excluding references. Winners are responsible for complying with the ICCV submission guidelines for […]. The submissions may be published in ICCV Workshop proceedings.
Does this mean that everyone should submit a paper regardless of whether they may win the challenge, to address the case when they win the challenge in the end?
Does the challenge close after the 2nd paper submission deadline?
(3) A more direct question: is the paper submission described above entirely separate from the challenges posed by OpenEDS? That is, should challenge participants ignore the two deadlines above and aim only for the September 15th challenge deadline?
Just to clarify, so you’re saying that we should not have corresponding segmentation and image pairs for the UIDs in the test set, right?
At test time, what you have is the segmentation mask from that specific user, and the task is to generate the image given that segmentation mask.
The main idea behind this challenge is to promote an advance from generating perceptually “real”-looking images towards quantitatively realistic image generation, with explicit geometric constraints that arise from the physical nature of the image-formation process in the real world. Moreover, there is the inherent challenge of leveraging a few images of a person’s eye to learn its appearance and then generating novel images from semantic segmentation masks of the same person. No work known to the organizing committee tackles all of these challenges at once, which makes this challenge extremely exciting and potentially a path to breakthrough technologies. As a side effect, developing such algorithms takes longer. Below, we provide a few research directions to motivate participants:
- Geometric Image Synthesis - http://www.cvlibs.net/publications/Alhaija2018ACCV.pdf
- Few-Shot Unsupervised Image-to-Image Translation - https://nvlabs.github.io/FUNIT/
- Segmentation Guided Image-to-Image Translation with Adversarial Networks - https://arxiv.org/pdf/1901.01569.pdf
The above are just a few pointers that contain bits and pieces of the research required to actually solve this problem, but they are nevertheless a good starting point.