When I visualize the map created from depth observations in simulation (using both the iGibson Docker image and the sim2real branch of the iGibson setup), the maps are most accurate if I set the camera angle to zero instead of the -20 degrees mentioned in Parameters.md. Is there something I am missing? Could you confirm whether the camera angle in simulation is zero or -20 degrees? Also, I see the observation size is 160x90 in both sim and real, compared to 320x180 in the Parameters file.
Thank you for your support throughout this challenge.
We updated the challenge_data.tar.gz file in March (md5sum of gibson-challenge-data.tar.gz =
7145b1e4b5dec9531f355f91153de194), where we changed the camera angle to -20 degrees. You can confirm by running
cat gibson-challenge-data/assets/models/locobot/locobot.urdf | grep 0.349
where you can see
<origin rpy="0 0.3490658503988659 0" xyz="0 0 0.05"/>, which corresponds to the pitch angle of the camera link (the middle entry of rpy is pitch, and 0.349 rad is 20 degrees). Can you make sure you are using the right data? Maybe you were using an old version?
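As a quick sanity check, the pitch value in the URDF converts from radians to the expected 20 degrees:

```python
import math

# Camera pitch from the locobot.urdf origin tag, in radians
pitch_rad = 0.3490658503988659

# Convert to degrees; this rounds to 20.0 (i.e., a 20-degree tilt)
pitch_deg = math.degrees(pitch_rad)
print(round(pitch_deg, 6))
```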
Currently our baselines and most participants are using 160x90 resolution. But we can definitely accommodate 320x180 resolution (at most 320x180, cannot go higher); just let us know and we will set a specific flag when running your submission.
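Note that the two resolutions share the same 16:9 aspect ratio, so a 320x180 observation maps onto 160x90 by a simple stride-2 subsample. A minimal sketch (the depth array here is a hypothetical placeholder, not the actual challenge observation API):

```python
import numpy as np

# Hypothetical 320x180 depth observation, stored height x width
depth = np.random.rand(180, 320).astype(np.float32)

# Stride-2 slicing halves both dimensions: 320x180 -> 160x90
depth_small = depth[::2, ::2]
print(depth_small.shape)  # (90, 160)
```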
Thank you for the help in understanding the discrepancy. I now understand I should have updated the assets data with the latest version. The submission is performing better in the Dev phase now. I will get back to you on the resolution in a day or two.