I was looking through the iGibson code and found some objects that spawn as interactive objects in the ‘interactive’ mode. However, I would like to know whether this set of objects is fixed, and whether the objects used at evaluation time share any characteristics with the provided samples.
Additionally, in the static scene it is clear that collisions are not allowed. However, for the interactive mode (challenge 2), does the episode terminate after a collision with a non-movable object? From the code, the behavior appears to be the same in all 3 challenges.
Collisions with interactive objects are allowed, as we exempt them here: https://github.com/StanfordVL/iGibson/blob/gibson_sim2real/gibson2/envs/locomotor_env.py#L817-L821
So a collision with an interactive object will not terminate an episode.
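For illustration, the exemption described above can be sketched roughly as follows. This is a hypothetical simplification, not the actual iGibson implementation: the function name, the `interactive_body_ids` set, and the assumption that index 2 of each PyBullet-style contact tuple holds the other body's id are mine.

```python
# Hypothetical sketch of the collision-exemption logic described above.
# Assumes PyBullet-style contact tuples, where index 2 is the body id of
# the object the robot touched; names are illustrative, not iGibson's API.

def filter_collisions(collision_links, interactive_body_ids, floor_id):
    """Keep only the contacts that should count toward episode termination.

    Contacts with interactive (movable) objects and with the floor are
    exempt; everything else (walls, furniture, doors) still counts.
    """
    filtered = []
    for contact in collision_links:
        other_body = contact[2]
        if other_body in interactive_body_ids:
            continue  # touching interactive objects is allowed
        if other_body == floor_id:
            continue  # wheels resting on the floor are not a collision
        filtered.append(contact)
    return filtered
```

An episode would then terminate only if the filtered list is non-empty.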
Okay, thank you. Could you also confirm the first part of the question: would there be any other interactive objects apart from the 6 provided in gibson-challenge-data/assets/models/sample_urdfs?
To answer your first question: the interactive objects used during evaluation in Sim will be the same as the ones used during training (gibson-challenge-data/assets/models/sample_urdfs). In the Challenge in Real World phase, however, we will use interactive objects of the same categories, but not necessarily the same instances. For example, your robot may encounter different types of shoes in the real world, not just the exact shoe models provided in “sample_urdfs”.
Please let me know if this answers your question.
@chengshu: Thank you for your response, yes the answer is very helpful.
In the interactive scenario, is the robot allowed to collide with non-interactive objects?
The task description suggests yes, but your answer here suggests no.
“PointNav scenario with interactive objects: in this scenario the agent is allowed (even encouraged) to collide and interact with the environment in order to push obstacles away.”
Thank you for your question.
No, the agent is not allowed to collide with non-interactive objects, both in sim and in real, because these are furniture, walls, or doors that are too heavy for the robot to push and could potentially damage the robot.
On the flip side, attempting to interact with these non-interactive objects will not help the robot achieve its navigation goals. Therefore, the robot is encouraged to learn, from visual input, which objects are interactive versus non-interactive.
Let me know if this answers your question.
Makes sense, thanks for the answer.
In this case, a related issue I’m experiencing is that the robot most often registers collisions when interacting with interactive objects (and thus the episode ends). This makes it close to impossible to interact with these objects.
As far as I can tell from the GUI, the robot is not touching other objects, although it seems to wobble a bit, which may cause parts other than the wheels to touch the ground. Is this the expected behavior?
In our previous experiments we haven’t observed the behavior you mentioned. Which object are you using, and how fast is the robot traveling before it hits the object? We can look into this by reproducing your issue. If it is indeed a problem, we can potentially ignore collisions between the chassis and the floor during evaluation.
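One way to check whether wobbling is producing chassis-floor contacts is to inspect which robot link is actually touching the floor. The helper below is a hypothetical debugging sketch: it assumes PyBullet-style contact tuples (index 3 holds the robot's link index, as returned by `pybullet.getContactPoints`), and the wheel link indices are placeholders that depend on the robot's URDF.

```python
# Hypothetical debug helper for the wobble issue discussed above.
# Contact tuples follow PyBullet's getContactPoints layout, where index 3
# is linkIndexA (the robot's link). The wheel indices below are assumed
# and must be looked up for the actual robot URDF.

ASSUMED_WHEEL_LINKS = {1, 2}  # placeholder left/right wheel link indices

def non_wheel_floor_contacts(contact_points, wheel_links=ASSUMED_WHEEL_LINKS):
    """Return floor contacts involving any robot link other than a wheel.

    An empty result means only the wheels touch the floor, so wobbling is
    not causing chassis-floor collisions.
    """
    return [c for c in contact_points if c[3] not in wheel_links]
```

Logging this list each step during a GUI rollout would show whether the chassis ever makes contact.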
I spent some time looking at the policy visualization (GUI) of the baseline policies that we trained with interactive objects. I observed that touching interactive objects alone will not trigger collisions or episode termination, and I didn’t observe too much wobbling that may cause the chassis to touch the floor (since the interactive objects are typically pretty light). Wobbling might cause the robot’s upper body to touch the nearby walls or furniture though. In all cases, the “interactive” scenario is a lot more challenging than the “static” scenario. So don’t worry too much if you see some performance drop.
Let me know if this answers your question. Thank you!