It seems like agent.reset() is not called after an episode is completed in challenge.py. Is it possible to include this? I have some variables I’d like to reset after every episode.
Also, is it possible to have access to the agent’s current position within agent.py?
@jtruong Thank you for your question.
We have just added agent.reset() at the beginning of each episode in this commit: https://github.com/StanfordVL/iGibson/commit/c96cef96a5df5aec6fc0c6f62bc1c32395361d5f
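For reference, here is a minimal sketch of an agent that clears its per-episode variables in reset(). The act()/reset() method names follow the challenge's agent interface; the internal variables (step_count, trajectory) are just illustrative placeholders for whatever state you want to reset.

```python
class MyAgent:
    """Sketch of a challenge agent with per-episode state."""

    def __init__(self):
        self.step_count = 0
        self.trajectory = []

    def reset(self):
        # Called by challenge.py at the beginning of each episode.
        self.step_count = 0
        self.trajectory = []

    def act(self, observations):
        self.step_count += 1
        # ... compute an action from observations here ...
        return [0.0, 0.0]  # placeholder action
```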
For your second question, the short answer is no. In addition to RGB-D, we only provide a four-dimensional vector as part of the observation. This vector includes the goal's 2D location in the agent's frame, and the agent's current linear velocity (m/s) and angular velocity (rad/s). This should be enough information for the agent to navigate to the goal. You can find more details here: https://github.com/StanfordVL/iGibson/blob/02a607a07a89e7d55ac1f7626b2f5fa3bdbcd119/gibson2/envs/locomotor_env.py#L260-L281
That being said, you can potentially estimate the agent's current position in the odometry frame by integrating the linear and angular velocities over time.
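The integration above can be sketched as simple planar dead reckoning. This assumes a differential-drive-style model and a fixed timestep; the function and variable names are illustrative, not part of the challenge API.

```python
import math

def integrate_pose(pose, v, w, dt):
    """Advance a planar pose (x, y, theta) in the odometry frame by one
    timestep, given linear velocity v (m/s) and angular velocity w (rad/s)."""
    x, y, theta = pose
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += w * dt
    return (x, y, theta)

# Example: drive straight at 1 m/s for 10 steps of 0.1 s each.
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = integrate_pose(pose, v=1.0, w=0.0, dt=0.1)
# pose is now approximately (1.0, 0.0, 0.0)
```

Note that this simple Euler integration accumulates drift over long trajectories, so treat the result as an odometry estimate rather than ground-truth position.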
Thank you for adding in the line of code!
Just to clarify, should I copy the updated challenge.py into my Docker container, or should I expect the base image (https://hub.docker.com/r/gibsonchallenge/gibsonv2) to be updated?