I tried to train an RL object-navigation policy following the instructions. The only thing I changed was the camera configuration (hfov, height, etc.) so that it matches the camera on our robot. Note: I didn't change the reward functions, learning rate, network architectures, etc.
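One thing worth double-checking when changing hfov: the horizontal FOV fixes the focal length in pixels, so at a different resolution or aspect ratio the effective vertical FOV changes too. A small sketch of that relationship (the 79° hfov and 640x480 resolution below are illustrative values, not home-robot's actual settings):

```python
import math

def camera_intrinsics(hfov_deg, width, height):
    """Derive the focal length (in pixels) and the implied vertical FOV
    from a horizontal FOV and image resolution (pinhole camera model)."""
    f = (width / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)
    vfov_deg = math.degrees(2.0 * math.atan((height / 2.0) / f))
    return f, vfov_deg

# e.g. a 79-degree hfov at 640x480 gives f ~ 388 px and vfov ~ 63.5 degrees
f, vfov = camera_intrinsics(79.0, 640, 480)
```

If the robot camera's vertical FOV ends up much narrower than what the pretrained setup used, the agent sees much less of the floor/obstacles, which can make collision avoidance harder to learn.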
I train the policy on 5 GPUs (each running 18 envs).
However, the agent doesn't seem to learn anything: it only learns to take the STOP action in order to avoid collision penalties. Please see the attached TensorBoard screenshot for details.
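This kind of collapse is consistent with a simple expected-return argument: early in training, success is rare, so if the expected per-step collision penalty outweighs the discounted success reward, stopping immediately is the return-maximizing policy. A toy calculation with made-up numbers (the penalty, slack, and success probability below are illustrative, not the actual reward values):

```python
def expected_return(steps, p_collide, collision_penalty, slack,
                    p_success, success_reward, gamma=0.99):
    """Expected discounted return of exploring for `steps` steps
    versus calling STOP immediately (which yields 0)."""
    ret = 0.0
    for t in range(steps):
        # each step pays the slack cost plus the expected collision penalty
        ret += (gamma ** t) * (slack - p_collide * collision_penalty)
    # rare, heavily discounted chance of reaching the goal at the end
    ret += (gamma ** steps) * p_success * success_reward
    return ret

stop_now = 0.0  # immediate STOP: no penalties, no reward
explore = expected_return(steps=100, p_collide=0.3, collision_penalty=0.3,
                          slack=-0.01, p_success=0.05, success_reward=2.5)
# with these numbers exploring has negative expected return, so STOP dominates
```

This is why curricula that defer the collision penalty until the policy already succeeds sometimes are a common fix.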
I tried several times (training from scratch), but none of the trials succeeded.
I'm wondering whether there are any tricks the home-robot team used to make this work.
Any help here is appreciated! @yvsriram @cpaxton
Hey, we actually add the collision penalties and segmentation noise in the second stage of training. You can find the first-stage configs here: facebookresearch/habitat-lab@8037741
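In other words, the curriculum is: first learn to navigate with a clean reward and clean observations, then fine-tune with the penalties and noise enabled. A minimal sketch of that schedule in plain Python (the key names and values are hypothetical, not habitat-lab's actual config schema; the real first-stage settings are in the linked commit):

```python
# Hypothetical two-stage schedule mirroring the described curriculum:
# stage 1 trains without collision penalties or segmentation noise,
# stage 2 switches them on so the policy adapts to realistic conditions.
STAGE_SETTINGS = {
    1: {"collision_penalty": 0.0, "segmentation_noise": 0.0},
    2: {"collision_penalty": 0.3, "segmentation_noise": 0.5},
}

def reward_settings(stage):
    """Return the (hypothetical) reward/noise settings for a training stage."""
    return dict(STAGE_SETTINGS[stage])
```

Starting stage 2 from a stage-1 checkpoint avoids the degenerate early-STOP optimum, since the policy already earns success reward before the penalty is introduced.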