Hello,

I was carefully studying the code for the Panda reach task, and two questions came to mind:
The observation vector returned by the environment contains the position of the robot's end-effector. I wonder whether it would also work if the observation consisted of the robot's joint angles instead of the end-effector position. Theoretically, the agent should still be able to learn. Or not?
The reward is calculated from the distance between the target and the end-effector, or, in sparse mode, it is a binary signal that only indicates whether the distance is below distance_threshold. But with a sparse reward, any DDPG, PPO, or SAC agent will fail to learn. How do you train the agent with the sparse reward? Did you use the hindsight experience replay from SB3?
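For reference, here is a minimal sketch of how I understand that distance-based reward. The threshold value (0.05) and the exact sign convention are my assumptions, not copied from the repository:

```python
import numpy as np

# Sketch of a goal-distance reward; threshold value and sign convention are assumed.
def compute_reward(achieved_goal, desired_goal, distance_threshold=0.05, sparse=True):
    # Euclidean distance between end-effector (achieved goal) and target (desired goal).
    d = np.linalg.norm(np.asarray(achieved_goal) - np.asarray(desired_goal), axis=-1)
    if sparse:
        # Binary signal: one value inside the threshold, another outside.
        return -(d > distance_threshold).astype(np.float32)
    # Dense signal: negative distance, so the reward increases as the gripper approaches.
    return -d.astype(np.float32)
```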
Thanks
Yes! It is precisely the configuration of PandaReachJoints-v3. Edit: my bad, in this environment you still get the end-effector position in the observation.
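For anyone who wants to check this themselves, here is a quick sketch (assuming a recent panda-gym v3 install with the gymnasium API; older releases import gym instead):

```python
import gymnasium as gym
import panda_gym  # importing this registers the Panda environments

env = gym.make("PandaReachJoints-v3")
obs, info = env.reset()

print(env.observation_space)  # Dict with "observation", "achieved_goal", "desired_goal"
print(obs["observation"])     # still based on the end-effector state, not the joint angles
env.close()
```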
True again, the sparsity makes the task really hard to learn. For Reach it can still work, but for the other tasks you have a very low chance of learning anything. That's why we use tricks like HER, indeed.
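In case it helps, a minimal sketch of the HER setup with Stable-Baselines3 looks roughly like this. SAC is just one of the compatible off-policy algorithms, and the hyperparameters below are illustrative, not the ones used for the published agents:

```python
import gymnasium as gym
import panda_gym
from stable_baselines3 import SAC, HerReplayBuffer

env = gym.make("PandaReach-v3")  # sparse reward by default

model = SAC(
    "MultiInputPolicy",  # dict observations: observation / achieved_goal / desired_goal
    env,
    replay_buffer_class=HerReplayBuffer,
    replay_buffer_kwargs=dict(
        n_sampled_goal=4,                  # relabel 4 virtual goals per real transition
        goal_selection_strategy="future",  # pick goals from later steps of the same episode
    ),
    verbose=1,
)
model.learn(total_timesteps=20_000)
```

Note that HER only applies to off-policy algorithms (DDPG, SAC, TD3); PPO cannot use it because it does not keep a replay buffer.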