torch.Size does not match expected state_dim #162
Comments
Hi. However, you have not shared all of the changes you have made, and without them I cannot help you here, as it is not possible for me to guess how you implemented the sampling. Good luck.
So is the size mismatch no longer an issue? In any case, I do not see where this memory class is called, nor the full implementation pipeline of the buffer. I would double-check that the state returned from the buffer is the same as the state that was put into it. Rotation on the spot happens either early in the training process or when the network is not converging; for that, see any issues tagged with the 'convergence' label. If the state used for training is wrong or the sampling is ineffective, the network will not learn much, so double-check the implementation for bugs.
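The round-trip check suggested above can be sketched as follows. This is an illustrative minimal buffer, not the repository's actual code; `ReplayBuffer`, `add`, and `sample` are assumed names, and the state dimension is a placeholder you would replace with your environment's actual `state_dim`.

```python
import numpy as np

class ReplayBuffer:
    """Minimal illustrative buffer that enforces a fixed state_dim on insert,
    so a shape mismatch fails fast at add() time instead of inside training."""

    def __init__(self, capacity=10000, state_dim=24):
        self.capacity = capacity
        self.state_dim = state_dim
        self.storage = []

    def add(self, state, action, reward, done, next_state):
        state = np.asarray(state, dtype=np.float32)
        next_state = np.asarray(next_state, dtype=np.float32)
        # Round-trip guarantee: what goes in must match state_dim exactly.
        assert state.shape == (self.state_dim,), f"bad state shape {state.shape}"
        assert next_state.shape == (self.state_dim,), f"bad next_state shape {next_state.shape}"
        if len(self.storage) >= self.capacity:
            self.storage.pop(0)
        self.storage.append((state, action, reward, done, next_state))

    def sample(self, batch_size):
        idx = np.random.randint(0, len(self.storage), size=batch_size)
        states, actions, rewards, dones, next_states = zip(*(self.storage[i] for i in idx))
        # np.stack would raise here if any stored state had a different length.
        return (np.stack(states), np.asarray(actions), np.asarray(rewards),
                np.asarray(dones), np.stack(next_states))

# Sanity check: sampled states come back with the same dimension they went in with.
buf = ReplayBuffer(state_dim=4)
for _ in range(50):
    buf.add(np.random.rand(4), [0.1, 0.2], 0.0, False, np.random.rand(4))
s, a, r, d, ns = buf.sample(16)
print(s.shape)  # (16, 4)
```

If the printed shape varies between runs, the states being inserted are not a fixed length, which points back at how the state vector is built rather than at the buffer itself.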
Describe the issue
I am trying to improve the replay_buffer in your source code by adding a PrioritizedReplayBuffer class. I am currently facing a dimension mismatch problem: the torch.Size printed on each run of the code is inconsistent. My initial analysis is that it is caused by the random start and goal points. Figure 1 below shows my printed output, and the other pictures show some of my modifications to the source code. Please give me some suggestions, thank you very much.
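For reference, a prioritized buffer can be sketched with preallocated fixed-shape arrays, which makes an inconsistent state size surface immediately at insertion time. This is a simplified proportional-prioritization sketch (no sum-tree) and is not the repository's code; all names and parameters here are illustrative assumptions.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Simplified proportional prioritized replay sketch. Preallocated arrays
    force every stored transition to have exactly state_dim entries, so a
    mismatched state length raises a broadcast error in add()."""

    def __init__(self, capacity, state_dim, action_dim, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.states = np.zeros((capacity, state_dim), np.float32)
        self.actions = np.zeros((capacity, action_dim), np.float32)
        self.rewards = np.zeros(capacity, np.float32)
        self.dones = np.zeros(capacity, np.float32)
        self.next_states = np.zeros((capacity, state_dim), np.float32)
        self.priorities = np.zeros(capacity, np.float32)
        self.ptr, self.size = 0, 0

    def add(self, s, a, r, done, s2):
        i = self.ptr
        self.states[i] = s          # mismatched lengths fail here, not in training
        self.actions[i] = a
        self.rewards[i] = r
        self.dones[i] = float(done)
        self.next_states[i] = s2
        # New samples get the current max priority so each is seen at least once.
        self.priorities[i] = self.priorities[:self.size].max() if self.size else 1.0
        self.ptr = (self.ptr + 1) % self.capacity
        self.size = min(self.size + 1, self.capacity)

    def sample(self, batch_size, beta=0.4):
        p = self.priorities[:self.size] ** self.alpha
        p /= p.sum()
        idx = np.random.choice(self.size, batch_size, p=p)
        # Importance-sampling weights, normalized to at most 1.
        weights = (self.size * p[idx]) ** (-beta)
        weights /= weights.max()
        return (self.states[idx], self.actions[idx], self.rewards[idx],
                self.dones[idx], self.next_states[idx], idx, weights)

    def update_priorities(self, idx, td_errors, eps=1e-6):
        self.priorities[idx] = np.abs(td_errors) + eps

buf = PrioritizedReplayBuffer(capacity=100, state_dim=4, action_dim=2)
for _ in range(20):
    buf.add(np.random.rand(4), np.random.rand(2), 0.0, False, np.random.rand(4))
s, a, r, d, s2, idx, w = buf.sample(8)
print(s.shape)  # (8, 4)
```

With this layout, a state of the wrong length cannot silently enter the buffer, so a run-to-run variation in torch.Size would instead show up as an immediate error pointing at the state construction.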
Screenshots
Figure 1