Improve apex #122
base: master
Conversation
Hi @ymd-h, thanks for the brilliant PR!! I would really appreciate your continued support! I checked the code changes, and I think all of them will contribute to improving Ape-X performance.
Yeah, this is true. I am also considering some workarounds. Thanks!
cpprb 10.5.2 improves the performance of PrioritizedReplayBuffer and MPPrioritizedReplayBuffer.
I updated the PR.
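For context, here is a minimal sketch of the PrioritizedReplayBuffer sample/update loop that the upgrade speeds up (my own illustration based on the cpprb usage pattern, not code from this PR; shapes and hyper-parameters are placeholders):

```python
# Illustration only: the sample()/update_priorities() loop is the hot path
# that benefits from the cpprb upgrade. Shapes and hyper-parameters are placeholders.
import numpy as np
from cpprb import PrioritizedReplayBuffer

rb = PrioritizedReplayBuffer(
    1 << 16,
    env_dict={
        "obs": {"shape": 4},
        "act": {},
        "rew": {},
        "next_obs": {"shape": 4},
        "done": {},
    },
)

# Store a few transitions (priority defaults to the current max priority).
for _ in range(100):
    rb.add(obs=np.random.rand(4), act=0, rew=1.0,
           next_obs=np.random.rand(4), done=0.0)

# Sample a prioritized batch; importance weights and indexes come back with it.
batch = rb.sample(32, beta=0.4)

# After computing TD errors, feed the new priorities back into the buffer.
td_error = np.abs(np.random.rand(32))  # stand-in for real TD errors
rb.update_priorities(batch["indexes"], td_error)
```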
On my local machine, (a part of) the logs are: …

It seems that Super Linter v3 is strangely broken and we need to upgrade to v4. (I will add soon.)
This PR is for #117

- Use MPPrioritizedReplayBuffer (the multi-process version of PER), which doesn't require a manual lock: MPPrioritizedReplayBuffer doesn't lock the whole buffer, only the critical section.
- Read the shared value with the get() method (a SyncManager value updated with trained_steps += requires a manual lock, see the doc; there is a sketch of this below).
- Remove writer.flush() (I just let TensorFlow flush. We might need to adjust the flush timing for our needs).

This improvement has a larger effect for a small network and/or a simple Env.

I tested by running example/run_apex_dqn.py with the default "CartPole-v0" on a CPU machine.
Please test other Envs and/or on a GPU.
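As a sketch of the get() vs += point above (a minimal illustration using only the Python standard library, not code from this PR; the counter name trained_steps is reused here just for readability):

```python
# Sketch: a SyncManager Value proxy. Reading it is a single proxy call,
# but "+=" is a read-modify-write and is NOT atomic across processes.
from multiprocessing import Manager, Process, Lock

def worker(trained_steps, lock, n):
    for _ in range(n):
        with lock:                      # without this lock, increments can be lost
            trained_steps.value += 1

if __name__ == "__main__":
    manager = Manager()                 # a SyncManager instance
    trained_steps = manager.Value("i", 0)
    lock = Lock()

    procs = [Process(target=worker, args=(trained_steps, lock, 1000))
             for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

    # Read-only access: get() (or .value) is a single call, no manual lock needed.
    print(trained_steps.get())          # -> 4000
```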
P.S.
Weight distribution with multiple queues seems to be inefficient because of the multiple copying.
I will continue to consider other solutions.
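To show what I mean by multiple copying, here is a toy example of the one-queue-per-explorer pattern I have in mind (my own illustration, not tf2rl's actual code): pushing the same weights into N queues serializes and copies them once per queue.

```python
# Toy illustration of the "one queue per explorer" broadcast pattern.
# Each put() pickles the weights again, so N explorers => N copies of the same data.
from multiprocessing import Queue
import numpy as np

n_explorers = 4
queues = [Queue() for _ in range(n_explorers)]

weights = np.zeros((16, 16))    # small stand-in for the network weights

for q in queues:
    q.put(weights)              # serialized and copied once per queue

# Each explorer would then call get() on its own queue:
for q in queues:
    _ = q.get()
```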