[Feature] Adds per-head entropy coefficients to PPOLoss #2972
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/rl/2972
Note: Links to docs will display an error until the docs builds have been completed.
❌ 5 New Failures, 8 Pending, 4 Unrelated Failures
As of commit 1aad5ab with merge base 31bd542:
NEW FAILURES - The following jobs have failed:
FLAKY - The following jobs failed but were likely due to flakiness present on trunk:
BROKEN TRUNK - The following jobs failed but were present on the merge base:
👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Would be nice to see some tests for this new feature :)
Thanks for this!
I would:
- edit the docstrings
- add a test in test/test_costs.py (a sketch of the kind of check follows below)

Otherwise LGTM!
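(A rough sketch of one invariant such a test could assert, using hypothetical head names: with a uniform mapping, the per-head weighting should collapse to the scalar path. The tests actually added to test/test_costs.py may look quite different.)

```python
import torch

def test_uniform_mapping_matches_scalar():
    # Hypothetical per-head entropies.
    entropy = {"head0": torch.tensor(0.5), "head1": torch.tensor(1.5)}
    coef = 0.01
    # Per-head weighting with a uniform coefficient...
    mapping_bonus = -sum(coef * e for e in entropy.values())
    # ...should equal the legacy scalar path on the summed entropy.
    scalar_bonus = -coef * sum(entropy.values())
    torch.testing.assert_close(mapping_bonus, scalar_bonus)
```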
@vmoens Added some tests and more details to the docstrings. I tried to match the current style of the tests, so please let me know if I should chop them into smaller chunks. Also, is there a good E2E sanity check to make sure I didn't mess up the math?
Force-pushed from 14cbbdc to 1aad5ab.
I took the liberty of editing `_weighted_loss_entropy` such that `NestedKey`s could be used (not only strings), in case you have ("group0", "agent0").
The logic is that the coeffs will be

```python
coefs = TensorDict({
    ("group0", "agent0"): 0.1,
    ("group1", "agent0"): 0.15,
    ("group1", "agent1"): 2,
})
```

I also registered the coefs as buffers using TensorDictParams - maybe it's overkill?
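(For illustration, a runnable sketch of the nested-key coefficients above; the TensorDictParams detail reflects my reading of that API, where `no_convert=True` keeps the entries as plain tensors, i.e. buffers rather than parameters.)

```python
import torch
from tensordict import TensorDict
from tensordict.nn import TensorDictParams

# The coefficients from the comment above; tuple (nested) keys
# address sub-entries such as ("group1", "agent0") directly.
coefs = TensorDict({
    ("group0", "agent0"): torch.tensor(0.1),
    ("group1", "agent0"): torch.tensor(0.15),
    ("group1", "agent1"): torch.tensor(2.0),
})

print(coefs["group1", "agent0"])  # tensor(0.1500)

# Wrapping in TensorDictParams registers the coefficients on the
# loss module; no_convert=True keeps them as buffers, not Parameters.
buffered = TensorDictParams(coefs, no_convert=True)
```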
Description
Adds per-head entropy coefficients to `PPOLoss`.
- `entropy_coef` now accepts a `Mapping[str, float]`: one coefficient per action head.
- `_weighted_entropy` applies the correct weighting: `-coef * entropy` for a scalar coefficient, `-Σ coef_head * entropy_head` for a mapping.
- `forward()` switches from direct multiplication to `_weighted_entropy`.
- The `self.entropy_coef` buffer is kept for backward-compat (a dummy `0.0` when a mapping is used).

Motivation and Context
Composite / multi-head policies often need different exploration pressure on
each sub-action (e.g. steering vs. throttle).
This change lets users express that directly via a dict instead of patching the
loss externally.
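For concreteness, a minimal sketch of the weighting described above, written in plain PyTorch with hypothetical head names; this mirrors the described behavior, not the actual `PPOLoss` internals:

```python
import torch
from typing import Mapping, Union

def weighted_entropy(
    entropy: Union[torch.Tensor, Mapping[str, torch.Tensor]],
    coef: Union[float, Mapping[str, float]],
) -> torch.Tensor:
    """Entropy bonus term, mirroring the behavior described above."""
    if isinstance(coef, Mapping):
        # Mapping case: -sum over heads of coef_head * entropy_head.
        return -sum(coef[head] * ent for head, ent in entropy.items())
    # Scalar case: unchanged legacy behavior, -coef * entropy.
    return -coef * entropy

# Two hypothetical heads under different exploration pressure:
entropies = {"steering": torch.tensor(0.7), "throttle": torch.tensor(1.3)}
coefs = {"steering": 0.01, "throttle": 0.05}
print(weighted_entropy(entropies, coefs))  # tensor(-0.0720)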