[Feature] Adds per-head entropy coefficients to PPOLoss #2972


Merged: 3 commits merged into pytorch:main from fs/factor-wise-entropy-penalty on Jun 2, 2025

Conversation

@felixsittenauer (Contributor) commented May 22, 2025

Description

Adds per-head entropy coefficients to PPOLoss.

  • entropy_coef now accepts
    • scalar (legacy behaviour) or
    • Mapping[str, float] — one coefficient per action head.
  • New helper _weighted_entropy applies the correct weighting (see the sketch after this list):
    • scalar path: -coef * entropy
    • mapping path: -Σ coef_head * entropy_head
  • forward() switches from direct multiplication to _weighted_entropy.
  • Keeps the self.entropy_coef buffer for backward compatibility (dummy 0.0 when a mapping is used).
  • Docstrings updated.
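
A minimal sketch of the weighting logic (illustrative only; the merged helper is named _weighted_loss_entropy, also supports nested keys, and its real signature may differ):

from typing import Mapping, Union

import torch

def weighted_entropy(
    entropy: Union[torch.Tensor, Mapping[str, torch.Tensor]],
    coef: Union[float, Mapping[str, float]],
) -> torch.Tensor:
    if not isinstance(coef, Mapping):
        # scalar path (legacy behaviour): -coef * entropy
        return -coef * entropy
    # mapping path: -sum over heads of coef_head * entropy_head
    return -sum(c * entropy[head] for head, c in coef.items())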

Motivation and Context

Composite / multi-head policies often need different exploration pressure on
each sub-action (e.g. steering vs. throttle).
This change lets users express that directly via a dict instead of patching the
loss externally.
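
For example, a usage sketch (the head names, coefficient values, and the actor/critic modules are illustrative, not taken from this PR):

from torchrl.objectives import PPOLoss

# `actor` and `critic` are assumed to be existing TensorDictModules whose
# composite action space has "steering" and "throttle" heads.
loss_fn = PPOLoss(
    actor_network=actor,
    critic_network=critic,
    entropy_coef={"steering": 0.05, "throttle": 0.01},  # per-head coefficients
)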

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds core functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation (update in the documentation)
  • Example (update in the folder of examples)

Checklist

  • I have read the CONTRIBUTION guide (required)
  • My change requires a change to the documentation.
  • I have updated the tests accordingly (required for a bug fix or a new feature).
  • I have updated the documentation accordingly.

pytorch-bot bot commented May 22, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/rl/2972

Note: Links to docs will display an error until the docs builds have been completed.

❌ 5 New Failures, 8 Pending, 4 Unrelated Failures

As of commit 1aad5ab with merge base 31bd542:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label on May 22, 2025
@louisfaury (Contributor) commented:

Would be nice to see some tests for this new feature :)

@vmoens changed the title to "[Feature] Adds per-head entropy coefficients to PPOLoss" on May 28, 2025
@vmoens (Collaborator) left a comment:

Thanks for this!
I would:

  • edit the docstrings
  • add a test in test/test_costs.py

Otherwise LGTM!

@felixsittenauer (Contributor, Author) commented:

@vmoens Added some tests and more details to the docstrings. I tried to match the current style of the tests, so please let me know if I should instead chop them into smaller chunks.

Also, is there a good E2E sanity check to make sure I didn't mess up the math?

@vmoens force-pushed the fs/factor-wise-entropy-penalty branch from 14cbbdc to 1aad5ab on June 2, 2025 at 09:37
@vmoens (Collaborator) left a comment:

I took the liberty of editing _weighted_loss_entropy so that NestedKeys can be used (not only strings), in case you have nested keys like ("group0", "agent0").

The logic is that the coefficients will be:

coefs = TensorDict({
    ("group0", "agent0"): 0.1,
    ("group1", "agent0"): 0.15,
    ("group1", "agent1"): 2,
})

I also registered the coefs as buffers using TensorDictParams - maybe it's overkill?
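
A sketch of what that could look like (assuming tensordict's TensorDict and TensorDictParams; the merged code may differ):

import torch
from tensordict import TensorDict
from tensordict.nn import TensorDictParams

# nested keys address the heads of a grouped / multi-agent action spec
coefs = TensorDict(
    {
        ("group0", "agent0"): torch.tensor(0.1),
        ("group1", "agent0"): torch.tensor(0.15),
        ("group1", "agent1"): torch.tensor(2.0),
    },
    batch_size=[],
)
# no_convert=True registers the values as buffers rather than trainable Parameters
coef_buffers = TensorDictParams(coefs, no_convert=True)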

@vmoens added the enhancement label on Jun 2, 2025
@vmoens merged commit 00657f0 into pytorch:main on Jun 2, 2025
88 of 103 checks passed