Add support for automatic mixed precision #31

Open
bpopeters wants to merge 18 commits into master

Conversation

bpopeters (Collaborator)

This pull request adds decorators to EntmaxBisectFunction, SparsemaxBisectFunction, Entmax15Function, and SparsemaxFunction so that they autocast to 32-bit precision when called inside an autocast context. This seems to be the right thing to do because bf16 and fp16 introduce numerical stability issues that cannot easily be solved. Upcasting should make it easy to incorporate entmax into any mixed-precision setting.
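For context, the decorator pattern looks roughly like this (a minimal sketch using torch.cuda.amp.custom_fwd/custom_bwd; the UpcastedSquare Function is a toy stand-in, not code from this PR):

```python
import torch
from torch.cuda.amp import custom_fwd, custom_bwd


class UpcastedSquare(torch.autograd.Function):
    """Toy stand-in for one of the entmax Functions, showing the decorator pattern."""

    @staticmethod
    @custom_fwd(cast_inputs=torch.float32)  # under autocast, fp16/bf16 inputs arrive as fp32
    def forward(ctx, X):
        ctx.save_for_backward(X)
        return X * X

    @staticmethod
    @custom_bwd  # backward runs with autocast disabled, in the dtype forward used
    def backward(ctx, grad_output):
        (X,) = ctx.saved_tensors
        return 2 * X * grad_output


if torch.cuda.is_available():
    x = torch.randn(4, device="cuda", dtype=torch.float16, requires_grad=True)
    with torch.cuda.amp.autocast():
        y = UpcastedSquare.apply(x)
    print(y.dtype)  # torch.float32: the op itself ran in full precision
```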

The actual code that needed to be written was incredibly simple; the vast majority of commits are me making silly mistakes while writing the tests.

@bpopeters (Collaborator, Author)

The tests pass, but they shouldn't -- torch.amp was only added in PyTorch 1.10.
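One straightforward fix (a hypothetical sketch; the pytest guard and test below are illustrative, not the actual test code) is to skip the AMP tests when torch.amp is absent:

```python
import pytest
import torch

# Hypothetical guard: torch.amp only exists in PyTorch >= 1.10, so skip the
# AMP tests on older versions instead of letting them pass vacuously.
requires_torch_amp = pytest.mark.skipif(
    not hasattr(torch, "amp"), reason="torch.amp requires PyTorch >= 1.10"
)


@requires_torch_amp
def test_sparsemax_runs_under_cpu_autocast():
    from entmax import sparsemax

    x = torch.randn(2, 5)
    with torch.amp.autocast("cpu"):
        sparsemax(x, dim=-1)  # should run without dtype errors
```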
