Provide the most important <cmath> functions for __half/bfloat16 #3410
Comments
We got them recently; they are here: https://github.com/NVIDIA/cccl/blob/main/libcudacxx/include/cuda/std/__cmath/traits.h
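For anyone landing here, a minimal usage sketch (untested), assuming a recent CCCL where the __half overloads from the linked traits.h are exposed through <cuda/std/cmath> and enabled when <cuda_fp16.h> is included:

```cpp
// Minimal usage sketch (untested), assuming a recent CCCL where the __half
// overloads from cuda/std/__cmath/traits.h are reachable via <cuda/std/cmath>
// and enabled when <cuda_fp16.h> is included first.
#include <cuda_fp16.h>
#include <cuda/std/cmath>
#include <cstdio>
#include <limits>

__global__ void classify(__half x)
{
    // These resolve to the __half overloads provided by libcu++,
    // not to the host <cmath> functions.
    printf("isnan=%d isinf=%d isfinite=%d signbit=%d\n",
           int(cuda::std::isnan(x)),
           int(cuda::std::isinf(x)),
           int(cuda::std::isfinite(x)),
           int(cuda::std::signbit(x)));
}

int main()
{
    // Pass -inf so every classification result is interesting.
    classify<<<1, 1>>>(__float2half(-std::numeric_limits<float>::infinity()));
    cudaDeviceSynchronize();
    return 0;
}
```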
I don't see any documentation related to them.
We should add that.
Where is the functionality for the FP8 types? I can't find it.
I believe we wanted to split this up, because there is currently no support for FP8 in libcu++, whereas support for the 16-bit types is effectively done.
(Issue title changed from "<cmath> functions for __half/bfloat16/fp8" to "<cmath> functions for __half/bfloat16".)
OK, I fixed the issue title then.
FP8 in CUDA is more of a placeholder for exploiting tensor cores; it is not really a general-purpose type like bfloat16 or half. I don't think we need to support common math operations over FP8.
We have ugly hacks in our unit tests that specialize these. AFAIK, we don't need that.
Is this a duplicate?
Area
libcu++
Is your feature request related to a problem? Please describe.
<cmath> is widely used to interact with floating-point types, but libcu++ doesn't provide support for __half/bfloat16/fp8.
Describe the solution you'd like
Provide at least the following functions (a bit-level sketch follows the list):
isfinite
isinf
isnan
isnormal
signbit
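For context on scope, here is a bit-level sketch of how these classification functions reduce to a few mask tests on the 16-bit representation. The helper names are hypothetical and this is not libcu++'s actual implementation; bfloat16 works the same way with masks 0x8000 / 0x7F80 / 0x007F.

```cpp
// Hypothetical sketch: classification of IEEE binary16 (__half) via its raw
// bits. Not libcu++'s implementation; it only shows that each requested
// function is a simple mask test on the 16-bit encoding.
#include <cuda_fp16.h>
#include <cstring>

__host__ __device__ inline unsigned short half_bits(__half h)
{
    unsigned short bits;
    memcpy(&bits, &h, sizeof(bits));  // type-pun safely via memcpy
    return bits;
}

__host__ __device__ inline bool half_isnan(__half h)
{
    unsigned short b = half_bits(h);
    return (b & 0x7C00) == 0x7C00 && (b & 0x03FF) != 0;  // exp all ones, mantissa != 0
}

__host__ __device__ inline bool half_isinf(__half h)
{
    return (half_bits(h) & 0x7FFF) == 0x7C00;  // exp all ones, mantissa == 0
}

__host__ __device__ inline bool half_isfinite(__half h)
{
    return (half_bits(h) & 0x7C00) != 0x7C00;  // exp not all ones
}

__host__ __device__ inline bool half_isnormal(__half h)
{
    unsigned short e = half_bits(h) & 0x7C00;
    return e != 0 && e != 0x7C00;  // exp neither zero (zero/subnormal) nor all ones
}

__host__ __device__ inline bool half_signbit(__half h)
{
    return (half_bits(h) & 0x8000) != 0;  // sign bit set
}
```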
Describe alternatives you've considered
No response
Additional context
No response