Specialize __is_extended_floating_point for FP8 types #3470

Merged
bernhardmgruber merged 4 commits into NVIDIA:main from ext_fp on Jan 22, 2025

Conversation

bernhardmgruber
Contributor

No description provided.

@bernhardmgruber requested a review from a team as a code owner on January 22, 2025 at 08:45
struct __is_extended_floating_point<__nv_fp8_e5m2> : true_type
{};

# ifndef _CCCL_NO_INLINE_VARIABLES

bernhardmgruber (Contributor, Author)

I left those checks in case I need to backport this changeset.
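
For readers without the full file, here is a minimal sketch of the shape this change takes, assuming the usual libcu++ trait pattern; the stand-in primary template, the `_CCCL_HAS_NVFP8` gating, and the flat namespace are illustration-only assumptions, and only the `__nv_fp8_e5m2` specialization and the `_CCCL_NO_INLINE_VARIABLES` guard are actually visible in the diff above:

```cpp
// Sketch only -- the real libcu++ code lives in its own namespace and uses its
// own visibility/inline-variable macros. The primary template is a stand-in.
#include <type_traits>

using true_type  = std::true_type;
using false_type = std::false_type;

template <class _Tp>
struct __is_extended_floating_point : false_type
{};

#ifndef _CCCL_NO_INLINE_VARIABLES
template <class _Tp>
inline constexpr bool __is_extended_floating_point_v = __is_extended_floating_point<_Tp>::value;
#endif

#if defined(_CCCL_HAS_NVFP8) // feature macro from NVIDIA#3429; gating assumed here
#  include <cuda_fp8.h>      // provides __nv_fp8_e4m3 and __nv_fp8_e5m2

template <>
struct __is_extended_floating_point<__nv_fp8_e4m3> : true_type
{};

template <>
struct __is_extended_floating_point<__nv_fp8_e5m2> : true_type
{};

#  ifndef _CCCL_NO_INLINE_VARIABLES
// Mirrored variable-template specializations; keeping the guard eases
// backporting to branches whose compilers lack inline variables.
template <>
inline constexpr bool __is_extended_floating_point_v<__nv_fp8_e4m3> = true;

template <>
inline constexpr bool __is_extended_floating_point_v<__nv_fp8_e5m2> = true;
#  endif // !_CCCL_NO_INLINE_VARIABLES
#endif // _CCCL_HAS_NVFP8
```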

@bernhardmgruber changed the title from "Specialize __is_extended_floating_point_v for FP8 types" to "Specialize __is_extended_floating_point for FP8 types" on Jan 22, 2025
@miscco mentioned this pull request on Jan 22, 2025
Contributor

🟩 CI finished in 1h 53m: Pass: 100%/135 | Total: 2d 15h | Avg: 28m 13s | Max: 1h 28m | Hits: 363%/23408
  • 🟩 cub: Pass: 100%/38 | Total: 1d 02h | Avg: 42m 01s | Max: 59m 13s | Hits: 321%/3540

    🟩 cpu
      🟩 amd64              Pass: 100%/36  | Total:  1d 00h | Avg: 41m 32s | Max: 59m 13s | Hits: 321%/3540  
      🟩 arm64              Pass: 100%/2   | Total:  1h 41m | Avg: 50m 54s | Max: 51m 01s
    🟩 ctk
      🟩 12.0               Pass: 100%/5   | Total:  3h 40m | Avg: 44m 11s | Max: 49m 38s | Hits: 321%/885   
      🟩 12.5               Pass: 100%/2   | Total:  1h 56m | Avg: 58m 01s | Max: 58m 18s
      🟩 12.6               Pass: 100%/31  | Total: 21h 00m | Avg: 40m 39s | Max: 59m 13s | Hits: 320%/2655  
    🟩 cudacxx
      🟩 ClangCUDA18        Pass: 100%/2   | Total:  1h 54m | Avg: 57m 11s | Max: 57m 15s
      🟩 nvcc12.0           Pass: 100%/5   | Total:  3h 40m | Avg: 44m 11s | Max: 49m 38s | Hits: 321%/885   
      🟩 nvcc12.5           Pass: 100%/2   | Total:  1h 56m | Avg: 58m 01s | Max: 58m 18s
      🟩 nvcc12.6           Pass: 100%/29  | Total: 19h 05m | Avg: 39m 30s | Max: 59m 13s | Hits: 320%/2655  
    🟩 cudacxx_family
      🟩 ClangCUDA          Pass: 100%/2   | Total:  1h 54m | Avg: 57m 11s | Max: 57m 15s
      🟩 nvcc               Pass: 100%/36  | Total:  1d 00h | Avg: 41m 11s | Max: 59m 13s | Hits: 321%/3540  
    🟩 cxx
      🟩 Clang14            Pass: 100%/4   | Total:  3h 03m | Avg: 45m 55s | Max: 48m 11s
      🟩 Clang15            Pass: 100%/1   | Total: 44m 43s | Avg: 44m 43s | Max: 44m 43s
      🟩 Clang16            Pass: 100%/1   | Total: 48m 43s | Avg: 48m 43s | Max: 48m 43s
      🟩 Clang17            Pass: 100%/1   | Total: 51m 10s | Avg: 51m 10s | Max: 51m 10s
      🟩 Clang18            Pass: 100%/7   | Total:  4h 58m | Avg: 42m 36s | Max: 57m 15s
      🟩 GCC7               Pass: 100%/2   | Total:  1h 46m | Avg: 53m 19s | Max: 58m 28s
      🟩 GCC8               Pass: 100%/1   | Total: 46m 36s | Avg: 46m 36s | Max: 46m 36s
      🟩 GCC9               Pass: 100%/2   | Total:  1h 48m | Avg: 54m 25s | Max: 59m 13s
      🟩 GCC10              Pass: 100%/1   | Total: 50m 47s | Avg: 50m 47s | Max: 50m 47s
      🟩 GCC11              Pass: 100%/1   | Total: 46m 27s | Avg: 46m 27s | Max: 46m 27s
      🟩 GCC12              Pass: 100%/3   | Total:  1h 31m | Avg: 30m 32s | Max: 49m 55s
      🟩 GCC13              Pass: 100%/8   | Total:  4h 03m | Avg: 30m 26s | Max: 51m 01s
      🟩 MSVC14.29          Pass: 100%/2   | Total:  1h 09m | Avg: 34m 41s | Max: 40m 41s | Hits: 321%/1770  
      🟩 MSVC14.39          Pass: 100%/2   | Total:  1h 30m | Avg: 45m 24s | Max: 48m 56s | Hits: 320%/1770  
      🟩 NVHPC24.7          Pass: 100%/2   | Total:  1h 56m | Avg: 58m 01s | Max: 58m 18s
    🟩 cxx_family
      🟩 Clang              Pass: 100%/14  | Total: 10h 26m | Avg: 44m 45s | Max: 57m 15s
      🟩 GCC                Pass: 100%/18  | Total: 11h 34m | Avg: 38m 34s | Max: 59m 13s
      🟩 MSVC               Pass: 100%/4   | Total:  2h 40m | Avg: 40m 02s | Max: 48m 56s | Hits: 321%/3540  
      🟩 NVHPC              Pass: 100%/2   | Total:  1h 56m | Avg: 58m 01s | Max: 58m 18s
    🟩 gpu
      🟩 h100               Pass: 100%/2   | Total: 41m 41s | Avg: 20m 50s | Max: 22m 04s
      🟩 v100               Pass: 100%/36  | Total:  1d 01h | Avg: 43m 12s | Max: 59m 13s | Hits: 321%/3540  
    🟩 jobs
      🟩 Build              Pass: 100%/31  | Total:  1d 00h | Avg: 47m 00s | Max: 59m 13s | Hits: 321%/3540  
      🟩 DeviceLaunch       Pass: 100%/1   | Total: 21m 22s | Avg: 21m 22s | Max: 21m 22s
      🟩 GraphCapture       Pass: 100%/1   | Total: 17m 19s | Avg: 17m 19s | Max: 17m 19s
      🟩 HostLaunch         Pass: 100%/3   | Total: 57m 49s | Avg: 19m 16s | Max: 19m 39s
      🟩 TestGPU            Pass: 100%/2   | Total: 43m 28s | Avg: 21m 44s | Max: 21m 45s
    🟩 sm
      🟩 90                 Pass: 100%/2   | Total: 41m 41s | Avg: 20m 50s | Max: 22m 04s
      🟩 90a                Pass: 100%/1   | Total: 18m 48s | Avg: 18m 48s | Max: 18m 48s
    🟩 std
      🟩 17                 Pass: 100%/14  | Total: 11h 15m | Avg: 48m 12s | Max: 59m 13s | Hits: 321%/2655  
      🟩 20                 Pass: 100%/24  | Total: 15h 22m | Avg: 38m 25s | Max: 58m 18s | Hits: 321%/885   
    
  • 🟩 libcudacxx: Pass: 100%/37 | Total: 14h 58m | Avg: 24m 17s | Max: 1h 04m | Hits: 411%/10166

    🟩 cpu
      🟩 amd64              Pass: 100%/35  | Total: 14h 17m | Avg: 24m 29s | Max:  1h 04m | Hits: 411%/10166 
      🟩 arm64              Pass: 100%/2   | Total: 41m 23s | Avg: 20m 41s | Max: 20m 56s
    🟩 ctk
      🟩 12.0               Pass: 100%/5   | Total:  1h 50m | Avg: 22m 00s | Max: 36m 08s | Hits: 412%/2496  
      🟩 12.5               Pass: 100%/2   | Total:  1h 02m | Avg: 31m 19s | Max: 31m 34s
      🟩 12.6               Pass: 100%/30  | Total: 12h 06m | Avg: 24m 12s | Max:  1h 04m | Hits: 410%/7670  
    🟩 cudacxx
      🟩 ClangCUDA18        Pass: 100%/4   | Total:  1h 04m | Avg: 16m 12s | Max: 22m 19s
      🟩 nvcc12.0           Pass: 100%/5   | Total:  1h 50m | Avg: 22m 00s | Max: 36m 08s | Hits: 412%/2496  
      🟩 nvcc12.5           Pass: 100%/2   | Total:  1h 02m | Avg: 31m 19s | Max: 31m 34s
      🟩 nvcc12.6           Pass: 100%/26  | Total: 11h 01m | Avg: 25m 25s | Max:  1h 04m | Hits: 410%/7670  
    🟩 cudacxx_family
      🟩 ClangCUDA          Pass: 100%/4   | Total:  1h 04m | Avg: 16m 12s | Max: 22m 19s
      🟩 nvcc               Pass: 100%/33  | Total: 13h 53m | Avg: 25m 16s | Max:  1h 04m | Hits: 411%/10166 
    🟩 cxx
      🟩 Clang14            Pass: 100%/4   | Total:  1h 25m | Avg: 21m 18s | Max: 26m 46s
      🟩 Clang15            Pass: 100%/1   | Total: 23m 03s | Avg: 23m 03s | Max: 23m 03s
      🟩 Clang16            Pass: 100%/1   | Total: 21m 52s | Avg: 21m 52s | Max: 21m 52s
      🟩 Clang17            Pass: 100%/1   | Total: 24m 46s | Avg: 24m 46s | Max: 24m 46s
      🟩 Clang18            Pass: 100%/8   | Total:  2h 30m | Avg: 18m 48s | Max: 23m 46s
      🟩 GCC7               Pass: 100%/2   | Total: 38m 05s | Avg: 19m 02s | Max: 20m 07s
      🟩 GCC8               Pass: 100%/1   | Total: 21m 50s | Avg: 21m 50s | Max: 21m 50s
      🟩 GCC9               Pass: 100%/2   | Total: 41m 42s | Avg: 20m 51s | Max: 21m 14s
      🟩 GCC10              Pass: 100%/1   | Total: 23m 02s | Avg: 23m 02s | Max: 23m 02s
      🟩 GCC11              Pass: 100%/1   | Total: 21m 49s | Avg: 21m 49s | Max: 21m 49s
      🟩 GCC12              Pass: 100%/1   | Total: 21m 39s | Avg: 21m 39s | Max: 21m 39s
      🟩 GCC13              Pass: 100%/8   | Total:  3h 14m | Avg: 24m 20s | Max:  1h 04m
      🟩 MSVC14.29          Pass: 100%/2   | Total:  1h 16m | Avg: 38m 21s | Max: 40m 34s | Hits: 412%/5002  
      🟩 MSVC14.39          Pass: 100%/2   | Total:  1h 31m | Avg: 45m 35s | Max: 54m 16s | Hits: 409%/5164  
      🟩 NVHPC24.7          Pass: 100%/2   | Total:  1h 02m | Avg: 31m 19s | Max: 31m 34s
    🟩 cxx_family
      🟩 Clang              Pass: 100%/15  | Total:  5h 05m | Avg: 20m 21s | Max: 26m 46s
      🟩 GCC                Pass: 100%/16  | Total:  6h 02m | Avg: 22m 40s | Max:  1h 04m
      🟩 MSVC               Pass: 100%/4   | Total:  2h 47m | Avg: 41m 58s | Max: 54m 16s | Hits: 411%/10166 
      🟩 NVHPC              Pass: 100%/2   | Total:  1h 02m | Avg: 31m 19s | Max: 31m 34s
    🟩 gpu
      🟩 v100               Pass: 100%/37  | Total: 14h 58m | Avg: 24m 17s | Max:  1h 04m | Hits: 411%/10166 
    🟩 jobs
      🟩 Build              Pass: 100%/32  | Total: 12h 48m | Avg: 24m 01s | Max: 54m 16s | Hits: 411%/10166 
      🟩 NVRTC              Pass: 100%/2   | Total: 44m 08s | Avg: 22m 04s | Max: 22m 45s
      🟩 Test               Pass: 100%/2   | Total:  1h 23m | Avg: 41m 58s | Max:  1h 04m
      🟩 VerifyCodegen      Pass: 100%/1   | Total:  2m 03s | Avg:  2m 03s | Max:  2m 03s
    🟩 sm
      🟩 90                 Pass: 100%/1   | Total: 12m 38s | Avg: 12m 38s | Max: 12m 38s
      🟩 90a                Pass: 100%/2   | Total: 29m 07s | Avg: 14m 33s | Max: 16m 53s
    🟩 std
      🟩 17                 Pass: 100%/15  | Total:  6h 12m | Avg: 24m 51s | Max: 40m 34s | Hits: 412%/7508  
      🟩 20                 Pass: 100%/21  | Total:  8h 43m | Avg: 24m 56s | Max:  1h 04m | Hits: 407%/2658  
    
  • 🟩 thrust: Pass: 100%/37 | Total: 19h 14m | Avg: 31m 12s | Max: 1h 28m | Hits: 325%/9180

    🟩 cmake_options
      🟩 -DTHRUST_DISPATCH_TYPE=Force32bit Pass: 100%/2   | Total: 37m 27s | Avg: 18m 43s | Max: 25m 34s
    🟩 cpu
      🟩 amd64              Pass: 100%/35  | Total: 18h 17m | Avg: 31m 21s | Max:  1h 28m | Hits: 325%/9180  
      🟩 arm64              Pass: 100%/2   | Total: 56m 41s | Avg: 28m 20s | Max: 30m 16s
    🟩 ctk
      🟩 12.0               Pass: 100%/5   | Total:  2h 47m | Avg: 33m 35s | Max: 39m 13s | Hits: 317%/1836  
      🟩 12.5               Pass: 100%/2   | Total:  2h 35m | Avg:  1h 17m | Max:  1h 28m
      🟩 12.6               Pass: 100%/30  | Total: 13h 50m | Avg: 27m 41s | Max: 50m 47s | Hits: 327%/7344  
    🟩 cudacxx
      🟩 ClangCUDA18        Pass: 100%/2   | Total: 49m 51s | Avg: 24m 55s | Max: 25m 18s
      🟩 nvcc12.0           Pass: 100%/5   | Total:  2h 47m | Avg: 33m 35s | Max: 39m 13s | Hits: 317%/1836  
      🟩 nvcc12.5           Pass: 100%/2   | Total:  2h 35m | Avg:  1h 17m | Max:  1h 28m
      🟩 nvcc12.6           Pass: 100%/28  | Total: 13h 00m | Avg: 27m 53s | Max: 50m 47s | Hits: 327%/7344  
    🟩 cudacxx_family
      🟩 ClangCUDA          Pass: 100%/2   | Total: 49m 51s | Avg: 24m 55s | Max: 25m 18s
      🟩 nvcc               Pass: 100%/35  | Total: 18h 24m | Avg: 31m 33s | Max:  1h 28m | Hits: 325%/9180  
    🟩 cxx
      🟩 Clang14            Pass: 100%/4   | Total:  2h 03m | Avg: 30m 49s | Max: 32m 21s
      🟩 Clang15            Pass: 100%/1   | Total: 28m 40s | Avg: 28m 40s | Max: 28m 40s
      🟩 Clang16            Pass: 100%/1   | Total: 28m 51s | Avg: 28m 51s | Max: 28m 51s
      🟩 Clang17            Pass: 100%/1   | Total: 28m 41s | Avg: 28m 41s | Max: 28m 41s
      🟩 Clang18            Pass: 100%/7   | Total:  2h 47m | Avg: 23m 53s | Max: 34m 29s
      🟩 GCC7               Pass: 100%/2   | Total:  1h 03m | Avg: 31m 46s | Max: 32m 36s
      🟩 GCC8               Pass: 100%/1   | Total: 30m 21s | Avg: 30m 21s | Max: 30m 21s
      🟩 GCC9               Pass: 100%/2   | Total:  1h 11m | Avg: 35m 32s | Max: 39m 13s
      🟩 GCC10              Pass: 100%/1   | Total: 30m 28s | Avg: 30m 28s | Max: 30m 28s
      🟩 GCC11              Pass: 100%/1   | Total: 31m 49s | Avg: 31m 49s | Max: 31m 49s
      🟩 GCC12              Pass: 100%/1   | Total: 31m 57s | Avg: 31m 57s | Max: 31m 57s
      🟩 GCC13              Pass: 100%/8   | Total:  2h 50m | Avg: 21m 15s | Max: 35m 06s
      🟩 MSVC14.29          Pass: 100%/2   | Total:  1h 07m | Avg: 33m 36s | Max: 33m 37s | Hits: 332%/3672  
      🟩 MSVC14.39          Pass: 100%/3   | Total:  2h 05m | Avg: 41m 52s | Max: 50m 47s | Hits: 321%/5508  
      🟩 NVHPC24.7          Pass: 100%/2   | Total:  2h 35m | Avg:  1h 17m | Max:  1h 28m
    🟩 cxx_family
      🟩 Clang              Pass: 100%/14  | Total:  6h 16m | Avg: 26m 54s | Max: 34m 29s
      🟩 GCC                Pass: 100%/16  | Total:  7h 09m | Avg: 26m 49s | Max: 39m 13s
      🟩 MSVC               Pass: 100%/5   | Total:  3h 12m | Avg: 38m 33s | Max: 50m 47s | Hits: 325%/9180  
      🟩 NVHPC              Pass: 100%/2   | Total:  2h 35m | Avg:  1h 17m | Max:  1h 28m
    🟩 gpu
      🟩 v100               Pass: 100%/37  | Total: 19h 14m | Avg: 31m 12s | Max:  1h 28m | Hits: 325%/9180  
    🟩 jobs
      🟩 Build              Pass: 100%/31  | Total: 17h 36m | Avg: 34m 05s | Max:  1h 28m | Hits: 315%/7344  
      🟩 TestCPU            Pass: 100%/3   | Total: 54m 34s | Avg: 18m 11s | Max: 39m 09s | Hits: 365%/1836  
      🟩 TestGPU            Pass: 100%/3   | Total: 42m 57s | Avg: 14m 19s | Max: 18m 02s
    🟩 sm
      🟩 90a                Pass: 100%/1   | Total: 17m 16s | Avg: 17m 16s | Max: 17m 16s
    🟩 std
      🟩 17                 Pass: 100%/14  | Total:  8h 05m | Avg: 34m 42s | Max:  1h 06m | Hits: 327%/5508  
      🟩 20                 Pass: 100%/21  | Total: 10h 31m | Avg: 30m 02s | Max:  1h 28m | Hits: 323%/3672  
    
  • 🟩 cudax: Pass: 100%/20 | Total: 1h 45m | Avg: 5m 17s | Max: 17m 01s | Hits: 388%/522

    🟩 cpu
      🟩 amd64              Pass: 100%/16  | Total:  1h 35m | Avg:  5m 57s | Max: 17m 01s | Hits: 388%/522   
      🟩 arm64              Pass: 100%/4   | Total: 10m 29s | Avg:  2m 37s | Max:  2m 40s
    🟩 ctk
      🟩 12.0               Pass: 100%/1   | Total: 10m 44s | Avg: 10m 44s | Max: 10m 44s | Hits: 388%/261   
      🟩 12.5               Pass: 100%/2   | Total: 10m 15s | Avg:  5m 07s | Max:  5m 10s
      🟩 12.6               Pass: 100%/17  | Total:  1h 24m | Avg:  4m 59s | Max: 17m 01s | Hits: 388%/261   
    🟩 cudacxx
      🟩 nvcc12.0           Pass: 100%/1   | Total: 10m 44s | Avg: 10m 44s | Max: 10m 44s | Hits: 388%/261   
      🟩 nvcc12.5           Pass: 100%/2   | Total: 10m 15s | Avg:  5m 07s | Max:  5m 10s
      🟩 nvcc12.6           Pass: 100%/17  | Total:  1h 24m | Avg:  4m 59s | Max: 17m 01s | Hits: 388%/261   
    🟩 cudacxx_family
      🟩 nvcc               Pass: 100%/20  | Total:  1h 45m | Avg:  5m 17s | Max: 17m 01s | Hits: 388%/522   
    🟩 cxx
      🟩 Clang14            Pass: 100%/1   | Total:  3m 03s | Avg:  3m 03s | Max:  3m 03s
      🟩 Clang15            Pass: 100%/1   | Total:  3m 17s | Avg:  3m 17s | Max:  3m 17s
      🟩 Clang16            Pass: 100%/1   | Total:  3m 06s | Avg:  3m 06s | Max:  3m 06s
      🟩 Clang17            Pass: 100%/1   | Total:  3m 16s | Avg:  3m 16s | Max:  3m 16s
      🟩 Clang18            Pass: 100%/4   | Total: 23m 40s | Avg:  5m 55s | Max: 15m 12s
      🟩 GCC10              Pass: 100%/1   | Total:  2m 57s | Avg:  2m 57s | Max:  2m 57s
      🟩 GCC11              Pass: 100%/1   | Total:  2m 57s | Avg:  2m 57s | Max:  2m 57s
      🟩 GCC12              Pass: 100%/2   | Total: 20m 11s | Avg: 10m 05s | Max: 17m 01s
      🟩 GCC13              Pass: 100%/4   | Total: 10m 18s | Avg:  2m 34s | Max:  2m 37s
      🟩 MSVC14.36          Pass: 100%/1   | Total: 10m 44s | Avg: 10m 44s | Max: 10m 44s | Hits: 388%/261   
      🟩 MSVC14.39          Pass: 100%/1   | Total: 12m 03s | Avg: 12m 03s | Max: 12m 03s | Hits: 388%/261   
      🟩 NVHPC24.7          Pass: 100%/2   | Total: 10m 15s | Avg:  5m 07s | Max:  5m 10s
    🟩 cxx_family
      🟩 Clang              Pass: 100%/8   | Total: 36m 22s | Avg:  4m 32s | Max: 15m 12s
      🟩 GCC                Pass: 100%/8   | Total: 36m 23s | Avg:  4m 32s | Max: 17m 01s
      🟩 MSVC               Pass: 100%/2   | Total: 22m 47s | Avg: 11m 23s | Max: 12m 03s | Hits: 388%/522   
      🟩 NVHPC              Pass: 100%/2   | Total: 10m 15s | Avg:  5m 07s | Max:  5m 10s
    🟩 gpu
      🟩 v100               Pass: 100%/20  | Total:  1h 45m | Avg:  5m 17s | Max: 17m 01s | Hits: 388%/522   
    🟩 jobs
      🟩 Build              Pass: 100%/18  | Total:  1h 13m | Avg:  4m 05s | Max: 12m 03s | Hits: 388%/522   
      🟩 Test               Pass: 100%/2   | Total: 32m 13s | Avg: 16m 06s | Max: 17m 01s
    🟩 sm
      🟩 90                 Pass: 100%/1   | Total:  2m 32s | Avg:  2m 32s | Max:  2m 32s
      🟩 90a                Pass: 100%/1   | Total:  2m 37s | Avg:  2m 37s | Max:  2m 37s
    🟩 std
      🟩 17                 Pass: 100%/4   | Total: 12m 55s | Avg:  3m 13s | Max:  5m 10s
      🟩 20                 Pass: 100%/16  | Total:  1h 32m | Avg:  5m 48s | Max: 17m 01s | Hits: 388%/522   
    
  • 🟩 cccl_c_parallel: Pass: 100%/2 | Total: 11m 54s | Avg: 5m 57s | Max: 9m 46s

    🟩 cpu
      🟩 amd64              Pass: 100%/2   | Total: 11m 54s | Avg:  5m 57s | Max:  9m 46s
    🟩 ctk
      🟩 12.6               Pass: 100%/2   | Total: 11m 54s | Avg:  5m 57s | Max:  9m 46s
    🟩 cudacxx
      🟩 nvcc12.6           Pass: 100%/2   | Total: 11m 54s | Avg:  5m 57s | Max:  9m 46s
    🟩 cudacxx_family
      🟩 nvcc               Pass: 100%/2   | Total: 11m 54s | Avg:  5m 57s | Max:  9m 46s
    🟩 cxx
      🟩 GCC13              Pass: 100%/2   | Total: 11m 54s | Avg:  5m 57s | Max:  9m 46s
    🟩 cxx_family
      🟩 GCC                Pass: 100%/2   | Total: 11m 54s | Avg:  5m 57s | Max:  9m 46s
    🟩 gpu
      🟩 v100               Pass: 100%/2   | Total: 11m 54s | Avg:  5m 57s | Max:  9m 46s
    🟩 jobs
      🟩 Build              Pass: 100%/1   | Total:  2m 08s | Avg:  2m 08s | Max:  2m 08s
      🟩 Test               Pass: 100%/1   | Total:  9m 46s | Avg:  9m 46s | Max:  9m 46s
    
  • 🟩 python: Pass: 100%/1 | Total: 42m 14s | Avg: 42m 14s | Max: 42m 14s

    🟩 cpu
      🟩 amd64              Pass: 100%/1   | Total: 42m 14s | Avg: 42m 14s | Max: 42m 14s
    🟩 ctk
      🟩 12.6               Pass: 100%/1   | Total: 42m 14s | Avg: 42m 14s | Max: 42m 14s
    🟩 cudacxx
      🟩 nvcc12.6           Pass: 100%/1   | Total: 42m 14s | Avg: 42m 14s | Max: 42m 14s
    🟩 cudacxx_family
      🟩 nvcc               Pass: 100%/1   | Total: 42m 14s | Avg: 42m 14s | Max: 42m 14s
    🟩 cxx
      🟩 GCC13              Pass: 100%/1   | Total: 42m 14s | Avg: 42m 14s | Max: 42m 14s
    🟩 cxx_family
      🟩 GCC                Pass: 100%/1   | Total: 42m 14s | Avg: 42m 14s | Max: 42m 14s
    🟩 gpu
      🟩 v100               Pass: 100%/1   | Total: 42m 14s | Avg: 42m 14s | Max: 42m 14s
    🟩 jobs
      🟩 Test               Pass: 100%/1   | Total: 42m 14s | Avg: 42m 14s | Max: 42m 14s
    

👃 Inspect Changes

Modifications in project?

Project
CCCL Infrastructure
+/- libcu++
CUB
Thrust
CUDA Experimental
python
CCCL C Parallel Library
Catch2Helper

Modifications in project or dependencies?

Project
CCCL Infrastructure
+/- libcu++
+/- CUB
+/- Thrust
+/- CUDA Experimental
+/- python
+/- CCCL C Parallel Library
+/- Catch2Helper

🏃 Runner counts (total jobs: 135)

# Runner
92 linux-amd64-cpu16
17 linux-amd64-gpu-v100-latest-1
15 windows-amd64-cpu16
10 linux-arm64-cpu16
1 linux-amd64-gpu-h100-latest-1-testing

@bernhardmgruber merged commit 030fcd7 into NVIDIA:main on Jan 22, 2025
146 of 149 checks passed
@bernhardmgruber linked an issue on Jan 22, 2025 that may be closed by this pull request
@bernhardmgruber deleted the ext_fp branch on January 22, 2025 at 15:46
bernhardmgruber added a commit to bernhardmgruber/cccl that referenced this pull request Jan 22, 2025
Also ensure that we actually can enable FP8 due to FP16 and BF16 requirements

Co-authored-by: Michael Schellenberger Costa <[email protected]>
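
As a rough illustration of the requirement that commit message describes, a hypothetical guard along these lines would only enable FP8 when FP16 and BF16 support is available; `_CCCL_HAS_NVFP16` and `_CCCL_HAS_NVBF16` are assumed names for illustration, only `_CCCL_HAS_NVFP8` appears in the related PRs (#3429):

```cpp
// Hypothetical sketch: <cuda_fp8.h> builds on the __half and __nv_bfloat16
// types, so FP8 support is only advertised when FP16 and BF16 are usable too.
#if defined(_CCCL_HAS_NVFP16) && defined(_CCCL_HAS_NVBF16) // assumed macro names
#  define _CCCL_HAS_NVFP8
#endif
```
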
davebayer pushed a commit to davebayer/cccl that referenced this pull request Jan 22, 2025
Also ensure that we actually can enable FP8 due to FP16 and BF16 requirements

Co-authored-by: Michael Schellenberger Costa <[email protected]>
davebayer pushed a commit to davebayer/cccl that referenced this pull request Jan 22, 2025
Also ensure that we actually can enable FP8 due to FP16 and BF16 requirements

Co-authored-by: Michael Schellenberger Costa <[email protected]>
davebayer added a commit to davebayer/cccl that referenced this pull request Jan 22, 2025
update docs

update docs

add `memcmp`, `memmove` and `memchr` implementations

implement tests

Use cuda::std::min/max in Thrust (NVIDIA#3364)

Implement `cuda::std::numeric_limits` for `__half` and `__nv_bfloat16` (NVIDIA#3361)

* implement `cuda::std::numeric_limits` for `__half` and `__nv_bfloat16`

Cleanup util_arch (NVIDIA#2773)

Deprecate thrust::null_type (NVIDIA#3367)

Deprecate cub::DeviceSpmv (NVIDIA#3320)

Fixes: NVIDIA#896

Improves `DeviceSegmentedSort` test run time for large number of items and segments (NVIDIA#3246)

* fixes segment offset generation

* switches to analytical verification

* switches to analytical verification for pairs

* fixes spelling

* adds tests for large number of segments

* fixes narrowing conversion in tests

* addresses review comments

* fixes includes

Compile basic infra test with C++17 (NVIDIA#3377)

Adds support for large number of items and large number of segments to `DeviceSegmentedSort` (NVIDIA#3308)

* fixes segment offset generation

* switches to analytical verification

* switches to analytical verification for pairs

* addresses review comments

* introduces segment offset type

* adds tests for large number of segments

* adds support for large number of segments

* drops segment offset type

* fixes thrust namespace

* removes about-to-be-deprecated cub iterators

* no exec specifier on defaulted ctor

* fixes gcc7 linker error

* uses local_segment_index_t throughout

* determine offset type based on type returned by segment iterator begin/end iterators

* minor style improvements

Exit with error when RAPIDS CI fails. (NVIDIA#3385)

cuda.parallel: Support structured types as algorithm inputs (NVIDIA#3218)

* Introduce gpu_struct decorator and typing

* Enable `reduce` to accept arrays of structs as inputs

* Add test for reducing arrays-of-struct

* Update documentation

* Use a numpy array rather than ctypes object

* Change zeros -> empty for output array and temp storage

* Add a TODO for typing GpuStruct

* Documentation updates

* Remove test_reduce_struct_type from test_reduce.py

* Revert to `to_cccl_value()` accepting ndarray + GpuStruct

* Bump copyrights

---------

Co-authored-by: Ashwin Srinath <[email protected]>

Deprecate thrust::async (NVIDIA#3324)

Fixes: NVIDIA#100

Review/Deprecate CUB `util.ptx` for CCCL 2.x (NVIDIA#3342)

Fix broken `_CCCL_BUILTIN_ASSUME` macro (NVIDIA#3314)

* add compiler-specific path
* fix device code path
* add _CCC_ASSUME

Deprecate thrust::numeric_limits (NVIDIA#3366)

Replace `typedef` with `using` in libcu++ (NVIDIA#3368)

Deprecate thrust::optional (NVIDIA#3307)

Fixes: NVIDIA#3306

Upgrade to Catch2 3.8  (NVIDIA#3310)

Fixes: NVIDIA#1724

refactor `<cuda/std/cstdint>` (NVIDIA#3325)

Co-authored-by: Bernhard Manfred Gruber <[email protected]>

Update CODEOWNERS (NVIDIA#3331)

* Update CODEOWNERS

* Update CODEOWNERS

* Update CODEOWNERS

* [pre-commit.ci] auto code formatting

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

Fix sign-compare warning (NVIDIA#3408)

Implement more cmath functions to be usable on host and device (NVIDIA#3382)

* Implement more cmath functions to be usable on host and device

* Implement math roots functions

* Implement exponential functions

Redefine and deprecate thrust::remove_cvref (NVIDIA#3394)

* Redefine and deprecate thrust::remove_cvref

Co-authored-by: Michael Schellenberger Costa <[email protected]>

Fix assert definition for NVHPC due to constexpr issues (NVIDIA#3418)

NVHPC cannot decide at compile time where the code would run so _CCCL_ASSERT within a constexpr function breaks it.

Fix this by always using the host definition which should also work on device.

Fixes NVIDIA#3411

Extend CUB reduce benchmarks (NVIDIA#3401)

* Rename max.cu to custom.cu, since it uses a custom operator
* Extend types covered by min.cu to all fundamental types
* Add some notes on how to collect tuning parameters

Fixes: NVIDIA#3283

Update upload-pages-artifact to v3 (NVIDIA#3423)

* Update upload-pages-artifact to v3

* Empty commit

---------

Co-authored-by: Ashwin Srinath <[email protected]>

Replace and deprecate thrust::cuda_cub::terminate (NVIDIA#3421)

`std::linalg` accessors and `transposed_layout` (NVIDIA#2962)

Add round up/down to multiple (NVIDIA#3234)

[FEA]: Introduce Python module with CCCL headers (NVIDIA#3201)

* Add cccl/python/cuda_cccl directory and use from cuda_parallel, cuda_cooperative

* Run `copy_cccl_headers_to_aude_include()` before `setup()`

* Create python/cuda_cccl/cuda/_include/__init__.py, then simply import cuda._include to find the include path.

* Add cuda.cccl._version exactly as for cuda.cooperative and cuda.parallel

* Bug fix: cuda/_include only exists after shutil.copytree() ran.

* Use `f"cuda-cccl @ file://{cccl_path}/python/cuda_cccl"` in setup.py

* Remove CustomBuildCommand, CustomWheelBuild in cuda_parallel/setup.py (they are equivalent to the default functions)

* Replace := operator (needs Python 3.8+)

* Fix oversights: remove `pip3 install ./cuda_cccl` lines from README.md

* Restore original README.md: `pip3 install -e` now works on first pass.

* cuda_cccl/README.md: FOR INTERNAL USE ONLY

* Remove `$pymajor.$pyminor.` prefix in cuda_cccl _version.py (as suggested under NVIDIA#3201 (comment))

Command used: ci/update_version.sh 2 8 0

* Modernize pyproject.toml, setup.py

Trigger for this change:

* NVIDIA#3201 (comment)

* NVIDIA#3201 (comment)

* Install CCCL headers under cuda.cccl.include

Trigger for this change:

* NVIDIA#3201 (comment)

Unexpected accidental discovery: cuda.cooperative unit tests pass without CCCL headers entirely.

* Factor out cuda_cccl/cuda/cccl/include_paths.py

* Reuse cuda_cccl/cuda/cccl/include_paths.py from cuda_cooperative

* Add missing Copyright notice.

* Add missing __init__.py (cuda.cccl)

* Add `"cuda.cccl"` to `autodoc.mock_imports`

* Move cuda.cccl.include_paths into function where it is used. (Attempt to resolve Build and Verify Docs failure.)

* Add # TODO: move this to a module-level import

* Modernize cuda_cooperative/pyproject.toml, setup.py

* Convert cuda_cooperative to use hatchling as build backend.

* Revert "Convert cuda_cooperative to use hatchling as build backend."

This reverts commit 61637d6.

* Move numpy from [build-system] requires -> [project] dependencies

* Move pyproject.toml [project] dependencies -> setup.py install_requires, to be able to use CCCL_PATH

* Remove copy_license() and use license_files=["../../LICENSE"] instead.

* Further modernize cuda_cccl/setup.py to use pathlib

* Trivial simplifications in cuda_cccl/pyproject.toml

* Further simplify cuda_cccl/pyproject.toml, setup.py: remove inconsequential code

* Make cuda_cooperative/pyproject.toml more similar to cuda_cccl/pyproject.toml

* Add taplo-pre-commit to .pre-commit-config.yaml

* taplo-pre-commit auto-fixes

* Use pathlib in cuda_cooperative/setup.py

* CCCL_PYTHON_PATH in cuda_cooperative/setup.py

* Modernize cuda_parallel/pyproject.toml, setup.py

* Use pathlib in cuda_parallel/setup.py

* Add `# TOML lint & format` comment.

* Replace MANIFEST.in with `[tool.setuptools.package-data]` section in pyproject.toml

* Use pathlib in cuda/cccl/include_paths.py

* pre-commit autoupdate (EXCEPT clang-format, which was manually restored)

* Fixes after git merge main

* Resolve warning: AttributeError: '_Reduce' object has no attribute 'build_result'

```
=========================================================================== warnings summary ===========================================================================
tests/test_reduce.py::test_reduce_non_contiguous
  /home/coder/cccl/python/devenv/lib/python3.12/site-packages/_pytest/unraisableexception.py:85: PytestUnraisableExceptionWarning: Exception ignored in: <function _Reduce.__del__ at 0x7bf123139080>

  Traceback (most recent call last):
    File "/home/coder/cccl/python/cuda_parallel/cuda/parallel/experimental/algorithms/reduce.py", line 132, in __del__
      bindings.cccl_device_reduce_cleanup(ctypes.byref(self.build_result))
                                                       ^^^^^^^^^^^^^^^^^
  AttributeError: '_Reduce' object has no attribute 'build_result'

    warnings.warn(pytest.PytestUnraisableExceptionWarning(msg))

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
============================================================= 1 passed, 93 deselected, 1 warning in 0.44s ==============================================================
```

* Move `copy_cccl_headers_to_cuda_cccl_include()` functionality to `class CustomBuildPy`

* Introduce cuda_cooperative/constraints.txt

* Also add cuda_parallel/constraints.txt

* Add `--constraint constraints.txt` in ci/test_python.sh

* Update Copyright dates

* Switch to https://github.com/ComPWA/taplo-pre-commit (the other repo has been archived by the owner on Jul 1, 2024)

For completeness: The other repo took a long time to install into the pre-commit cache; so long it led to timeouts in the CCCL CI.

* Remove unused cuda_parallel jinja2 dependency (noticed by chance).

* Remove constraints.txt files, advertise running `pip install cuda-cccl` first instead.

* Make cuda_cooperative, cuda_parallel testing completely independent.

* Run only test_python.sh [skip-rapids][skip-matx][skip-docs][skip-vdc]

* Try using another runner (because V100 runners seem to be stuck) [skip-rapids][skip-matx][skip-docs][skip-vdc]

* Fix sign-compare warning (NVIDIA#3408) [skip-rapids][skip-matx][skip-docs][skip-vdc]

* Revert "Try using another runner (because V100 runners seem to be stuck) [skip-rapids][skip-matx][skip-docs][skip-vdc]"

This reverts commit ea33a21.

Error message: NVIDIA#3201 (comment)

* Try using A100 runner (because V100 runners still seem to be stuck) [skip-rapids][skip-matx][skip-docs][skip-vdc]

* Also show cuda-cooperative site-packages, cuda-parallel site-packages (after pip install) [skip-rapids][skip-matx][skip-docs][skip-vdc]

* Try using l4 runner (because V100 runners still seem to be stuck) [skip-rapids][skip-matx][skip-docs][skip-vdc]

* Restore original ci/matrix.yaml [skip-rapids]

* Use for loop in test_python.sh to avoid code duplication.

* Run only test_python.sh [skip-rapids][skip-matx][skip-docs][skip-vdc][skip pre-commit.ci]

* Comment out taplo-lint in pre-commit config [skip-rapids][skip-matx][skip-docs][skip-vdc]

* Revert "Run only test_python.sh [skip-rapids][skip-matx][skip-docs][skip-vdc][skip pre-commit.ci]"

This reverts commit ec206fd.

* Implement suggestion by @shwina (NVIDIA#3201 (review))

* Address feedback by @leofang

---------

Co-authored-by: Bernhard Manfred Gruber <[email protected]>

cuda.parallel: Add optional stream argument to reduce_into() (NVIDIA#3348)

* Add optional stream argument to reduce_into()

* Add tests to check for reduce_into() stream behavior

* Move protocol related utils to separate file and rework __cuda_stream__ error messages

* Fix synchronization issue in stream test and add one more invalid stream test case

* Rename cuda stream validation function after removing leading underscore

* Unpack values from __cuda_stream__ instead of indexing

* Fix linting errors

* Handle TypeError when unpacking invalid __cuda_stream__ return

* Use stream to allocate cupy memory in new stream test

Upgrade to actions/deploy-pages@v4 (from v2), as suggested by @leofang (NVIDIA#3434)

Deprecate `cub::{min, max}` and replace internal uses with those from libcu++ (NVIDIA#3419)

* Deprecate `cub::{min, max}` and replace internal uses with those from libcu++

Fixes NVIDIA#3404

Fix CI issues (NVIDIA#3443)

Remove deprecated `cub::min` (NVIDIA#3450)

* Remove deprecated `cuda::{min,max}`

* Drop unused `thrust::remove_cvref` file

Fix typo in builtin (NVIDIA#3451)

Moves agents to `detail::<algorithm_name>` namespace (NVIDIA#3435)

uses unsigned offset types in thrust's scan dispatch (NVIDIA#3436)

Default transform_iterator's copy ctor (NVIDIA#3395)

Fixes: NVIDIA#2393

Turn C++ dialect warning into error (NVIDIA#3453)

Uses unsigned offset types in thrust's sort algorithm calling into `DispatchMergeSort` (NVIDIA#3437)

* uses thrust's dynamic dispatch for merge_sort

* [pre-commit.ci] auto code formatting

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

Refactor allocator handling of contiguous_storage (NVIDIA#3050)

Co-authored-by: Michael Schellenberger Costa <[email protected]>

Drop thrust::detail::integer_traits (NVIDIA#3391)

Add cuda::is_floating_point supporting half and bfloat (NVIDIA#3379)

Co-authored-by: Michael Schellenberger Costa <[email protected]>

Improve docs of std headers (NVIDIA#3416)

Drop C++11 and C++14 support for all of cccl (NVIDIA#3417)

* Drop C++11 and C++14 support for all of cccl

---------

Co-authored-by: Bernhard Manfred Gruber <[email protected]>

Deprecate a few CUB macros (NVIDIA#3456)

Deprecate thrust universal iterator categories (NVIDIA#3461)

Fix launch args order (NVIDIA#3465)

Add `--extended-lambda` to the list of removed clangd flags (NVIDIA#3432)

add `_CCCL_HAS_NVFP8` macro (NVIDIA#3429)

Add `_CCCL_BUILTIN_PREFETCH` (NVIDIA#3433)

Drop universal iterator categories (NVIDIA#3474)

Ensure that headers in `<cuda/*>` can be build with a C++ only compiler (NVIDIA#3472)

Specialize __is_extended_floating_point for FP8 types (NVIDIA#3470)

Also ensure that we actually can enable FP8 due to FP16 and BF16 requirements

Co-authored-by: Michael Schellenberger Costa <[email protected]>

Moves CUB kernel entry points to a detail namespace (NVIDIA#3468)

* moves emptykernel to detail ns

* second batch

* third batch

* fourth batch

* fixes cuda parallel

* concatenates nested namespaces

Deprecate block/warp algo specializations (NVIDIA#3455)

Fixes: NVIDIA#3409

Refactor CUB's util_debug (NVIDIA#3345)
davebayer pushed a commit to davebayer/cccl that referenced this pull request Jan 22, 2025
Also ensure that we actually can enable FP8 due to FP16 and BF16 requirements

Co-authored-by: Michael Schellenberger Costa <[email protected]>
miscco added a commit that referenced this pull request Jan 22, 2025
* add `_CCCL_HAS_NVFP8` macro (#3429)

* Add cuda::is_floating_point supporting half and bfloat (#3379)

Co-authored-by: Michael Schellenberger Costa <[email protected]>

* Specialize __is_extended_floating_point for FP8 types (#3470)

Also ensure that we actually can enable FP8 due to FP16 and BF16 requirements

Co-authored-by: Michael Schellenberger Costa <[email protected]>

---------

Co-authored-by: Federico Busato <[email protected]>
Co-authored-by: Michael Schellenberger Costa <[email protected]>
davebayer pushed a commit to davebayer/cccl that referenced this pull request Jan 23, 2025
Also ensure that we actually can enable FP8 due to FP16 and BF16 requirements

Co-authored-by: Michael Schellenberger Costa <[email protected]>
davebayer added a commit to davebayer/cccl that referenced this pull request Jan 23, 2025
Cleanup util_arch (NVIDIA#2773)

Improves `DeviceSegmentedSort` test run time for large number of items and segments (NVIDIA#3246)

* fixes segment offset generation

* switches to analytical verification

* switches to analytical verification for pairs

* fixes spelling

* adds tests for large number of segments

* fixes narrowing conversion in tests

* addresses review comments

* fixes includes

Adds support for large number of items and large number of segments to `DeviceSegmentedSort` (NVIDIA#3308)

* fixes segment offset generation

* switches to analytical verification

* switches to analytical verification for pairs

* addresses review comments

* introduces segment offset type

* adds tests for large number of segments

* adds support for large number of segments

* drops segment offset type

* fixes thrust namespace

* removes about-to-be-deprecated cub iterators

* no exec specifier on defaulted ctor

* fixes gcc7 linker error

* uses local_segment_index_t throughout

* determine offset type based on type returned by segment iterator begin/end iterators

* minor style improvements

cuda.parallel: Support structured types as algorithm inputs (NVIDIA#3218)

* Introduce gpu_struct decorator and typing

* Enable `reduce` to accept arrays of structs as inputs

* Add test for reducing arrays-of-struct

* Update documentation

* Use a numpy array rather than ctypes object

* Change zeros -> empty for output array and temp storage

* Add a TODO for typing GpuStruct

* Documentation updates

* Remove test_reduce_struct_type from test_reduce.py

* Revert to `to_cccl_value()` accepting ndarray + GpuStruct

* Bump copyrights

---------

Co-authored-by: Ashwin Srinath <[email protected]>

Deprecate thrust::async (NVIDIA#3324)

Fixes: NVIDIA#100

Review/Deprecate CUB `util.ptx` for CCCL 2.x (NVIDIA#3342)

Deprecate thrust::numeric_limits (NVIDIA#3366)

Upgrade to Catch2 3.8  (NVIDIA#3310)

Fixes: NVIDIA#1724

Fix sign-compare warning (NVIDIA#3408)

Implement more cmath functions to be usable on host and device (NVIDIA#3382)

* Implement more cmath functions to be usable on host and device

* Implement math roots functions

* Implement exponential functions

Redefine and deprecate thrust::remove_cvref (NVIDIA#3394)

* Redefine and deprecate thrust::remove_cvref

Co-authored-by: Michael Schellenberger Costa <[email protected]>

cuda.parallel: Add optional stream argument to reduce_into() (NVIDIA#3348)

* Add optional stream argument to reduce_into()

* Add tests to check for reduce_into() stream behavior

* Move protocol related utils to separate file and rework __cuda_stream__ error messages

* Fix synchronization issue in stream test and add one more invalid stream test case

* Rename cuda stream validation function after removing leading underscore

* Unpack values from __cuda_stream__ instead of indexing

* Fix linting errors

* Handle TypeError when unpacking invalid __cuda_stream__ return

* Use stream to allocate cupy memory in new stream test

Deprecate `cub::{min, max}` and replace internal uses with those from libcu++ (NVIDIA#3419)

* Deprecate `cub::{min, max}` and replace internal uses with those from libcu++

Fixes NVIDIA#3404

Remove deprecated `cub::min` (NVIDIA#3450)

* Remove deprecated `cuda::{min,max}`

* Drop unused `thrust::remove_cvref` file

Fix typo in builtin (NVIDIA#3451)

Moves agents to `detail::<algorithm_name>` namespace (NVIDIA#3435)

Drop thrust::detail::integer_traits (NVIDIA#3391)

Add cuda::is_floating_point supporting half and bfloat (NVIDIA#3379)

Co-authored-by: Michael Schellenberger Costa <[email protected]>

add `_CCCL_HAS_NVFP8` macro (NVIDIA#3429)

Specialize __is_extended_floating_point for FP8 types (NVIDIA#3470)

Also ensure that we actually can enable FP8 due to FP16 and BF16 requirements

Co-authored-by: Michael Schellenberger Costa <[email protected]>

Moves CUB kernel entry points to a detail namespace (NVIDIA#3468)

* moves emptykernel to detail ns

* second batch

* third batch

* fourth batch

* fixes cuda parallel

* concatenates nested namespaces

Deprecate block/warp algo specializations (NVIDIA#3455)

Fixes: NVIDIA#3409

fix documentation
davebayer pushed a commit to davebayer/cccl that referenced this pull request Jan 29, 2025
Also ensure that we actually can enable FP8 due to FP16 and BF16 requirements

Co-authored-by: Michael Schellenberger Costa <[email protected]>
Labels: None yet
Projects: Archived in project

Development
Successfully merging this pull request may close these issues:
Add cuda::is_floating_point support for FP8 types

2 participants