2.9.0 (2022-04-21)
- First layer Convolution kernels specialized for small channel counts and reduced alignment
  - Few channels specialization for reduced alignment capabilities
  - Fixed channels further specialized when channel count perfectly matches the access vector size
  - Unit tests
  - Python-based instance emitter in the CUTLASS Library and support in the Profiler
- BLAS3 operators accelerated by Tensor Cores
- CUTLASS Python demonstrating JIT compilation of CUTLASS kernels and a Python-based runtime using CUDA Python
  - Python-based runtime interoperable with existing emitters
- GEMM + Softmax example
- Gather and Scatter Fusion with GEMM: gathers inputs and scatters outputs based on index vectors within the same GEMM kernel
  - It can select random rows in a row-major matrix.
  - It can select random columns in a column-major matrix.
- Back-to-back GEMM/CONV fully supports buffering the first GEMM/CONV's results in shared memory for the second to use, which eliminates register spills when the tile size is large
  - Supported kernels: GEMM and CONV.
  - Supported types: fp16 and int8.
  - Supported architectures: Turing and Ampere.
- Transposed Convolution (a.k.a. Deconvolution) support that reuses the Dgrad implementation
- Utility functions that can pad NHWC and convert between NCHW and NHWC.
- Small alignment implicit gemm support for Fprop/Dgrad/Wgrad so that padding is no longer mandated to use tensor cores in these kernels.
- Epilogue enhancements:
  - Eliminate bank conflicts in int8 tensor core kernels.
  - Half2 usage if epilogue compute type is fp16.
  - More activation functions: Silu, Hardswish.
  - New elementwise fusion pattern for residual block.
- Parallel GEMM split-K support in the CUTLASS profiler
- Optimal performance using CUDA 11.6u2
- Updates and bugfixes from the community (thanks!)
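The gather/scatter GEMM fusion listed above can be illustrated with a naive host-side reference for its semantics (a conceptual sketch only; `gather_gemm_scatter` is a hypothetical helper name, not the CUTLASS API):

```cpp
#include <cstddef>
#include <vector>

// Reference semantics of fused gather + GEMM + scatter (row-major):
//   D[idx_d[i], :] = sum_k A[idx_a[i], k] * B[k, :]
// idx_a gathers rows of A; idx_d scatters the result rows into D.
std::vector<float> gather_gemm_scatter(
    const std::vector<float>& A, const std::vector<float>& B,
    const std::vector<int>& idx_a, const std::vector<int>& idx_d,
    std::size_t K, std::size_t N, std::size_t rows_d) {
  std::vector<float> D(rows_d * N, 0.0f);
  for (std::size_t i = 0; i < idx_a.size(); ++i) {
    for (std::size_t n = 0; n < N; ++n) {
      float acc = 0.0f;
      for (std::size_t k = 0; k < K; ++k) {
        acc += A[idx_a[i] * K + k] * B[k * N + n];  // gather a row of A
      }
      D[idx_d[i] * N + n] = acc;                    // scatter a row of D
    }
  }
  return D;
}
```

The fused kernel performs both index indirections inside the GEMM mainloop and epilogue, avoiding separate gather/scatter passes over global memory.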
2.8.0 (2021-11-19)
- TF32x3: emulated single-precision using Tensor Cores
  - 45+ TFLOPs on NVIDIA A100
  - GEMM SDK example (real)
  - COMPLEX GEMM SDK example (complex)
  - Implicit GEMM Convolution SDK example
- Mainloop fusion for Convolution: convolution with fused per-channel scale-bias-relu
- Grouped GEMM: similar to batched GEMM with distinct problem size per group
  - SDK example with performance comparison with Batched Strided GEMM
  - `cutlass::gemm::device::GemmGrouped`
- Implicit GEMM Convolution fusion supports staging the first convolution's output accumulator in shared memory on Turing. This allows more flexible warp tile sizes and less register pressure.
- Optimal performance using CUDA 11.5
- Updates from the community (thanks!)
- Deprecation announcement: CUTLASS plans to deprecate the following:
  - Maxwell and Pascal GPU architectures
  - Ubuntu 16.04
  - CUDA 10.2
2.7.0 (2021-09-24)
- Mainloop fusion for GEMM: summation over A or B
- Strided DGRAD (optimized iterators)
- Half-precision GELU_taylor activation functions
  - Use these when accumulation and epilogue compute types are all `cutlass::half_t`
- Tuning and bug fixes to fused GEMM + GEMM example
- Support for smaller than 128b aligned Convolutions: see examples
- Caching of results to accelerate Convolution unit tests
  - Can be enabled or disabled by running `cmake .. -DCUTLASS_TEST_ENABLE_CACHED_RESULTS=OFF`
- Corrections and bug fixes reported by the CUTLASS community
  - Thank you for filing these issues!
2.6.1 (2021-09-03)
- Arbitrary padding and striding for CUTLASS Strided DGRAD Convolution operator (Analytic Iterators)
- Tuning for GEMMs fused with partial reductions
- Corrections and bug fixes reported by the CUTLASS community
  - Thank you for filing these issues!
2.6.0 (2021-07-22)
- Optimal performance when compiled with the CUDA 11.4 Toolkit
  - Adopt the new L2 prefetch feature in `cp.async` and global load
- Fused operators with GEMM and Convolution
- 64b tensor strides and leading dimensions support for GEMMs
- Affine rank=2 matrix layouts
  - Row stride and column stride for matrices using `cutlass::layout::AffineRank2`
  - Support FP64 tensor core and SIMT GEMM.
- Batched GEMV preview implementation
- New strided Dgrad implementation
  - Accelerates over the previous implementation by cutting down redundant math by 4x
  - Support using the new `Dy` and `w` analytic iterators and the existing `cutlass::conv::device::ImplicitGemmConvolution` interface
- Quaternion-valued GEMM and Convolution in single- and double-precision (targeting CUDA Cores)
  - Updates to `quaternion.h` and `functional.h`
  - SDK Example for GEMM and Convolution
  - Unit tests for GEMM and Convolution
- Many improvements to the epilogue
  - Provide an option to not fully unroll the epilogue to reduce the code size and improve the performance when using complicated elementwise operations
  - Performance improvement for FP16 tensor core kernels
  - Bug fixes
- Enhanced Clang support: the combination of Clang 13 and CUDA 11.4 can build and run kernels on Pascal and Ampere architectures.
- Updated minimum CUDA Toolkit requirement to 10.2
  - CUDA 11.4 Toolkit recommended
- Corrections and bug fixes reported by the CUTLASS community
  - Thank you for filing these issues!
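The affine rank-2 layout introduced in 2.6.0 generalizes row-major and column-major by giving each rank its own stride. A minimal sketch of the indexing rule (a hypothetical struct mirroring the idea behind `cutlass::layout::AffineRank2`, not its actual interface):

```cpp
#include <cstddef>

// Affine rank-2 layout: the linear offset of element (row, col) is
//   offset = row * stride_row + col * stride_col.
// Row-major over N columns is {N, 1}; column-major over M rows is {1, M}.
struct AffineRank2 {
  std::ptrdiff_t stride_row;
  std::ptrdiff_t stride_col;
  std::ptrdiff_t offset(std::ptrdiff_t row, std::ptrdiff_t col) const {
    return row * stride_row + col * stride_col;
  }
};
```

Because both strides are free parameters, one kernel template can address row-major, column-major, and arbitrarily strided (e.g. sliced or interleaved) matrices through the same layout object.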
2.5.0 (2021-02-26)
- Tensor reductions
  - m-to-n reductions of tensors with affine layout
  - Specializations for reductions including contiguous dimension
  - Specializations for reductions excluding contiguous dimension
  - Custom reduction functors such as `cutlass::logical_and`
  - Large tensor support, up to 2^63 elements (however, each dimension is limited to an extent of 2^31)
- Optimizations for 3-D convolution
  - Optimized tile iterators using a precomputed delta table for 3-D convolution
  - Full coverage of forward and backward passes for 3-D convolution
- Fused Convolution+Convolution example
- Corrections and bug fixes reported by the CUTLASS community
  - Thank you for filing these issues!
2.4.0 (2020-11-19)
- Implicit GEMM convolution kernels supporting CUDA and Tensor Cores on NVIDIA GPUs
  - Operators: forward (Fprop), backward data gradient (Dgrad), and backward weight gradient (Wgrad) convolution
  - Data types: FP32, complex, Tensor Float 32 (TF32), BFloat16 (BF16), Float16, Int4, Int8, Int32
  - Spatial dimensions: 1-D, 2-D, and 3-D
  - Layouts: NHWC, NCxHWx
- Implicit GEMM convolution components:
  - Global memory iterators supporting Fprop, Dgrad, and Wgrad
  - `MmaMultistage` for implicit GEMM convolution for NVIDIA Ampere architecture
  - `MmaPipelined` for implicit GEMM convolution for NVIDIA Volta and Turing architectures
  - Documentation describing the Implicit GEMM Convolution algorithm and implementation
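The implicit GEMM algorithm that the 2.4.0 documentation describes treats Fprop convolution as a GEMM whose A operand is formed on the fly: each output element (n, p, q, k) is a dot product over the flattened (r, s, c) dimension, i.e. a GEMM of a virtual (N*P*Q) x (R*S*C) activation matrix with an (R*S*C) x K filter matrix. A naive host sketch of these semantics, assuming unit stride, no padding, and NHWC/KRSC layouts (illustration only, not the CUTLASS kernel):

```cpp
#include <cstddef>
#include <vector>

// Fprop convolution expressed with the implicit GEMM loop nest:
// (n, p, q) indexes the GEMM M dimension, k the N dimension, and
// the flattened (r, s, c) triple the GEMM K (reduction) dimension.
std::vector<float> conv2d_as_gemm(
    const std::vector<float>& x,  // activations, N*H*W*C in NHWC
    const std::vector<float>& w,  // filters,     K*R*S*C in KRSC
    int N, int H, int W, int C, int K, int R, int S) {
  int P = H - R + 1, Q = W - S + 1;  // unit stride, no padding
  std::vector<float> y(N * P * Q * K, 0.0f);
  for (int n = 0; n < N; ++n)
    for (int p = 0; p < P; ++p)
      for (int q = 0; q < Q; ++q)
        for (int k = 0; k < K; ++k) {
          float acc = 0.0f;
          // Implicit GEMM reduction over the flattened (r, s, c) axis.
          for (int r = 0; r < R; ++r)
            for (int s = 0; s < S; ++s)
              for (int c = 0; c < C; ++c)
                acc += x[((n * H + p + r) * W + q + s) * C + c]
                     * w[((k * R + r) * S + s) * C + c];
          y[((n * P + p) * Q + q) * K + k] = acc;
        }
  return y;
}
```

The global memory iterators listed above compute exactly these (r, s, c) -> address mappings per tile, so no im2col buffer is ever materialized.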
2.3.0 (2020-09-23)
- NVIDIA Ampere Architecture features
  - Sparse Tensor Core GEMM kernels:
    - Direct access to Sparse Tensor Cores and maximum performance via `mma.sp.sync`
  - Fast SGEMM targeting GeForce RTX 30-series CUDA Cores
- Minor Features:
  - Activation functions such as GeLU and Sigmoid
  - Small matrix and quaternion template classes in device code
  - Floating-point constants
- NVIDIA Ampere GPU Architecture examples and documentation:
  - Tensor Float 32
  - Sparse Tensor Cores
  - Documentation added on the CUTLASS efficient row-major epilogue
2.2.0 (2020-06-08)
- NVIDIA Ampere Architecture features
  - Fast Tensor Core operations:
    - Maximum performance via `mma.sync`
    - Tensor Float 32, BFloat16, and double-precision data types
    - Mixed integer data types (int8, int4, bin1)
  - Asynchronous copy for deep software pipelines via `cp.async`
  - Described in GTC 2020 Webinar (SR 21745) (free registration required)
- Features:
  - SDK examples showing GEMM fused with bias+relu and fused GEMM+GEMM
  - Complex-valued GEMMs targeting NVIDIA Ampere Tensor Cores in double-precision and Tensor Float 32
  - Gaussian complex GEMMs using the 3m complex multiply algorithm
  - Universal GEMM kernel supporting two batch modes and two algorithms for parallel reductions
- Policy updates:
  - CUDA 11 Toolkit needed to enable NVIDIA Ampere Architecture features
  - Disabled F16C by default for compatibility; enable on the cmake command line with `-DCUTLASS_ENABLE_F16C=ON`
2.1.0 (2020-04-06)
- BLAS-style host-side API added to CUTLASS Library
  - API to launch compiled kernel instances for GEMM and planar complex GEMM
- Planar Complex GEMM kernels targeting Volta and Turing Tensor Cores
  - Computes complex matrix products on matrices stored as disjoint real and imaginary parts
  - SDK Examples of Planar Complex GEMMs
- Minor enhancements and bug fixes
2.0.0 (2019-11-19)
- Substantially refactored for
  - Better performance, particularly for native Turing Tensor Cores
  - Robust and durable templates spanning the design space
  - Encapsulated functionality embodying modern C++11 programming techniques
  - Optimized containers and data types for efficient, generic, portable device code
- Updates to:
  - Native Turing Tensor Cores
    - Efficient GEMM kernels targeting Turing Tensor Cores
    - Mixed-precision floating point, 8-bit integer, 4-bit integer, and binarized operands
  - Coverage of existing CUTLASS functionality
    - GEMM kernels targeting CUDA and Tensor Cores in NVIDIA GPUs
    - Volta Tensor Cores through native mma.sync and through the WMMA API
    - Optimizations such as parallel reductions, threadblock rasterization, and intra-threadblock reductions
    - Batched GEMM operations
    - Complex-valued GEMMs
- Note: a host compiler supporting C++11 or greater is required.
1.3.2 (2019-07-09)
- Performance improvement for Volta Tensor Cores TN and TT layouts.
1.3.1 (2019-04-09)
- Corrected NVRTC unit tests.
1.3.0 (2019-03-20)
- Efficient GEMM kernel targeting Volta Tensor Cores via the `mma.sync` instruction added in CUDA 10.1.
1.2.0 (2018-10-26)
- Parallelized reductions across threadblocks ("Split-K")
- Improved IGEMM performance
- Batched strided WMMA GEMMs
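The split-K idea introduced in this release can be sketched with a single dot product: partition the K dimension into slices, accumulate each slice independently (one threadblock per slice in the real kernel), then reduce the partials. A minimal host illustration; `splitk_dot` is a hypothetical name for exposition:

```cpp
#include <cstddef>
#include <vector>

// Split-K reduction sketch: each of `splits` slices of the K dimension
// produces its own partial accumulator, and a final pass sums them
// (performed as a parallel reduction across threadblocks on the GPU).
float splitk_dot(const std::vector<float>& a, const std::vector<float>& b,
                 int splits) {
  int K = static_cast<int>(a.size());
  std::vector<float> partial(splits, 0.0f);
  for (int s = 0; s < splits; ++s) {
    int begin = s * K / splits, end = (s + 1) * K / splits;
    for (int k = begin; k < end; ++k) partial[s] += a[k] * b[k];
  }
  float sum = 0.0f;  // the cross-slice reduction
  for (float p : partial) sum += p;
  return sum;
}
```

Splitting K adds parallelism for problems whose M and N dimensions are too small to fill the GPU, at the cost of the extra reduction step.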
1.1.0 (2018-09-19)
- Turing Features
  - WMMA GEMM targeting TensorCores - INT8, INT4, 1-bit
- Batched Strided GEMM
- Threadblock rasterization strategies
  - Improved performance for adverse problem sizes and data layouts
- Extended CUTLASS Core components
  - Tensor views support arbitrary matrix and tensor layouts
  - Zip iterators for structuring multiple data streams
- Enhanced CUTLASS utilities
  - Reference code for tensor operations in host and device code
  - Added HostMatrix<> for simplified matrix creation
- Examples
  - Basic GEMM, tensor views, CUTLASS utilities, batched GEMM, WMMA GEMM
1.0.1 (2018-06-11)
- Intra-threadblock reduction added for small threadblock tile sizes
  - sgemm_64x128x16, sgemm_128x128x16, sgemm_128x64x16, sgemm_128x32x16, sgemm_64x64x16, sgemm_64x32x16
  - igemm_32x32x128
- GEMM K residue handled during prologue prior to mainloop
- Replaced Google Test copy with submodule. Use `git submodule update --init --recursive`
1.0.0 (2018-05-16)
- Substantial rewrite to accommodate new architecture
- Kernels: SGEMM, DGEMM, IGEMM, HGEMM, WMMA GEMM
- Unit and performance tests
0.0.1 (2017-12-04)
- Initial release
Copyright (c) 2017 - 2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved. SPDX-License-Identifier: BSD-3-Clause
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.