Commit ce95739

Update bad links (NVIDIA#2080)

* fix broken links
* revert repo.toml
* linkchecker fixes
* fix .cuh errors
* lint

Parent: d4f928e
21 files changed: +47 −51 lines
cub/CONTRIBUTING.md

+1-6
@@ -17,7 +17,7 @@ changes. CUB's tests and examples can be built by configuring Thrust with the
 CMake option `THRUST_INCLUDE_CUB_CMAKE=ON`.
 
 This process is described in more detail in Thrust's
-[CONTRIBUTING.md](https://nvidia.github.io/thrust/contributing.html).
+[CONTRIBUTING.md](https://nvidia.github.io/cccl/thrust/contributing.html).
 
 The CMake options in the following section may be used to customize CUB's build
 process. Note that some of these are controlled by Thrust for compatibility and
@@ -63,8 +63,3 @@ The configuration options for CUB are:
 - Enable separable compilation on all targets that are agnostic of RDC.
 - Targets that explicitly require RDC to be enabled or disabled will ignore this setting.
 - Default is `OFF`.
-
-# Development Model
-
-CUB follows the same development model as Thrust, described
-[here](https://nvidia.github.io/thrust/releases/versioning.html).

cub/cub/block/block_discontinuity.cuh

+1-1
@@ -28,7 +28,7 @@
 
 /**
  * @file
- * The cub::BlockDiscontinuity class provides [<em>collective</em>](index.html#sec0) methods for
+ * The cub::BlockDiscontinuity class provides [<em>collective</em>](../index.html#sec0) methods for
  * flagging discontinuities within an ordered set of items partitioned across a CUDA thread block.
  */

cub/cub/block/block_histogram.cuh

+1-1
@@ -28,7 +28,7 @@
 
 /**
  * @file
- * The cub::BlockHistogram class provides [<em>collective</em>](index.html#sec0) methods for
+ * The cub::BlockHistogram class provides [<em>collective</em>](../index.html#sec0) methods for
  * constructing block-wide histograms from data samples partitioned across a CUDA thread block.
  */

cub/cub/block/block_merge_sort.cuh

+2-3
@@ -721,10 +721,9 @@ private:
  * `{ [0,1,2,3], [4,5,6,7], [8,9,10,11], ..., [508,509,510,511] }`.
  *
  * @par Re-using dynamically allocating shared memory
- * The following example under the examples/block folder illustrates usage of
+ * The ``block/example_block_reduce_dyn_smem.cu`` example illustrates usage of
  * dynamically shared memory with BlockReduce and how to re-purpose
- * the same memory region:
- * <a href="../../examples/block/example_block_reduce_dyn_smem.cu">example_block_reduce_dyn_smem.cu</a>
+ * the same memory region.
  *
  * This example can be easily adapted to the storage required by BlockMergeSort.
  */

cub/cub/block/block_radix_sort.cuh

+6-8
@@ -28,7 +28,7 @@
 
 /**
  * @file
- * The cub::BlockRadixSort class provides [<em>collective</em>](index.html#sec0) methods for radix
+ * The cub::BlockRadixSort class provides [<em>collective</em>](../index.html#sec0) methods for radix
  * sorting of items partitioned across a CUDA thread block.
  */
@@ -142,7 +142,7 @@ CUB_NAMESPACE_BEGIN
 //! @blockcollective{BlockRadixSort}
 //!
 //! The code snippet below illustrates a sort of 512 integer keys that
-//! are partitioned in a [<em>blocked arrangement</em>](index.html#sec5sec3) across 128 threads
+//! are partitioned in a [<em>blocked arrangement</em>](../index.html#sec5sec3) across 128 threads
 //! where each thread owns 4 consecutive items.
 //!
 //! .. tab-set-code::
@@ -199,10 +199,8 @@ CUB_NAMESPACE_BEGIN
 //! Re-using dynamically allocating shared memory
 //! --------------------------------------------------
 //!
-//! The following example under the examples/block folder illustrates usage of
-//! dynamically shared memory with BlockReduce and how to re-purpose
-//! the same memory region:
-//! <a href="../../examples/block/example_block_reduce_dyn_smem.cu">example_block_reduce_dyn_smem.cu</a>
+//! The ``block/example_block_reduce_dyn_smem.cu`` example illustrates usage of dynamically shared memory with
+//! BlockReduce and how to re-purpose the same memory region.
 //!
 //! This example can be easily adapted to the storage required by BlockRadixSort.
 //! @endrst
@@ -986,7 +984,7 @@ public:
 //! +++++++
 //!
 //! The code snippet below illustrates a sort of 512 integer keys that
-//! are partitioned in a [<em>blocked arrangement</em>](index.html#sec5sec3) across 128 threads
+//! are partitioned in a [<em>blocked arrangement</em>](../index.html#sec5sec3) across 128 threads
 //! where each thread owns 4 consecutive keys.
 //!
 //! .. code-block:: c++
@@ -1590,7 +1588,7 @@ public:
 //! +++++++
 //!
 //! The code snippet below illustrates a sort of 512 integer keys and values that
-//! are initially partitioned in a [<em>blocked arrangement</em>](index.html#sec5sec3) across 128
+//! are initially partitioned in a [<em>blocked arrangement</em>](../index.html#sec5sec3) across 128
 //! threads where each thread owns 4 consecutive pairs. The final partitioning is striped.
 //!
 //! .. code-block:: c++

cub/cub/block/block_scan.cuh

+3-3
@@ -1011,7 +1011,7 @@ public:
 //! +++++++
 //!
 //! The code snippet below illustrates an exclusive prefix max scan of 512 integer
-//! items that are partitioned in a [<em>blocked arrangement</em>](index.html#sec5sec3)
+//! items that are partitioned in a [<em>blocked arrangement</em>](../index.html#sec5sec3)
 //! across 128 threads where each thread owns 4 consecutive items.
 //!
 //! .. code-block:: c++
@@ -2180,7 +2180,7 @@ public:
 //! +++++++
 //!
 //! The code snippet below illustrates an inclusive prefix max scan of 512 integer items that
-//! are partitioned in a [<em>blocked arrangement</em>](index.html#sec5sec3) across 128 threads
+//! are partitioned in a [<em>blocked arrangement</em>](../index.html#sec5sec3) across 128 threads
 //! where each thread owns 4 consecutive items.
 //!
 //! .. code-block:: c++
@@ -2314,7 +2314,7 @@ public:
 //! +++++++
 //!
 //! The code snippet below illustrates an inclusive prefix max scan of 512 integer items that
-//! are partitioned in a [<em>blocked arrangement</em>](index.html#sec5sec3) across 128 threads
+//! are partitioned in a [<em>blocked arrangement</em>](../index.html#sec5sec3) across 128 threads
 //! where each thread owns 4 consecutive items.
 //!
 //! .. code-block:: c++

cub/cub/device/device_spmv.cuh

+2-1
@@ -67,7 +67,8 @@ CUB_NAMESPACE_BEGIN
 //!
 //! - ``A`` is an ``m * n`` sparse matrix whose non-zero structure is specified in
 //!   `compressed-storage-row (CSR) format
-//!   <http://en.wikipedia.org/wiki/Sparse_matrix#Compressed_row_Storage_.28CRS_or_CSR.29>`_ (i.e., three arrays:
+//!   <https://en.wikipedia.org/wiki/Sparse_matrix#Compressed_sparse_row_(CSR,_CRS_or_Yale_format)>`_ (i.e., three
+//!   arrays:
 //!   ``values``, ``row_offsets``, and ``column_indices``)
 //! - ``x`` and ``y`` are dense vectors
 //!
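The CSR layout named in this hunk (three arrays: ``values``, ``row_offsets``, ``column_indices``) is easy to sketch on the host. This is a minimal illustration of the format, not CUB's SpMV implementation; the function name is ours:

```cpp
#include <vector>

// y = A * x for an m x n sparse matrix A stored in CSR form.
// row_offsets has m+1 entries; row r's non-zeros occupy
// values[row_offsets[r] .. row_offsets[r+1]) with matching column_indices.
std::vector<double> csr_spmv(const std::vector<double>& values,
                             const std::vector<int>& row_offsets,
                             const std::vector<int>& column_indices,
                             const std::vector<double>& x)
{
  const int m = static_cast<int>(row_offsets.size()) - 1;
  std::vector<double> y(m, 0.0);
  for (int row = 0; row < m; ++row)
  {
    for (int nz = row_offsets[row]; nz < row_offsets[row + 1]; ++nz)
    {
      y[row] += values[nz] * x[column_indices[nz]];
    }
  }
  return y;
}
```

For the 2x3 matrix {{1,0,2},{0,3,0}}, the CSR arrays are values = {1,2,3}, row_offsets = {0,2,3}, column_indices = {0,2,1}.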

cub/cub/warp/warp_exchange.cuh

+5-5
@@ -27,7 +27,7 @@
 
 /**
  * @file
- * The cub::WarpExchange class provides [<em>collective</em>](index.html#sec0)
+ * The cub::WarpExchange class provides [<em>collective</em>](../index.html#sec0)
  * methods for rearranging data partitioned across a CUDA warp.
  */
@@ -68,7 +68,7 @@ using InternalWarpExchangeImpl =
 } // namespace detail
 
 /**
- * @brief The WarpExchange class provides [<em>collective</em>](index.html#sec0)
+ * @brief The WarpExchange class provides [<em>collective</em>](../index.html#sec0)
  * methods for rearranging data partitioned across a CUDA warp.
  *
  * @tparam T
@@ -94,10 +94,10 @@ using InternalWarpExchangeImpl =
  *   partitioning of items across threads (where consecutive items belong to a
  *   single thread).
  * - WarpExchange supports the following types of data exchanges:
- *   - Transposing between [<em>blocked</em>](index.html#sec5sec3) and
- *     [<em>striped</em>](index.html#sec5sec3) arrangements
+ *   - Transposing between [<em>blocked</em>](../index.html#sec5sec3) and
+ *     [<em>striped</em>](../index.html#sec5sec3) arrangements
  *   - Scattering ranked items to a
- *     [<em>striped arrangement</em>](index.html#sec5sec3)
+ *     [<em>striped arrangement</em>](../index.html#sec5sec3)
 *
 * @par A Simple Example
 * @par
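The blocked and striped arrangements this hunk links to differ only in which item indices a thread owns. A small host-side sketch of the two index mappings (function names are illustrative, not part of CUB):

```cpp
// Index of the i-th item owned by thread t, given nt threads
// and ipt items per thread.

// Blocked: thread t owns ipt consecutive items.
int blocked_index(int t, int i, int ipt) { return t * ipt + i; }

// Striped: items are dealt round-robin, so a thread's items are nt apart.
int striped_index(int t, int i, int nt) { return i * nt + t; }
```

With 4 threads and 2 items per thread, thread 1 owns items {2, 3} in the blocked arrangement but {1, 5} in the striped one; WarpExchange transposes between the two.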

docs/cub/index.rst

+1-1
@@ -435,7 +435,7 @@ How is CUB different than Thrust and Modern GPU?
 CUB and Thrust
 --------------------------------------------------
 
-CUB and `Thrust <http://thrust.github.io/>`_ share some
+CUB and `Thrust <https://nvidia.github.io/cccl/thrust/>`_ share some
 similarities in that they both provide similar device-wide primitives for CUDA.
 However, they target different abstraction layers for parallel computing.
 Thrust abstractions are agnostic of any particular parallel framework (e.g.,

docs/libcudacxx/extended_api/memory_access_properties/access_property.rst

+1-1
@@ -258,7 +258,7 @@ Mapping of access properties to NVVM-IR and the PTX ISA
 
 When ``cuda::access_property`` is applied to memory operation, it
 sometimes matches with some of the cache eviction priorities and cache
-hints introduced in the `PTX ISA Version 7.4 <https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#ptx-isa-version-7-4>`_.
+hints introduced in the `PTX ISA Version 7.4 <https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#changes-in-ptx-isa-version-7-4>`_.
 See `Cache Eviction Priority Hints <https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#cache-eviction-priority-hints>`_
 
 - ``global``: ``evict_unchanged``

docs/libcudacxx/extended_api/memory_model.rst

+1-1
@@ -78,7 +78,7 @@ An atomic operation is atomic at the scope it specifies if:
 .. note::
    If `hostNativeAtomicSupported` is `0`, atomic load or store operations at system scope that affect a
    naturally-aligned 16-byte wide object in
-   `unified memory <https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#unified-memory>`__ or
+   `unified memory <https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#um-unified-memory-programming-hd>`__ or
    `mapped memory <https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#mapped-memory>`__ require system
    support. NVIDIA is not aware of any system that lacks this support and there is no CUDA API query available to
    detect such systems.

docs/libcudacxx/extended_api/synchronization_primitives.rst

+1-1
@@ -61,7 +61,7 @@ Synchronization Primitives
      primitive for constraining concurrent access
    - libcu++ 1.1.0 / CCCL 2.0.0 / CUDA 11.0
  * - :ref:`cuda::binary_semaphore <libcudacxx-extended-api-synchronization-counting-semaphore>`
-   - System wide `std::binary_semaphore <https://en.cppreference.com/w/cpp/thread/binary_semaphore>`_
+   - System wide `std::binary_semaphore <https://en.cppreference.com/w/cpp/thread/counting_semaphore>`_
      primitive for mutual exclusion
    - libcu++ 1.1.0 / CCCL 2.0.0 / CUDA 11.0

docs/libcudacxx/ptx.rst

+3-3
@@ -480,17 +480,17 @@ Instructions by section
      - No
    * - `wmma.store <https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#warp-level-matrix-store-instruction-wmma-store>`__
      - No
-   * - `wmma.mma <https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#warp-level-matrix-multiply-accumulate-instructions-wmma-mma>`__
+   * - `wmma.mma <https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#warp-level-matrix-instructions-wmma-mma>`__
      - No
-   * - `mma <https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#warp-level-matrix-multiply-accumulate-instructions-mma>`__
+   * - `mma <https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#warp-level-matrix-instructions-mma>`__
      - No
    * - `ldmatrix <https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#warp-level-matrix-load-instruction-ldmatrix>`__
      - No
    * - `stmatrix <https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#warp-level-matrix-store-instruction-stmatrix>`__
      - No
    * - `movmatrix <https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#warp-level-matrix-transpose-instruction-movmatrix>`__
      - No
-   * - `mma.sp <https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#multiply-and-accumulate-instruction-mma-sp>`__
+   * - `mma.sp <https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#warp-level-matrix-instructions-for-sparse-mma>`__
      - No
 
 .. list-table:: `Asynchronous Warpgroup Level Matrix Multiply-Accumulate Instructions <https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#asynchronous-warpgroup-level-matrix-multiply-accumulate-instructions>`__

docs/libcudacxx/releases.rst

+1-1
@@ -1,7 +1,7 @@
 .. _libcudacxx-releases:
 
 Releases
-============
+========
 
 .. toctree::
    :maxdepth: 1

docs/libcudacxx/releases/versioning.rst

+3-2
@@ -149,8 +149,9 @@ that the default ABI version may change in any release. A subset of
 older ABI versions can be used instead by defining
 ``_LIBCUDACXX_CUDA_ABI_VERSION`` to the desired version.
 
-For more information on specific ABI versions, please see the `releases
-section <../releases.md>`_ and `changelog <changelog.md>`_.
+For more information on specific ABI versions, please see the
+:ref:`release section <libcudacxx-releases>` and
+:ref:`changelog <libcudacxx-releases-changelog>`.
 
 A program is ill-formed, no diagnostic required, if it uses two
 different translation units compiled with a different NVIDIA C++

docs/libcudacxx/standard_api/time_library.rst

+1-1
@@ -34,7 +34,7 @@ we use:
 - `GetSystemTimePreciseAsFileTime <https://docs.microsoft.com/en-us/windows/win32/api/sysinfoapi/nf-sysinfoapi-getsystemtimepreciseasfiletime>`_ and
   `GetSystemTimeAsFileTime <https://docs.microsoft.com/en-us/windows/win32/api/sysinfoapi/nf-sysinfoapi-getsystemtimeasfiletime>`_
   for host code on Windows.
-- `clock_gettime(CLOCK_REALTIME, ...) <https://linux.die.net/man/3/clock_gettime>`_ and `gettimeofday <https://linux.die.net/man/2/gettimeofday>`_
+- `clock_gettime(CLOCK_REALTIME, ...) <https://man7.org/linux/man-pages/man3/clock_gettime.3.html>`_ and `gettimeofday <https://man7.org/linux/man-pages/man2/gettimeofday.2.html>`_
   for host code on Linux, Android, and QNX.
 - `PTX's %globaltimer <https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#special-registers-globaltimer>`_ for device code.
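The Linux host path this hunk relinks can be exercised directly. A minimal sketch of reading ``CLOCK_REALTIME`` with ``clock_gettime`` (the helper name is ours, not part of libcu++):

```cpp
#include <ctime>

// Current realtime clock reading in nanoseconds since the Unix epoch,
// via the clock_gettime(CLOCK_REALTIME, ...) call the docs reference.
long long realtime_ns()
{
  timespec ts{};
  clock_gettime(CLOCK_REALTIME, &ts);
  return static_cast<long long>(ts.tv_sec) * 1000000000LL + ts.tv_nsec;
}
```

``gettimeofday`` returns the same clock at microsecond granularity; ``CLOCK_REALTIME`` is the wall clock, so it can jump if the system time is adjusted.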

docs/thrust/cmake_options.rst

+4
@@ -1,3 +1,5 @@
+.. _cmake-options:
+
 CMake Options
 =============
 
@@ -83,6 +85,8 @@ Single Config CMake Options
   - Selects the C++ standard dialect to use. Default is ``14``
     (C++14).
 
+.. _cmake-multi-config-options:
+
 Multi Config CMake Options
 --------------------------

docs/thrust/releases/changelog.rst

+5-6
@@ -223,7 +223,7 @@ Thrust 1.17.0 is the final minor release of the 1.X series. This release
 provides GDB pretty-printers for device vectors/references, a new
 ``unique_count`` algorithm, and an easier way to create tagged Thrust
 iterators. Several documentation fixes are included, which can be found
-on the new Thrust documentation site at https://nvidia.github.io/thrust.
+on the new Thrust documentation site at https://nvidia.github.io/cccl/thrust/.
 We’ll be migrating existing documentation sources to this new location
 over the next few months.
@@ -255,8 +255,7 @@ Other Enhancements
 
 - NVIDIA/thrust#1512: Use CUB to implement ``adjacent_difference``.
 - NVIDIA/thrust#1555: Use CUB to implement ``scan_by_key``.
-- NVIDIA/thrust#1611: Add new doxybook-based Thrust documentation at
-  https://nvidia.github.io/thrust.
+- NVIDIA/thrust#1611: Add new doxybook-based Thrust documentation
 - NVIDIA/thrust#1639: Fixed broken link in documentation. Thanks to
   @jrhemstad for this contribution.
 - NVIDIA/thrust#1644: Increase contrast of search input text in new doc
@@ -792,15 +791,15 @@ New Features
 - NVIDIA/thrust#1159: CMake multi-config support, which allows multiple
   combinations of host and device systems to be built and tested at
   once. More details can be found here:
-  https://github.com/NVIDIA/thrust/blob/main/CONTRIBUTING.md#multi-config-cmake-options
+  :ref:`Multi Config CMake Options <cmake-multi-config-options>`
 - CMake refactoring:
 
   - Added install targets to CMake builds.
   - Added support for CUB tests and examples.
   - Thrust can be added to another CMake project by calling
     ``add_subdirectory`` with the Thrust source root (see
     NVIDIA/thrust#976). An example can be found here:
-    https://github.com/NVIDIA/thrust/blob/main/examples/cmake/add_subdir/CMakeLists.txt
+    https://github.com/NVIDIA/cccl/blob/main/thrust/examples/cmake/add_subdir/CMakeLists.txt
   - CMake < 3.15 is no longer supported.
   - Dialects are now configured through target properties. A new
     ``THRUST_CPP_DIALECT`` option has been added for single config
@@ -831,7 +830,7 @@ Other Enhancements
 ~~~~~~~~~~~~~~~~~~
 
 - Contributor documentation:
-  https://github.com/NVIDIA/thrust/blob/main/CONTRIBUTING.md
+  https://github.com/NVIDIA/cccl/blob/main/CONTRIBUTING.md
 - Code of Conduct:
   https://github.com/NVIDIA/thrust/blob/main/CODE_OF_CONDUCT.md. Thanks
   to Conor Hoekstra for this contribution.

thrust/README.md

+2-2
@@ -123,7 +123,7 @@ git clone --recursive https://github.com/NVIDIA/thrust.git
 
 ## Using Thrust From Your Project
 
-For CMake-based projects, we provide a CMake package for use with `find_package`. See the [CMake README](https://github.com/NVIDIA/cccl/blob/main/docs/thrust/github_pages/setup/cmake_options.md) for more information.
+For CMake-based projects, we provide a CMake package for use with `find_package`. See :ref:`CMake Options <cmake-options>` for more information.
 Thrust can also be added via `add_subdirectory` or tools like the [CMake Package Manager](https://github.com/cpm-cmake/CPM.cmake).
 
 For non-CMake projects, compile with:
@@ -188,7 +188,7 @@ Some parts are distributed under the [Apache License v2.0] and the [Boost Licens
 
 [GitHub]: https://github.com/NVIDIA/cccl/tree/main/thrust
 
-[contributing section]: https://nvidia.github.io/thrust/contributing.html
+[contributing section]: https://nvidia.github.io/cccl/thrust/contributing.html
 
 [CMake build system]: https://cmake.org

thrust/thrust/functional.h

+1-2
@@ -1249,8 +1249,7 @@ _CCCL_SUPPRESS_DEPRECATED_PUSH
  * \param pred The Adaptable Binary Predicate to negate.
  * \return A new object, <tt>npred</tt> such that <tt>npred(x,y)</tt> always returns
  * the same value as <tt>!pred(x,y)</tt>.
- * \tparam Binary Predicate is a model of <a
- * href="https://en.cppreference.com/w/cpp/utility/functional/AdaptableBinaryPredicate">Adaptable Binary Predicate</a>.
+ * \tparam Binary Predicate is a model of an Adaptable Binary Predicate.
  * \see binary_negate
  * \see not1
  */

thrust/thrust/replace.h

+2-2
@@ -54,7 +54,7 @@ THRUST_NAMESPACE_BEGIN
  * \tparam DerivedPolicy The name of the derived execution policy.
  * \tparam ForwardIterator is a model of <a href="https://en.cppreference.com/w/cpp/iterator/forward_iterator">Forward
  * Iterator</a>, and \p ForwardIterator is mutable. \tparam T is a model of <a
- * href="https://en.cppreference.com/w/cpp/named_req/CopyAssignable>Assignable">Assignable</a>, \p T is a model of <a
+ * href="https://en.cppreference.com/w/cpp/named_req/CopyAssignable">Assignable</a>, \p T is a model of <a
  * href="https://en.cppreference.com/w/cpp/concepts/equality_comparable">EqualityComparable</a>, objects of \p T may be
  * compared for equality with objects of \p ForwardIterator's \c value_type, and \p T is convertible to \p
  * ForwardIterator's \c value_type.
@@ -105,7 +105,7 @@ replace(const thrust::detail::execution_policy_base<DerivedPolicy>& exec,
  *
  * \tparam ForwardIterator is a model of <a href="https://en.cppreference.com/w/cpp/iterator/forward_iterator">Forward
  * Iterator</a>, and \p ForwardIterator is mutable. \tparam T is a model of <a
- * href="https://en.cppreference.com/w/cpp/named_req/CopyAssignable>Assignable">Assignable</a>, \p T is a model of <a
+ * href="https://en.cppreference.com/w/cpp/named_req/CopyAssignable">Assignable</a>, \p T is a model of <a
  * href="https://en.cppreference.com/w/cpp/concepts/equality_comparable">EqualityComparable</a>, objects of \p T may be
  * compared for equality with objects of \p ForwardIterator's \c value_type, and \p T is convertible to \p
  * ForwardIterator's \c value_type.
