
What's the best way to compare RArray and CuArray (and wrappers of them) #5

Open
glwagner opened this issue Jan 27, 2025 · 3 comments


glwagner commented Jan 27, 2025

Here's a bit of setup code:

using Oceananigans
using Reactant

arch = GPU() # CPU() to run on CPU
Nx, Ny, Nz = (360, 120, 100) # number of cells
grid = LatitudeLongitudeGrid(arch, size=(Nx, Ny, Nz), halo=(7, 7, 7),
                             longitude=(0, 360), latitude=(-60, 60), z=(-1000, 0))
m = HydrostaticFreeSurfaceModel(; grid, momentum_advection=WENO())

uᵢ(x, y, z) = randn()
set!(m, u=uᵢ, v=uᵢ)

rm = Reactant.to_rarray(m)
ru = rm.velocities.u
u = m.velocities.u

Now I'll compare some things. Comparisons of parent(u) (a CuArray) and parent(ru) work:

julia> maximum(parent(u))
5.342983823921992

julia> maximum(parent(ru))
ConcreteRNumber{Float64}(5.342983823921992)

julia> parent(ru) == parent(u)
true

julia> interior(ru) == interior(u)
true

However, reductions of a SubArray of an RArray fail:

julia> typeof(interior(ru))
SubArray{Float64, 3, ConcreteRArray{Float64, 3}, Tuple{UnitRange{Int64}, UnitRange{Int64}, UnitRange{Int64}}, false}

julia> maximum(interior(ru))
ERROR: Scalar indexing is disallowed.
Invocation of getindex(::ConcreteRArray, ::Vararg{Int, N}) resulted in scalar indexing of a GPU array.
This is typically caused by calling an iterating implementation of a method.
Such implementations *do not* execute on the GPU, but very slowly on the CPU,
and therefore should be avoided.

If you want to allow scalar iteration, use `allowscalar` or `@allowscalar`
to enable scalar iteration globally or for the operations in question.
Stacktrace:
  [1] error(s::String)
    @ Base ./error.jl:35
  [2] errorscalar(op::String)
    @ GPUArraysCore ~/.julia/packages/GPUArraysCore/GMsgk/src/GPUArraysCore.jl:155
  [3] _assertscalar(op::String, behavior::GPUArraysCore.ScalarIndexing)
    @ GPUArraysCore ~/.julia/packages/GPUArraysCore/GMsgk/src/GPUArraysCore.jl:128
  [4] assertscalar(op::String)
    @ GPUArraysCore ~/.julia/packages/GPUArraysCore/GMsgk/src/GPUArraysCore.jl:116
  [5] getindex(::ConcreteRArray{Float64, 3}, ::Int64, ::Int64, ::Int64)
    @ Reactant ~/.julia/packages/Reactant/4U3lu/src/ConcreteRArray.jl:243
  [6] getindex
    @ ./subarray.jl:290 [inlined]
  [7] _getindex
    @ ./abstractarray.jl:1340 [inlined]
  [8] getindex
    @ ./abstractarray.jl:1290 [inlined]
  [9] iterate
    @ ./abstractarray.jl:1216 [inlined]
 [10] iterate
    @ ./abstractarray.jl:1214 [inlined]
 [11] _foldl_impl(op::Base.BottomRF{typeof(max)}, init::Base._InitialValue, itr::SubArray{Float64, 3, ConcreteRArray{Float64, 3}, Tuple{UnitRange{Int64}, UnitRange{Int64}, UnitRange{Int64}}, false})
    @ Base ./reduce.jl:56
 [12] foldl_impl
    @ ./reduce.jl:48 [inlined]
 [13] mapfoldl_impl
    @ ./reduce.jl:44 [inlined]
 [14] mapfoldl
    @ ./reduce.jl:175 [inlined]
 [15] _mapreduce
    @ ./reduce.jl:453 [inlined]
 [16] _mapreduce_dim
    @ ./reducedim.jl:367 [inlined]
 [17] mapreduce
    @ ./reducedim.jl:359 [inlined]
 [18] _maximum
    @ ./reducedim.jl:1017 [inlined]
 [19] _maximum
    @ ./reducedim.jl:1016 [inlined]
 [20] maximum(a::SubArray{Float64, 3, ConcreteRArray{Float64, 3}, Tuple{UnitRange{Int64}, UnitRange{Int64}, UnitRange{Int64}}, false})
    @ Base ./reducedim.jl:1012

We also cannot reduce a Field of an RArray, possibly for the same reason:

julia> maximum(ru)
ERROR: GPU compilation of MethodInstance for (::GPUArrays.var"#map_kernel#38"{})(::CUDA.CuKernelContext, ::SubArray{…}, ::Base.Broadcast.Broadcasted{…}, ::Int64) failed
KernelError: passing and using non-bitstype argument

Argument 4 to your kernel function is of type Base.Broadcast.Broadcasted{Base.Broadcast.DefaultArrayStyle{3}, Tuple{Base.OneTo{Int64}, Base.OneTo{Int64}, Base.OneTo{Int64}}, typeof(identity), Tuple{Base.Broadcast.Extruded{SubArray{Float64, 3, ConcreteRArray{Float64, 3}, Tuple{UnitRange{Int64}, UnitRange{Int64}, UnitRange{Int64}}, false}, Tuple{Bool, Bool, Bool}, Tuple{Int64, Int64, Int64}}}}, which is not isbits:
  .args is of type Tuple{Base.Broadcast.Extruded{SubArray{Float64, 3, ConcreteRArray{Float64, 3}, Tuple{UnitRange{Int64}, UnitRange{Int64}, UnitRange{Int64}}, false}, Tuple{Bool, Bool, Bool}, Tuple{Int64, Int64, Int64}}} which is not isbits.
    .1 is of type Base.Broadcast.Extruded{SubArray{Float64, 3, ConcreteRArray{Float64, 3}, Tuple{UnitRange{Int64}, UnitRange{Int64}, UnitRange{Int64}}, false}, Tuple{Bool, Bool, Bool}, Tuple{Int64, Int64, Int64}} which is not isbits.
      .x is of type SubArray{Float64, 3, ConcreteRArray{Float64, 3}, Tuple{UnitRange{Int64}, UnitRange{Int64}, UnitRange{Int64}}, false} which is not isbits.
        .parent is of type ConcreteRArray{Float64, 3} which is not isbits.
          .data is of type Reactant.XLA.AsyncBuffer which is not isbits.
            .buffer is of type Reactant.XLA.Buffer which is not isbits.
            .future is of type Union{Nothing, Reactant.XLA.Future} which is not isbits.

Stacktrace:
  [1] check_invocation(job::GPUCompiler.CompilerJob)
    @ GPUCompiler ~/.julia/packages/GPUCompiler/2CW9L/src/validation.jl:92
  [2] macro expansion
    @ ~/.julia/packages/GPUCompiler/2CW9L/src/driver.jl:92 [inlined]
  [3] macro expansion
    @ ~/.julia/packages/TimerOutputs/6KVfH/src/TimerOutput.jl:253 [inlined]
  [4] codegen(output::Symbol, job::GPUCompiler.CompilerJob; toplevel::Bool, libraries::Bool, optimize::Bool, cleanup::Bool, validate::Bool, strip::Bool, only_entry::Bool, parent_job::Nothing)
    @ GPUCompiler ~/.julia/packages/GPUCompiler/2CW9L/src/driver.jl:90
  [5] codegen(output::Symbol, job::GPUCompiler.CompilerJob)
    @ GPUCompiler ~/.julia/packages/GPUCompiler/2CW9L/src/driver.jl:82
  [6] compile(target::Symbol, job::GPUCompiler.CompilerJob; kwargs::@Kwargs{})
    @ GPUCompiler ~/.julia/packages/GPUCompiler/2CW9L/src/driver.jl:79
  [7] compile
    @ ~/.julia/packages/GPUCompiler/2CW9L/src/driver.jl:74 [inlined]
  [8] #1145
    @ ~/.julia/packages/CUDA/2kjXI/src/compiler/compilation.jl:250 [inlined]
  [9] JuliaContext(f::CUDA.var"#1145#1148"{GPUCompiler.CompilerJob{GPUCompiler.PTXCompilerTarget, CUDA.CUDACompilerParams}}; kwargs::@Kwargs{})
    @ GPUCompiler ~/.julia/packages/GPUCompiler/2CW9L/src/driver.jl:34
 [10] JuliaContext(f::Function)
    @ GPUCompiler ~/.julia/packages/GPUCompiler/2CW9L/src/driver.jl:25
 [11] compile(job::GPUCompiler.CompilerJob)
    @ CUDA ~/.julia/packages/CUDA/2kjXI/src/compiler/compilation.jl:249
 [12] actual_compilation(cache::Dict{…}, src::Core.MethodInstance, world::UInt64, cfg::GPUCompiler.CompilerConfig{…}, compiler::typeof(CUDA.compile), linker::typeof(CUDA.link))
    @ GPUCompiler ~/.julia/packages/GPUCompiler/2CW9L/src/execution.jl:237
 [13] cached_compilation(cache::Dict{…}, src::Core.MethodInstance, cfg::GPUCompiler.CompilerConfig{…}, compiler::Function, linker::Function)
    @ GPUCompiler ~/.julia/packages/GPUCompiler/2CW9L/src/execution.jl:151
 [14] macro expansion
    @ ~/.julia/packages/CUDA/2kjXI/src/compiler/execution.jl:380 [inlined]
 [15] macro expansion
    @ ./lock.jl:267 [inlined]
 [16] cufunction(f::GPUArrays.var"#map_kernel#38"{Int64}, tt::Type{Tuple{CUDA.CuKernelContext, SubArray{…}, Base.Broadcast.Broadcasted{…}, Int64}}; kwargs::@Kwargs{})
    @ CUDA ~/.julia/packages/CUDA/2kjXI/src/compiler/execution.jl:375
 [17] cufunction
    @ ~/.julia/packages/CUDA/2kjXI/src/compiler/execution.jl:372 [inlined]
 [18] macro expansion
    @ ~/.julia/packages/CUDA/2kjXI/src/compiler/execution.jl:112 [inlined]
 [19] launch_heuristic(::CUDA.CuArrayBackend, ::GPUArrays.var"#map_kernel#38"{}, ::SubArray{…}, ::Base.Broadcast.Broadcasted{…}, ::Int64; elements::Int64, elements_per_thread::Int64)
    @ CUDA ~/.julia/packages/CUDA/2kjXI/src/gpuarrays.jl:17
 [20] launch_heuristic
    @ ~/.julia/packages/CUDA/2kjXI/src/gpuarrays.jl:15 [inlined]
 [21] map!(f::Function, dest::SubArray{Float64, 3, CUDA.CuArray{…}, Tuple{…}, false}, xs::SubArray{Float64, 3, ConcreteRArray{…}, Tuple{…}, false})
    @ GPUArrays ~/.julia/packages/GPUArrays/qt4ax/src/host/broadcast.jl:148

In the latter case, we do hit map! from CUDA.jl (like we would for maximum(u)).
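One hypothetical shape for a wrapper-aware fallback (purely a sketch, untested; the method signature is illustrative and not part of Reactant's API — it assumes Array(::ConcreteRArray) copies device data to the host, as the working parent(ru) comparisons above suggest):

```julia
# Illustrative only: a mapreduce fallback that materializes the wrapped
# ConcreteRArray on the host before reducing, sidestepping both the
# scalar-indexing path and the non-isbits kernel argument.
function Base.mapreduce(f, op, v::SubArray{<:Any, <:Any, <:ConcreteRArray}; kw...)
    hp = Array(parent(v))                          # copy device data to host
    return mapreduce(f, op, view(hp, parentindices(v)...); kw...)
end
```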


wsmoses commented Jan 27, 2025

@avik-pal


wsmoses commented Jan 27, 2025

That said you can just do Array(some_rarray) and it'll copy to host which should be usable for whatever.

Though clearly we should add some nicer concretearray wrappers

glwagner (Collaborator, Author) commented:
> That said you can just do Array(some_rarray) and it'll copy to host which should be usable for whatever.
>
> Though clearly we should add some nicer concretearray wrappers

Hmm, yes, that can work in a pinch. It's convenience that's the main issue, I think; e.g., we use reductions routinely to assess simulation state while something is running, etc.
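For illustration, that workaround applied to the failing interior reduction above might look like this (a sketch, untested; it assumes Array on a ConcreteRArray copies to host, and uses parentindices to rebuild the interior view on the host copy):

```julia
# Sketch of the host-copy workaround for reductions (untested).
v  = interior(ru)                    # SubArray of a ConcreteRArray
hp = Array(parent(v))                # materialize the parent on the host
hv = view(hp, parentindices(v)...)   # rebuild the interior view on the host
maximum(hv)                          # plain CPU reduction
```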
