
@threaded is not always working on macOS ARM #1206

Closed
efaulhaber opened this issue Aug 12, 2022 · 2 comments

Labels: bug, parallelization, upstream

Comments

efaulhaber (Member) commented:

Examples like tree_1d_dgsem/elixir_euler_shockcapturing.jl and some others fail on macOS ARM builds of Julia with the following error (stack trace truncated):

ERROR: cfunction: closures are not supported on this platform
Stacktrace:
  [1] macro expansion
    @ ~/.julia/packages/Polyester/BI7zH/src/batch.jl:25 [inlined]
  [2] batch_closure
    @ ~/.julia/packages/Polyester/BI7zH/src/batch.jl:18 [inlined]
  [3] macro expansion
    @ ~/.julia/packages/Polyester/BI7zH/src/batch.jl:127 [inlined]
  [4] _batch_no_reserve
    @ ~/.julia/packages/Polyester/BI7zH/src/batch.jl:111 [inlined]
  [5] batch
    @ ~/.julia/packages/Polyester/BI7zH/src/batch.jl:318 [inlined]
  [6] macro expansion
    @ ~/.julia/packages/Polyester/BI7zH/src/closure.jl:373 [inlined]
  [7] macro expansion
    @ ~/git/Trixi.jl/src/auxiliary/auxiliary.jl:188 [inlined]
  [8] (::IndicatorHennemannGassner{Float64, typeof(density_pressure), NamedTuple{(:alpha, :alpha_tmp, :indicator_threaded, :modal_threaded), Tuple{Vector{Float64}, Vector{Float64}, Vector{Vector{Float64}}, Vector{Vector{Float64}}}}})(u::StrideArraysCore.PtrArray{Tuple{Static.StaticInt{3}, Static.StaticInt{4}, Int64}, (true, true, true), Float64, 3, 1, 0, (1, 2, 3), Tuple{Static.StaticInt{8}, Static.StaticInt{24}, Static.StaticInt{96}}, Tuple{Static.StaticInt{1}, Static.StaticInt{1}, Static.StaticInt{1}}}, mesh::TreeMesh{1, Trixi.SerialTree{1}}, equations::CompressibleEulerEquations1D{Float64}, dg::DGSEM{LobattoLegendreBasis{Float64, 4, SVector{4, Float64}, Matrix{Float64}, Matrix{Float64}, Matrix{Float64}}, Trixi.LobattoLegendreMortarL2{Float64, 4, Matrix{Float64}, Matrix{Float64}}, SurfaceIntegralWeakForm{FluxLaxFriedrichs{typeof(max_abs_speed_naive)}}, VolumeIntegralShockCapturingHG{typeof(flux_shima_etal), FluxLaxFriedrichs{typeof(max_abs_speed_naive)}, IndicatorHennemannGassner{Float64, typeof(density_pressure), NamedTuple{(:alpha, :alpha_tmp, :indicator_threaded, :modal_threaded), Tuple{Vector{Float64}, Vector{Float64}, Vector{Vector{Float64}}, Vector{Vector{Float64}}}}}}}, cache::NamedTuple{(:elements, :interfaces, :boundaries, :element_ids_dg, :element_ids_dgfv, :fstar1_L_threaded, :fstar1_R_threaded), Tuple{Trixi.ElementContainer1D{Float64, Float64}, Trixi.InterfaceContainer1D{Float64}, Trixi.BoundaryContainer1D{Float64, Float64}, Vector{Int64}, Vector{Int64}, Vector{Matrix{Float64}}, Vector{Matrix{Float64}}}}; kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ Trixi ~/git/Trixi.jl/src/solvers/dgsem_tree/indicators_1d.jl:44
  [9] IndicatorHennemannGassner
    @ ~/git/Trixi.jl/src/solvers/dgsem_tree/indicators_1d.jl:27 [inlined]

More details here:
JuliaSIMD/Polyester.jl#88
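
For context, the underlying limitation can be triggered without Polyester at all: Julia's @cfunction cannot wrap a runtime closure on platforms without trampoline support, which includes Apple Silicon. A minimal sketch in plain Julia, assuming a macOS ARM build:

# Creating a C-callable function pointer from a runtime closure requires a
# trampoline, which Julia does not implement on this platform, so this throws
# "ERROR: cfunction: closures are not supported on this platform":
let a = 1.0
    f = x -> x + a                       # closure capturing `a`
    @cfunction($f, Float64, (Float64,))  # errors on macOS ARM
end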

If we want to support macOS ARM, we could either investigate the upstream problem and try to fix it there, or extract all the code inside this particular @threaded loop into a separate function, which avoids using a closure (sketched below).
However, I have no idea when a cfunction closure is used and when we can get away without one. Some @threaded loops work perfectly fine without extracting a function.
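
For illustration, a minimal sketch of what such an extraction could look like. The names calc_indicator_element! and compute_element_indicator are hypothetical placeholders, not Trixi's actual indicator code, and whether this avoids the closure cfunction in a given case is exactly the open question above:

# Before (hypothetical): the loop body is inlined, so Polyester wraps it and
# the variables it references into a closure behind Trixi's @threaded macro:
#
# @threaded for element in eachelement(dg, cache)
#     alpha[element] = compute_element_indicator(u, element)
# end

# After: hoist the body into a top-level function that receives all of its
# state explicitly, so the loop body is a plain function call:
@inline function calc_indicator_element!(alpha, u, element)
    alpha[element] = compute_element_indicator(u, element)
    return nothing
end

@threaded for element in eachelement(dg, cache)
    calc_indicator_element!(alpha, u, element)
end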

efaulhaber added the bug, parallelization, and upstream labels on Aug 12, 2022
ranocha (Member) commented Aug 14, 2022:

I think trying to debug the @batch problems as discussed in the upstream issue is the better option. I would only consider rewriting our code if we're really not able to find anything.

efaulhaber (Member, Author) commented:

Fixed by #1462?
