feat: correct handling of wrapped arrays functionalities #342
Conversation
res = reduce_window(
    Reactant.MLIR.Dialects.stablehlo.maximum, T.(x), pdims; init=typemin(T)
).mlir_data
Maybe this can already be replaced by Ops.maximum?
The first argument here is used directly with block arguments. It would be an unnecessary indirection to go down the Ops route here.
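For context, reduce_window with stablehlo.maximum and init=typemin(T) is a windowed max reduction (max pooling). A plain-Julia sketch of the semantics, assuming a 2×2 window with stride 2 (the actual window and strides come from pdims, which isn't shown here); this is only an illustration, not the Reactant lowering itself:

julia> x = Float32[1 2 5 6; 3 4 7 8; 9 10 13 14; 11 12 15 16];

julia> [maximum(x[i:i+1, j:j+1]) for i in 1:2:3, j in 1:2:3]
2×2 Matrix{Float32}:
  4.0   8.0
 12.0  16.0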
It fixes my code! Using Tenet, Reactant, Adapt:
julia> a = Tensor([1 0; 1 0], [:j, :i])
2×2 Tensor{Int64, 2, Matrix{Int64}}:
1 0
1 0
julia> b = Tensor([0 0; 1 1], [:i, :j])
2×2 Tensor{Int64, 2, Matrix{Int64}}:
0 0
1 1
julia> a + b # this is correct
2×2 Tensor{Int64, 2, Matrix{Int64}}:
1 1
1 1
julia> parent(a) + parent(b) # this is wrong
2×2 Matrix{Int64}:
1 0
2 1

Before this PR:

julia> ar = adapt(ConcreteRArray, a)
2×2 Tensor{Int64, 2, ConcreteRArray{Int64, 2}}:
1 0
1 0
julia> br = adapt(ConcreteRArray, b)
2×2 Tensor{Int64, 2, ConcreteRArray{Int64, 2}}:
0 0
1 1
julia> @jit ar + br # it's taking the parent array, not transposed
2×2 Tensor{Int64, 2, ConcreteRArray{Int64, 2}}:
1 0
2 1

After this PR:

julia> @jit ar + br
2×2 Tensor{Int64, 2, ConcreteRArray{Int64, 2}}:
1 1
1 1
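In other words, the tensor sum has to align the :i/:j indices (permuting one parent) before adding elementwise. A plain Base Julia check of what the aligned sum should give, independent of Reactant:

julia> permutedims([1 0; 1 0], (2, 1)) + [0 0; 1 1]
2×2 Matrix{Int64}:
 1  1
 1  1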
Everything's fine, but would you mind adding some more tests for Ops.reshape? We probably missed the bug you're talking about.
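A minimal sketch of the kind of test that could be added, assuming Ops.reshape takes the traced array followed by the target sizes; this is only an illustration, not the tests actually added in this PR:

using Reactant, Test

x = Float32.(reshape(1:12, 3, 4))
x_ra = Reactant.ConcreteRArray(x)

# reshaping there and back through Ops.reshape should be a no-op
roundtrip(a) = Reactant.Ops.reshape(Reactant.Ops.reshape(a, 12), 3, 4)
@test @jit(roundtrip(x_ra)) == x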
res = MLIR.IR.result(
    MLIR.Dialects.stablehlo.dynamic_gather(
        get_mlir_data(y), idxs, slice_sizes; dimension_numbers
    ),
    1,
)
Ahhh, I didn't think about using dynamic_gather in this way. I was wary of using dynamic_slice for each diagonal element. Nice!
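The idea is that all diagonal elements are fetched with a single gather over the (i, i) index pairs, rather than one 1×1 slice per element. A plain-Julia illustration of the two approaches (not the Reactant lowering itself):

julia> A = reshape(collect(1:16), 4, 4);

julia> [A[i, i] for i in 1:4]        # one gather over all (i, i) pairs
4-element Vector{Int64}:
  1
  6
 11
 16

julia> [A[i:i, i:i][] for i in 1:4]  # one 1×1 slice per diagonal element
4-element Vector{Int64}:
  1
  6
 11
 16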
Co-authored-by: Sergio Sánchez Ramírez <[email protected]>
The only additional CI failure will be resolved by #362
fixes #339

The main trick here is to preserve the ReshapedArray type. Applies the fix for the following types for the time being:

Adds the following LinearAlgebra functionality:

diag
diagm

Updated NNlibExt to handle wrappers correctly
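A quick usage sketch of the new LinearAlgebra coverage (illustrative only; the array values are made up, and ConcreteRArray/@jit are used as in the example above):

using LinearAlgebra, Reactant

A = Reactant.ConcreteRArray([1.0 2.0; 3.0 4.0])
v = Reactant.ConcreteRArray([5.0, 6.0])

@jit diag(A)    # expected: the main diagonal [1.0, 4.0]
@jit diagm(v)   # expected: a 2×2 matrix with 5.0 and 6.0 on the diagonal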