Integrate llvm 1_20_2025 #19740
Conversation
Signed-off-by: Nirvedh Meshram <[email protected]>
LGTM!
Signed-off-by: Nirvedh Meshram <[email protected]>
Signed-off-by: Nirvedh Meshram <[email protected]>
// CHECK: arith.maxnumf
// CHECK: arith.maxnumf
// CHECK: vector.broadcast %{{.*}} : f32 to vector<4xf32>
What happened here?
I am not sure; that maxnumf is no longer present in the IR, and overall a lot of the reduction codegen seems to have changed. I wonder if it's something with the value-bounds interface, like this commit:
llvm/llvm-project#122804
I will flag this on Discord.
Actually, I think what's happening here can be explained by this commit:
llvm/llvm-project#118952
I believe we were using -INF in the softmax decomposition when we wanted NaN. With NaN there is some simplification: maxnumf simply picks the other argument, hence one fewer maxnumf is needed after folding.
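To illustrate, a minimal sketch (the function and value names are made up, and this assumes the arith dialect's documented maxnum semantics, where a NaN operand yields the other operand):

```mlir
func.func @seed_fold(%x: f32) -> f32 {
  // NaN seed (0x7FC00000); if I'm reading llvm/llvm-project#118952 right,
  // the decomposition previously seeded the running max with -INF instead.
  %seed = arith.constant 0x7FC00000 : f32
  // maxnumf(%x, NaN) == %x, so canonicalization can fold this op away,
  // leaving one fewer arith.maxnumf in the final IR.
  %m = arith.maxnumf %x, %seed : f32
  return %m : f32
}
```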
Let's check what happens to correctness with this... We might need to revert it. I missed it, but I don't think using NaN makes sense. It should be -INF.
The CI is not finding any errors, and I believe there are softmax dispatches checked into the regression tests, so I think this works. There is some discussion on this here:
llvm/llvm-project#114595
NaN is safe with maxnumf, but whether we should have used maximumf and kept using -INF is something to decide. Whether maxnumf should fold the same way for -INF is also something to think over.
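For reference, a sketch (function name made up) of the semantic difference between the two ops on a NaN operand, per the arith dialect docs: maxnumf ignores NaN while maximumf propagates it, which is why only the former admits the fold above.

```mlir
func.func @nan_semantics(%x: f32) -> (f32, f32) {
  %nan = arith.constant 0x7FC00000 : f32
  // maxnumf: the NaN operand is ignored, so %a is always %x (foldable).
  %a = arith.maxnumf %x, %nan : f32
  // maximumf: NaN propagates, so %b is always NaN (no such fold).
  %b = arith.maximumf %x, %nan : f32
  return %a, %b : f32, f32
}
```

Note that with a -INF operand, maxnumf(%x, -INF) is %x only when %x is not NaN (it is -INF when %x is NaN), which is presumably why folding the -INF case is less clear-cut.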
Signed-off-by: Nirvedh Meshram <[email protected]>
Force-pushed from a9ad59b to d07a2ec
Resolved review threads:
- compiler/src/iree/compiler/Codegen/Common/DecomposePackUnPackOps.cpp (outdated)
- compiler/src/iree/compiler/Codegen/Common/TensorToVectorVectorizePad.cpp (outdated)
- compiler/src/iree/compiler/Codegen/Common/TensorToVectorVectorizePad.cpp
- compiler/src/iree/compiler/Codegen/SPIRV/SPIRVVectorizeLoadStore.cpp (outdated)
Signed-off-by: Nirvedh Meshram <[email protected]>
Resolved review threads:
- compiler/src/iree/compiler/Codegen/Common/DecomposePackUnPackOps.cpp (outdated, two threads)
Signed-off-by: Nirvedh Meshram <[email protected]>
@MaheshRavishankar @kuhar let me know if you think I should land this. The only topic not yet discussed is the SPIRV test changes; they look like some value-bounds improvements to me, but I don't read those shuffle ops too well.
If tests pass, land it.
Drive-by nits for PR/branch naming: pushing to the shared repository with a branch name matching [...] If you use megabump (https://github.com/iree-org/megabump), the branch name will be generated for you: https://github.com/iree-org/megabump/blob/37cbfaf83374f51d6d02528745711d9d7d515a69/scripts/start_integrate#L16
Reverts:
- RISC-V backend
- Python related changes
- NVPTX changes
- MLIR API changes (this change is breaking something in HLO)
- TOSA (the tosa.tile operation changed and torch-mlir needs to catch up)

Updates to Torch-MLIR:
- floating type changes to be upstreamed: https://github.com/iree-org/torch-mlir/tree/fix_forward_iree_llvm_integrate_2025
- style change