
Integrate LLVM at 6d38dbf6eb56fd2b3399565af455de96a99ffa0f #4103

Open · wants to merge 2 commits into main
Conversation

vivekkhandelwal1 (Collaborator)
No description provided.

justin-ngo-arm and others added 2 commits on March 24, 2025:
1: [TOSA] Update rescale input_/output_zp and double_round attribute

* Update tosa.rescale's input_zp/output_zp to be inputs, according to TOSA 1.0.
* Update the double_round bool attribute to rounding_mode in alignment with
TOSA 1.0. rounding_mode supports "SINGLE_ROUND", "INEXACT_ROUND", and
"DOUBLE_ROUND". Existing double_round behaviours map as follows (a C++ sketch follows the commit list below):
  - double_round = true -> rounding_mode = "DOUBLE_ROUND"
  - double_round = false -> rounding_mode = "SINGLE_ROUND"

2: [TOSA] Update tosa.negate's zero-points to inputs

Update LIT tests and XFAIL sets
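As a hedged illustration of the first commit's double_round to rounding_mode mapping, here is a minimal C++ sketch. The helper name is hypothetical; the actual lowering lives in torch-mlir's TOSA conversion code.

```cpp
#include <string>

// Hypothetical helper (not the actual torch-mlir code): maps the legacy
// TOSA double_round boolean onto the TOSA 1.0 rounding_mode string, as
// described in commit 1. "INEXACT_ROUND" is also valid in TOSA 1.0 but
// has no legacy double_round equivalent, so it never results from this
// mapping.
std::string roundingModeFromDoubleRound(bool double_round) {
  return double_round ? "DOUBLE_ROUND" : "SINGLE_ROUND";
}
```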
@vivekkhandelwal1 (Collaborator, Author)
This PR can't be merged until the issue described here is resolved: #3504 (comment)

@zjgarvey (Collaborator)
I'm wondering if someone wrote a conversion from reshape to expand shape. I'll take a look to triage, but I might not have time to upstream a fix.

@zjgarvey (Collaborator)
The crash occurs because of an expand shape op that gets created in the pattern FoldWithProducerReshapeOpByExpansion inside the pass --linalg-fuse-elementwise-ops (not related to #3504). The actual crash occurs when trying to fold the dim ops of the expanded tensor later on in the pass.

Because @MaheshRavishankar made the output shape of tensor.expand_shape accessible in llvm/llvm-project@092372d, we should be able to improve the pattern at https://github.com/llvm/llvm-project/blob/1a7af2a90fb043b25cbba15ce941ebfdba0e6717/mlir/lib/Dialect/Tensor/IR/TensorOps.cpp#L1989 to fetch the dynamic dims.
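For illustration, a minimal sketch of the suggested improvement, assuming the ExpandShapeOp API from llvm/llvm-project@092372d. The pattern name is made up, getMixedOutputShape() is assumed to be the relevant accessor, and the actual upstream fix may look different.

```cpp
#include <optional>

#include "mlir/Dialect/Arith/Utils/Utils.h"
#include "mlir/Dialect/Tensor/IR/Tensor.h"
#include "mlir/IR/PatternMatch.h"

using namespace mlir;

// Sketch only: resolve tensor.dim of a tensor.expand_shape result from
// the op's recorded output shape instead of recomputing it, which is
// what the pattern at TensorOps.cpp#L1989 could do to fetch dynamic dims.
struct ResolveDimOfExpandShape : public OpRewritePattern<tensor::DimOp> {
  using OpRewritePattern<tensor::DimOp>::OpRewritePattern;

  LogicalResult matchAndRewrite(tensor::DimOp dimOp,
                                PatternRewriter &rewriter) const override {
    auto expand = dimOp.getSource().getDefiningOp<tensor::ExpandShapeOp>();
    if (!expand)
      return failure();
    std::optional<int64_t> index = dimOp.getConstantIndex();
    if (!index)
      return failure();
    // Assumption: each result dimension has a matching entry in the mixed
    // output shape, either a static size or a dynamic SSA value.
    SmallVector<OpFoldResult> shape = expand.getMixedOutputShape();
    if (*index < 0 || *index >= static_cast<int64_t>(shape.size()))
      return failure();
    rewriter.replaceOp(dimOp, getValueOrCreateConstantIndexOp(
                                  rewriter, dimOp.getLoc(), shape[*index]));
    return success();
  }
};
```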

@justin-ngo-arm (Contributor)
Hi @vivekkhandelwal1 and @zjgarvey, is there a resolution for this blocking issue yet? Some of our work depends on these TOSA 1.0 updates being aligned upstream, and the blocking issue seems unrelated to TOSA. Could we do an integration with an LLVM hash that doesn't include the breaking change (while still including all the TOSA updates)?

cc: @sjarus

@MaheshRavishankar (Contributor)
> The crash occurs because of an expand shape op that gets created in the pattern FoldWithProducerReshapeOpByExpansion inside the pass --linalg-fuse-elementwise-ops (not related to #3504). The actual crash occurs when trying to fold the dim ops of the expanded tensor later on in the pass.
>
> Because @MaheshRavishankar made the output shape of tensor.expand_shape accessible in llvm/llvm-project@092372d, we should be able to improve the pattern at https://github.com/llvm/llvm-project/blob/1a7af2a90fb043b25cbba15ce941ebfdba0e6717/mlir/lib/Dialect/Tensor/IR/TensorOps.cpp#L1989 to fetch the dynamic dims.

I just saw these patterns. We should just drop those FoldDimOfExpandShape and FoldDimOfCollapseShape patterns from canonicalizers.

@vivekkhandelwal1 (Collaborator, Author)
>> The crash occurs because of an expand shape op that gets created in the pattern FoldWithProducerReshapeOpByExpansion inside the pass --linalg-fuse-elementwise-ops (not related to #3504). The actual crash occurs when trying to fold the dim ops of the expanded tensor later on in the pass.
>>
>> Because @MaheshRavishankar made the output shape of tensor.expand_shape accessible in llvm/llvm-project@092372d, we should be able to improve the pattern at https://github.com/llvm/llvm-project/blob/1a7af2a90fb043b25cbba15ce941ebfdba0e6717/mlir/lib/Dialect/Tensor/IR/TensorOps.cpp#L1989 to fetch the dynamic dims.
>
> I just saw these patterns. We should just drop those FoldDimOfExpandShape and FoldDimOfCollapseShape patterns from canonicalizers.

Hi @MaheshRavishankar, can you please review this: llvm/llvm-project#134219?

@vivekkhandelwal1 (Collaborator, Author)
> Hi @vivekkhandelwal1 and @zjgarvey, is there a resolution for this blocking issue yet? Some of our work depends on these TOSA 1.0 updates being aligned upstream, and the blocking issue seems unrelated to TOSA. Could we do an integration with an LLVM hash that doesn't include the breaking change (while still including all the TOSA updates)?
>
> cc: @sjarus

Hi @justin-ngo-arm, we should wait for the changes to be merged in LLVM, and then we can do a more recent bump.

@justin-ngo-arm (Contributor)
>> Hi @vivekkhandelwal1 and @zjgarvey, is there a resolution for this blocking issue yet? Some of our work depends on these TOSA 1.0 updates being aligned upstream, and the blocking issue seems unrelated to TOSA. Could we do an integration with an LLVM hash that doesn't include the breaking change (while still including all the TOSA updates)?
>>
>> cc: @sjarus
>
> Hi @justin-ngo-arm, we should wait for the changes to be merged in LLVM, and then we can do a more recent bump.

Sounds good to me. Thank you for helping with this!

@sjarus (Collaborator) commented Apr 3, 2025

Thanks a lot, @MaheshRavishankar and @vivekkhandelwal1 !
