[AutoBump] Merge with fixes of 297c2709 (May 20) (43) #276
Merged
Conversation
The old lowering only had logic for 2D (i.e., images). This patch allows interpolation for n spatial dims, which is required for some 3D vision models such as onnx/models/pytorch-3dunet_vaiq_int8, which successfully compiles and runs with this patch.
This commit fixes the onnx.MaxPool op lowering, which lacked support for the indices result. Signed-Off By: Vivek Khandelwal <[email protected]>
ensures stability of results between different setups
…tend (llvm#3376) Discord Thread: https://discord.com/channels/636084430946959380/1238330633328005243

## Context:

[This](https://github.com/llvm/torch-mlir/blob/main/python/torch_mlir/fx.py#L61) was updated to support e2e tests for the TorchDynamo frontend in Torch-MLIR, where we run FX decompositions and import the FX IR to generate the Torch dialect, followed by `torch-function-to-torch-backend-pipeline`, skipping only the shape/type refinement for now. However, we should be able to skip many of the torch simplification passes, as depicted in the [frontend roadmap](https://github.com/llvm/torch-mlir/blob/main/docs/images/roadmap_frontend.png). Based on IREE's TorchDynamo [pipeline](https://github.com/iree-org/iree/blob/main/compiler/plugins/input/Torch/InputConversion/Passes.cpp#L29), the only two passes we seem to require are `ReduceOpVariantsPass` and `DecomposeComplexOpsPass`. This is in line with our own findings from initial exploration.

This PR creates a dedicated frontend simplification pipeline for TorchDynamo / FX Importer that calls only `ReduceOpVariantsPass` and `DecomposeComplexOpsPass`; see the sketch below. We rely on the e2e fx_importer tests to ensure we're not regressing by removing the many passes that were historically needed for TorchScript.

One notable change is that we no longer call `LowerToBackendContractPass`, which used to call `TorchSimplificationPipeline` iteratively until `VerifyBackendContract` was clean. Some of this was required for the shape/type refinement to converge, which seems to be a non-issue for the Dynamo frontend. Do we anticipate this (the iterative invocation of `TorchSimplificationPipeline` followed by `VerifyBackendContract`) being worth retaining in the Dynamo frontend pipeline? If so, I can make those changes, PLMK.
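For concreteness, here is a minimal sketch of what such a dedicated pipeline could look like on the C++ side. The helper name `buildFxSimplificationPipeline` is hypothetical, and the exact pass-constructor signatures and nesting are assumptions based on torch-mlir's usual conventions, not this PR's diff:

```cpp
// Hypothetical sketch of a TorchDynamo/FX-only simplification pipeline.
#include "mlir/Dialect/Func/IR/FuncOps.h"
#include "mlir/Pass/PassManager.h"
#include "torch-mlir/Dialect/Torch/Transforms/Passes.h"

using namespace mlir;

// Assumed helper name; the real pipeline registration may differ.
static void buildFxSimplificationPipeline(OpPassManager &pm) {
  // Rewrite mutating/in-place op variants into value-semantic forms.
  pm.addNestedPass<func::FuncOp>(torch::Torch::createReduceOpVariantsPass());
  // Decompose complex ops into simpler ops the backends understand.
  pm.addNestedPass<func::FuncOp>(torch::Torch::createDecomposeComplexOpsPass());
  // Note: no LowerToBackendContractPass here, i.e. no iterative
  // simplify-and-verify loop as in the TorchScript path.
}
```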
This commit fixes the bugs for the `onnx.OneHot` operator by:
1) Converting negative indices to non-negative indices (illustrated below)
2) Handling both `int` and `float` types for `off` and `on` values
3) Using the correct result type

It also includes a new unit test.
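Per the ONNX spec, OneHot indices may be negative, with an index `i` in `[-depth, depth-1]` mapping to `i + depth` when `i < 0`. A minimal standalone illustration of that normalization (not the actual conversion code in this PR):

```cpp
#include <cassert>
#include <cstdint>

// Maps a possibly-negative OneHot index into [0, depth), as ONNX specifies.
int64_t normalizeOneHotIndex(int64_t index, int64_t depth) {
  return index < 0 ? index + depth : index;
}

int main() {
  assert(normalizeOneHotIndex(-1, 10) == 9);
  assert(normalizeOneHotIndex(3, 10) == 3);
}
```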
…uOp (llvm#3330) I am trying to eliminate `getWithLeastStaticInformation` in `DecomposeAtenTriuOp`. Could you provide me with some suggestions? @qingyunqu @zjgarvey See issue llvm#3312
* bump llvm to 1e5f29a
* bump stablehlo to c44d9af8d4879adccf1054cb61a53377ae5898cb
This PR fixes the bugs for `Torch::AtenOneHotOp` by:
1) Using `Torch::kUnknownSize` as the default value for `numClasses` in the pattern matching stage in `DecomposeAtenOneHotOp`
2) Adding `AtenIntScalarOp` to the patterns in `TorchToArith`
3) Handling both `int` and `float` types for `off` and `on` values in the `TorchOnnxToTorch` conversion

It also includes:
1) A new test in `TorchToArith/basic.mlir` for `torch.aten.Int.Scalar`, and
2) A new test in `decompose-complex-ops.mlir` for `torch.aten.one_hot`

**Dependencies**
This PR is dependent on llvm#3334.
`std::accumulate` needs a 64-bit init value to perform 64-bit arithmetic on a list of integers; see the sketch below. Signed-off-by: Gaurav Shukla <[email protected]>
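This is a classic C++ pitfall: the accumulator type of `std::accumulate` is the type of its init argument, not the element type. A standalone demonstration (not the PR's code):

```cpp
#include <cstdint>
#include <functional>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
  // Dimensions whose product (2^32) does not fit in a 32-bit int.
  std::vector<int32_t> dims = {65536, 65536};

  // Bug: the literal 1 is an int, so every multiplication happens in
  // 32-bit arithmetic and overflows (undefined behavior for signed int):
  //   std::accumulate(dims.begin(), dims.end(), 1, std::multiplies<>());

  // Fix: a 64-bit init value makes the accumulator (and arithmetic) 64-bit.
  auto numElems = std::accumulate(dims.begin(), dims.end(), int64_t{1},
                                  std::multiplies<>());
  std::cout << numElems << "\n"; // 4294967296
}
```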
Contributing towards llvm#3299
so that `python3 e2e_testing/main.py -v` would print intermediate IR.
* support aten.linear with higher-rank inputs.
Co-authored-by: zjgarvey <[email protected]>
Signed-Off By: Vivek Khandelwal <[email protected]>
Updates:
- Unsupported coordinate transformation modes now report a match failure.
- Fixes a bug that was introduced in the last patch for resize (my bad...).
- Uses the actual x and y coordinates for computing weights in bilinear interpolation, rather than eps-modified values; see the sketch below.
- Slightly simplifies the bilinear interpolation payload for readability and performance.
- Passes coordinate transformation mode information from an onnx.Resize op through to the mode string of the aten._interpolate op. This allows us to perform custom logic in the torch->linalg lowering to support onnx.Resize options without losing the default behaviors of the interpolate op.
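To make the weights point concrete: bilinear weights are the fractional distances of the true source coordinate to its two integer neighbors, so an eps-shifted coordinate skews them. A hedged, standalone illustration in one dimension (not this patch's payload code):

```cpp
#include <cmath>
#include <iostream>

// Weights for the low/high integer neighbors of a 1-D source coordinate.
struct Weights { double low, high; };

Weights bilinearWeights(double srcCoord) {
  double frac = srcCoord - std::floor(srcCoord); // distance past the low neighbor
  return {1.0 - frac, frac};
}

int main() {
  Weights w = bilinearWeights(2.25);
  std::cout << w.low << " " << w.high << "\n"; // 0.75 0.25
}
```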
llvm#3401) …lit.sections
* support recompose to aten.split.with_sizes and aten.tensor_split.sections
* fix recompose of aten.chunk
Base automatically changed from `bump_to_99511cef` to `feature/backport_ea1_ops` on September 9, 2024 07:29
[AutoBump] Merge with 4d7cdba (May 22) (44)
[AutoBump] Merge with fixes of f4bfe3f (May 22) (45)
[AutoBump] Merge with 05929f9 (May 27) (46)
[AutoBump] Merge with fixes of e0a5adb (May 27) (47)
[AutoBump] Merge with d7b8f00 (May 30) (48)
[AutoBump] Merge with fixes of 074098d (May 31) (49)
[AutoBump] Merge with fixes of 4e05e2c (May 31) (50)
cferry-AMD approved these changes on Sep 9, 2024