forked from llvm/torch-mlir
[FXML-3548] Bump torch mlir #141
Merged: mgehre-amd merged 219 commits into feature/backport_ea1_ops from tina.FXML-3548-bump-torch-mlir-to-ff7f8b21dcc842a4f70209a6d255d54c4ef6e39b on Feb 2, 2024
Conversation
Set PyTorch and TorchVision version to nightly release 2023-05-16. This commit removes the test `BaddbmmDifferentDtypesModule_basic` since PyTorch expects all operands to have the same dtype. Ref: pytorch/pytorch@2abad0c Signed-Off By: Vivek Khandelwal <[email protected]>
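A minimal sketch, not taken from the PR, of the constraint that motivated removing the test; it assumes a PyTorch build recent enough to enforce matching operand dtypes for `baddbmm`:

```python
import torch

# Hypothetical repro sketch: recent PyTorch rejects baddbmm when the operand
# dtypes differ, which is why BaddbmmDifferentDtypesModule_basic was removed.
inp = torch.zeros(3, 4, 5, dtype=torch.float32)
batch1 = torch.randn(3, 4, 6, dtype=torch.float64)  # dtype differs from `inp`
batch2 = torch.randn(3, 6, 5, dtype=torch.float64)

try:
    torch.baddbmm(inp, batch1, batch2)
except RuntimeError as exc:
    print("mixed dtypes rejected:", exc)

# Casting everything to a common dtype is the supported path.
out = torch.baddbmm(inp.double(), batch1, batch2)
print(out.shape)  # torch.Size([3, 4, 5])
```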
- torch version: 2.1.0.dev20230615 - torch commit hash: 0d4f9aee900596cd8ed55725f75a5792b6df6de1 - torchvision version: 0.16.0.dev20230615 Co-authored-by: Roll PyTorch Action <[email protected]>
…op (llvm#2233) This commit adds the support for index.Tensor op when the index values are negative. This commit wraps around the index values by checking their values at run time. Signed-Off By: Vivek Khandelwal <[email protected]>
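A minimal Python sketch of the wrap-around described above, not the actual lowering code; `wrap_negative_indices` is a hypothetical helper name used only for illustration:

```python
import torch

def wrap_negative_indices(index: torch.Tensor, dim_size: int) -> torch.Tensor:
    # Negative indices are shifted by the dimension size at run time,
    # so -1 maps to dim_size - 1, -2 to dim_size - 2, and so on.
    return torch.where(index < 0, index + dim_size, index)

values = torch.arange(10)
idx = torch.tensor([0, -1, -3])
wrapped = wrap_negative_indices(idx, values.shape[0])
print(wrapped)          # tensor([0, 9, 7])
print(values[wrapped])  # tensor([0, 9, 7])
```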
- torch version: 2.1.0.dev20230617 - torch commit hash: a522f9aedd9c9aaebba5997f201cc23119696578 - torchvision version: 0.16.0.dev20230617 Co-authored-by: Roll PyTorch Action <[email protected]>
- torch version: 2.1.0.dev20230618 - torch commit hash: 59c654a6ad8d256b89123dda536052e98cd5e399 - torchvision version: 0.16.0.dev20230618 Co-authored-by: Roll PyTorch Action <[email protected]>
- torch version: 2.1.0.dev20230619 - torch commit hash: 5beeb400ca3487d55629cbf8b87f9b637a7b657f - torchvision version: 0.16.0.dev20230619 Co-authored-by: Roll PyTorch Action <[email protected]>
This patch updates the submodules to: - llvm: 3f8d8c1 - mhlo: 4384a47b03dc377d651523037867899a340b0e96 The only change made is calling `registerAllExtensions` during dialect registration. See: https://reviews.llvm.org/D120368
This commit adds e2e support for aten.alias op. Signed-off-by: Abhishek Varma <[email protected]>
- torch version: 2.1.0.dev20230621 - torch commit hash: e4cf441a4ba770dc869433d876e73051ed9800b2 - torchvision version: 0.16.0.dev20230621 Co-authored-by: Roll PyTorch Action <[email protected]>
…m#2258) Single element tuples in Python need a comma after the element. However, the `registry.py` file, which generates the expected abstract interpretation function signatures, was not inserting the comma. This commit changes the expected signature generator to add a comma after the last element in any non-empty default tuple argument.
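An illustrative snippet of the Python pitfall being fixed; the `dims` parameter name is only an example, not a signature from the repository:

```python
# Parentheses alone do not make a tuple; the trailing comma does.
no_comma = (0)     # this is just the int 0
with_comma = (0,)  # this is a one-element tuple

print(type(no_comma))    # <class 'int'>
print(type(with_comma))  # <class 'tuple'>

# So a generated default such as `dims=(0)` would be an int, while
# `dims=(0,)` is the tuple the expected signature actually needs.
def example(dims=(0,)):
    return tuple(dims)

print(example())  # (0,)
```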
llvm#2256) * [Torch Dialect] avoid assertion failure when PrimNumToTensorScalarOp's input is torch.number * update
* [Torch Dialect] add missing one_hot dtype function * update * update * update
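A small illustrative check, not from the PR, of the dtype fact a one_hot dtype function has to encode:

```python
import torch
import torch.nn.functional as F

# one_hot takes integer labels and produces an int64 result, which is the
# invariant a dtype function for aten.one_hot must return.
labels = torch.tensor([0, 2, 1])
encoded = F.one_hot(labels, num_classes=3)
print(encoded.dtype)  # torch.int64
print(encoded)
```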
- torch version: 2.1.0.dev20230623 - torch commit hash: ad724c83fb0d94cb3bb2cec94e15d88023c64e0d - torchvision version: 0.16.0.dev20230623 Co-authored-by: Roll PyTorch Action <[email protected]>
- torch version: 2.1.0.dev20230624 - torch commit hash: 27b3861096b2e84d2e10cc823ba413967f82aafc - torchvision version: 0.16.0.dev20230624 Co-authored-by: Roll PyTorch Action <[email protected]>
- torch version: 2.1.0.dev20230625 - torch commit hash: 3bebfdfbabb134d20c3431d68219b54ad61ce172 - torchvision version: 0.16.0.dev20230625 Co-authored-by: Roll PyTorch Action <[email protected]>
- torch version: 2.1.0.dev20230626 - torch commit hash: 176a02ed90b218ffbf6a7b290ac28d37f06708ff - torchvision version: 0.16.0.dev20230626 Co-authored-by: Roll PyTorch Action <[email protected]>
Green LLVM commit: ec89cb9 Green MHLO commit: cd47c8c4db420181551e79c42fac22aecc4c06af
* [Torch Dialect] add folder for aten.add * update * update * update
* [Torch Dialect] Support aten.native_dropout * update
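An illustrative call, not from the PR, showing the pair of results aten.native_dropout produces and that the new support has to model:

```python
import torch

# native_dropout returns both the dropped-out tensor and a boolean keep-mask.
x = torch.randn(2, 3)
out, mask = torch.ops.aten.native_dropout(x, 0.5, True)
print(out.shape)   # torch.Size([2, 3])
print(mask.dtype)  # torch.bool
```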
- torch version: 2.1.0.dev20230627 - torch commit hash: 43ec335ff295c55bd5d44a9fd03cfc884839a283 - torchvision version: 0.16.0.dev20230627 Co-authored-by: Roll PyTorch Action <[email protected]>
- torch version: 2.1.0.dev20230628 - torch commit hash: 94ca800459ebe8cd2bc3a9927a8412d958661634 - torchvision version: 0.16.0.dev20230628 Co-authored-by: Roll PyTorch Action <[email protected]>
Bump torch-mlir to ff7f8b2, and llvm to d13da15. For now, llvm points to the upstream commit; this will change again after xilinx/llvm-project itself is bumped. The test failure in `Conversion/TorchToTosa/torch-backend-to-tosa-backend-pipeline.mlir` is expected, as it requires changes from the xilinx llvm fork.
Fix torchdynamo test for IndexSelect by not letting it decompose.
Properly merge tests that work in tosa but not tosa_make_fx.
The decomposition is required for make_fx_tosa. However, it triggers a bug in torchdynamo leading to new test failures. Add the failing test to the fail set of torchdynamo.
Previously, these tests passed. However, the decomposition of `aten.index.TensorHackedTwin` was removed [0], now the tests also fail for make_fx_tosa. [0] llvm@60bad54#diff-bec402e56f7c88a9b22018c90df7e9ca721bdd097d4ffdd7c13d9d3f4c2e0688
Move `CumsumModule_basic` into the make_fx_tosa crashing set (was XFAIL before). The previous error was:
```
'tensor.empty' op incorrect number of dynamic sizes, has 3, expected 2
"tensor.empty"(%13, %15, %17) : (index, index, index) -> tensor<?x7x7x?xf32>
```
Upstream `SliceOutOfUpperBoundIndexStaticModule_basic` passes for TOSA. However, it should not, and our fork properly fails to lower; hence the test is dropped from the pass set. Currently produced slice operator:
```
Legalizing operation : 'tosa.slice'(0x55f54e681990) {
  %10 = "tosa.slice"(%0) <{size = array<i64: 6, 4, 0>, start = array<i64: 0, 0, 7>}> : (tensor<6x4x7xf32>) -> tensor<6x4x0xf32>
} -> SUCCESS : operation marked legal by the target
```
The output shape has dimensions of size zero, which is not allowed in TOSA.
Set PyTorch and TorchVision version to nightly release 2023-09-26. aten._convolution.deprecated changes done because upstream PyTorch has now added support for fp16 native convolution on CPU. Refer: pytorch/pytorch@7c90521 Signed-Off By: Vivek Khandelwal <[email protected]>
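A minimal sketch, assuming a PyTorch build new enough to include the referenced CPU fp16 convolution support; older builds raise an error here instead of computing a result:

```python
import torch
import torch.nn.functional as F

# fp16 convolution on CPU, which the referenced PyTorch change enables.
x = torch.randn(1, 3, 8, 8, dtype=torch.float16)
w = torch.randn(4, 3, 3, 3, dtype=torch.float16)
y = F.conv2d(x, w)
print(y.dtype, y.shape)  # torch.float16 torch.Size([1, 4, 6, 6])
```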
This reverts commit da585a4.
A new file with the requirements for the stable version of torch/torchvision was introduced recently; update them to the new versions in this bump.
TinaAMD force-pushed the tina.FXML-3548-bump-torch-mlir-to-ff7f8b21dcc842a4f70209a6d255d54c4ef6e39b branch from 32f9559 to 81dae2a on January 29, 2024 at 14:01
Target llvm-project fused-ops again, as it now contains the llvm bump.
mgehre-amd approved these changes on Feb 2, 2024
mgehre-amd deleted the tina.FXML-3548-bump-torch-mlir-to-ff7f8b21dcc842a4f70209a6d255d54c4ef6e39b branch on February 2, 2024 at 10:59