forked from llvm/torch-mlir
[AutoBump] Merge with 94f54109 (Oct 09) (76) #446
Merged
Conversation
- Support bidirectional LSTM (utilising the forward LSTM layer with flipped inputs and outputs)
- Support layout 1
- Support default cases for attrs `clip` and `input_forget`
- Support returning partial outputs (1-3)
- Fixes for alt_e2e_tests LSTM tests (1, 2, 3)
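The reverse-direction trick in the first bullet can be sketched in plain Python. This is a hypothetical illustration (not the torch-mlir lowering itself): a reverse scan over a sequence is equivalent to running the forward scan on the time-flipped input and then flipping the output sequence back.

```python
# Sketch only: models the "flipped inputs and outputs" idea with a generic
# recurrent step function instead of a real LSTM cell.

def forward_scan(step_fn, inputs, init_state):
    """Run a recurrent step over `inputs` front-to-back, collecting outputs."""
    state = init_state
    outputs = []
    for x in inputs:
        state = step_fn(state, x)
        outputs.append(state)
    return outputs

def reverse_scan_via_flip(step_fn, inputs, init_state):
    """Reverse-direction scan expressed using only the forward scan."""
    flipped = list(reversed(inputs))          # flip the input sequence
    outputs = forward_scan(step_fn, flipped, init_state)
    return list(reversed(outputs))            # flip the outputs back
```

With `step_fn = lambda s, x: s + x`, `reverse_scan_via_flip(step_fn, [1, 2, 3], 0)` produces the suffix sums, exactly what a backward recurrence over the sequence would compute.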
# Description

Implementation of the op for `torch.aten.unfold`: [TorchToLinalg Op Support #347](nod-ai/SHARK-ModelDev#849). Documentation of the op can be found here: [PyTorch Docs](https://pytorch.org/docs/stable/generated/torch.Tensor.unfold.html)

For this op, we apply a sliding window of some `size` along a single `dimension`, with `step` in between iterations.

`Declaration: aten::unfold(Tensor(a) self, int dimension, int size, int step) -> Tensor(a)`

The resulting `unfolded` tensor changes the extent of `dimension` to the number of blocks the sliding window extracts, with an additional dimension of `size` appended (the number of columns of the output tensor translates directly from the size of the sliding window). So if we had a tensor of rank 3 (A x B x C), with dimension = 1, size = 2 and step = 2:

(A x B x C) |=> (A x ((B - size) // step + 1) x C x size)

After extracting the window from the input tensor, we insert the (1 x size) slice into the output tensor. We can make this simpler by mapping the output indices to the input indices, as done in the official implementation: [PyTorch Code](https://github.com/pytorch/pytorch/blob/main/torch/_inductor/lowering.py#L1694)
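The index mapping described above can be written as a tiny pure-Python reference for the rank-3, `dimension = 1` case. This is a hypothetical reference model, not the TorchToLinalg lowering: each output element satisfies `out[a][b][c][s] = in[a][b*step + s][c]`.

```python
# Sketch: aten::unfold on a rank-3 nested-list "tensor" along dimension 1.
# Output shape: A x ((B - size) // step + 1) x C x size.

def unfold_dim1(t, size, step):
    A, B, C = len(t), len(t[0]), len(t[0][0])
    num_blocks = (B - size) // step + 1   # number of sliding-window positions
    return [[[[t[a][b * step + s][c]      # map output index back to the input
               for s in range(size)]
              for c in range(C)]
             for b in range(num_blocks)]
            for a in range(A)]
```

For a 1 x 4 x 1 input `[[[0], [1], [2], [3]]]` with `size = 2`, `step = 2`, this yields two blocks, `[0, 1]` and `[2, 3]`, matching the shape formula above.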
…utility (llvm#3777)

Signed-off-by: Vivek Khandelwal <[email protected]>
This patch adds two things:

1. Support for folding scalar patterns like `[1] --squeeze--> [] --unsqueeze--> [1]`.
2. A canonicalizer for `aten.view` that applies when we can statically or dynamically (through the scalarized view shapes) infer that it is a flatten or unflatten op in the last dim.

I'm not sure if this is the right place to be adding such a view canonicalizer. Catastrophically, there is a decomposition from flatten and unflatten into `aten.view`. Until this gets deleted (and it definitely should be deleted), I felt like this would be an appropriate temporary home. We run scalarize-shapes after lowering to the backend contract (i.e., after decomposing), and scalarize-shapes is required to be able to infer dynamic dims coming from size-int ops.
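Both patterns above are shape-level rewrites, so they can be modeled with plain shape lists. This is a hypothetical sketch of the two folds, not the torch-mlir pattern code; the helper names are made up for illustration.

```python
# Sketch: (1) the squeeze/unsqueeze round trip on a [1] shape is the identity;
# (2) a view is a last-dim flatten when it only merges trailing dims.

def fold_squeeze_unsqueeze(shape):
    """[1] --squeeze--> [] --unsqueeze--> [1]: the round trip folds away."""
    assert shape == [1]
    squeezed = []                 # squeezing a [1] tensor yields a rank-0 shape
    return [1] + squeezed         # unsqueezing restores the original shape

def view_is_flatten_last_dims(in_shape, out_shape):
    """True when a view merges only trailing dims (a flatten in the last dim)."""
    if len(out_shape) >= len(in_shape):
        return False
    k = len(out_shape) - 1        # dims before the flattened tail must match
    tail = 1
    for d in in_shape[k:]:
        tail *= d
    return in_shape[:k] == out_shape[:k] and tail == out_shape[k]
```

For example, `[2, 3, 4] -> [2, 12]` is recognized as a flatten of the last two dims, while `[2, 3, 4] -> [3, 12]` is not, since the leading dims disagree.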
This was preventing dynamic dims in an ONNX model from being reified (causing the generation of `tensor.cast`s and preventing fusion in IREE):

```mlir
%2 = torch.vtensor.literal(dense<[4, 256]> : tensor<2xsi64>) : !torch.vtensor<[2],si64>
%7 = torch.prim.ListConstruct %int2 : (!torch.int) -> !torch.list<int>
%8 = torch.aten.reshape %2, %7 : !torch.vtensor<[2],si64>, !torch.list<int> -> !torch.vtensor<[2],si64>
// ... chain of foldable ops linking %2 to the `shape` operand of a
// `torch.aten.broadcast_to ... -> !torch.vtensor<[?,?],si64>`
```
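The reshape in the snippet above is foldable because its target shape equals the operand's shape, which lets the literal `[4, 256]` propagate down the chain. A hypothetical shape-level sketch of that fold condition (names invented for illustration):

```python
# Sketch: aten.reshape folds to its input when the requested target shape
# matches the operand's static shape, so %8 can be replaced by %2 above.

def fold_identity_reshape(operand_shape, target_shape):
    """Return True when the reshape is the identity and can be folded away."""
    return list(operand_shape) == list(target_shape)
```

Here the operand is `!torch.vtensor<[2],si64>` and the target list is `[2]`, so the reshape is the identity and the constant shape values can reach the `broadcast_to`.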
cferry-AMD
approved these changes
Jan 10, 2025