forked from llvm/torch-mlir
[AutoBump] Merge with 41d04a89 (Jun 12) (64) #301
Merged
Conversation
Add missing float8 types for tracing.
Linalg conversion requires mapping for f8 types
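For context, a minimal sketch of what "tracing float8 types" exercises, assuming PyTorch's `torch.float8_e4m3fn` dtype and `torch.export` (the module and shapes here are illustrative):

```python
import torch

class CastToF8(torch.nn.Module):
    def forward(self, x):
        # The FX importer needs a dtype mapping to represent this result type.
        return x.to(torch.float8_e4m3fn)

# Trace the cast; the exported graph carries float8-typed values that the
# torch-mlir importer must be able to translate.
ep = torch.export.export(CastToF8(), (torch.randn(4, 4),))
print(ep.graph)
```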
* Let `toBuiltinTensor()` reflect the original dtype of `!torch.vtensor`.
* Backends handle dtype conversion themselves.
This commit adds the lowering for the SequenceAt, SequenceEmpty, SequenceInsert, and SequenceErase ops.
Signed-off-by: Vivek Khandelwal <[email protected]>
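As an illustration, a hedged sketch of an ONNX graph exercising two of these sequence ops, built with the standard `onnx.helper` API (graph and value names are illustrative):

```python
import onnx
from onnx import TensorProto, helper

# SequenceEmpty creates an empty tensor sequence; SequenceInsert appends to it.
empty = helper.make_node("SequenceEmpty", inputs=[], outputs=["seq"])
insert = helper.make_node("SequenceInsert", inputs=["seq", "t"], outputs=["out"])

t = helper.make_tensor_value_info("t", TensorProto.FLOAT, [2, 3])
seq_type = helper.make_sequence_type_proto(
    helper.make_tensor_type_proto(TensorProto.FLOAT, None))
out = helper.make_value_info("out", seq_type)

graph = helper.make_graph([empty, insert], "seq_demo", [t], [out])
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 17)])
onnx.checker.check_model(model)
```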
Tests the basic constructs of registering a custom op and its abstract implementations (with FakeTensors) in Python, going through TorchDynamo export, followed by importing the shape expressions into the Torch dialect. Also fixes the importer, where previously the symbolic bind op insertion was not gated in one place.
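A minimal sketch of the pattern being tested, using `torch.library`'s custom-op registration with a fake (abstract) implementation; the op name and shapes are illustrative:

```python
import torch

# Register a custom op with an eager implementation...
@torch.library.custom_op("mylib::double", mutates_args=())
def double(x: torch.Tensor) -> torch.Tensor:
    return x * 2

# ...and a FakeTensor ("abstract") implementation that only propagates
# shape and dtype, which is what export-time tracing sees.
@double.register_fake
def _(x):
    return torch.empty_like(x)

class M(torch.nn.Module):
    def forward(self, x):
        return torch.ops.mylib.double(x)

# A dynamic dim yields symbolic shape expressions, which the importer
# materializes as symbolic bind ops in the Torch dialect.
ep = torch.export.export(
    M(),
    (torch.randn(3, 4),),
    dynamic_shapes={"x": {0: torch.export.Dim("b")}},
)
print(ep)
```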
This fixes llvm#3418.
half_pixel is also the default mode used by ONNX; see https://onnx.ai/onnx/operators/onnx__Resize.html
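For reference, a Resize node built with `onnx.helper` that spells the default out explicitly (input/output names are illustrative):

```python
from onnx import helper

# Omitting coordinate_transformation_mode is equivalent to "half_pixel",
# so the lowering must treat the absent attribute as this mode.
resize = helper.make_node(
    "Resize",
    inputs=["X", "", "scales"],  # the empty name skips the optional 'roi' input
    outputs=["Y"],
    mode="linear",
    coordinate_transformation_mode="half_pixel",
)
```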
… (llvm#3443) This patch fixes several failing tests in our [external test suite](https://github.com/nod-ai/SHARK-TestSuite/tree/main/iree_tests/onnx/node/generated) and addresses some of the issues discussed in llvm#3420.
There is currently no int16 quantization support in torch. This patch adds a new MLIR type corresponding to the missing "torch.qint16" type and enables lowering of quantization-related onnx ops that use int16 types. In follow-up patches, the custom quantization logic for ops like aten.matmul/aten.mm/aten.convolution may need to be revisited to allow support for qint16. The passes in FuseQuantizedOps.cpp may also need slight modifications.
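As a hedged illustration of what such an op looks like on the ONNX side (int16 QuantizeLinear requires a newer opset; tensor names are illustrative):

```python
from onnx import TensorProto, helper

# The zero_point initializer's dtype determines the quantized output dtype,
# so an int16 zero point produces the int16 case this patch targets.
zp = helper.make_tensor("zp_i16", TensorProto.INT16, dims=[], vals=[0])
qnode = helper.make_node(
    "QuantizeLinear",
    inputs=["x", "scale", "zp_i16"],
    outputs=["x_q"],
)
```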
… (llvm#3445) Adds the following arguments (a sketch of their effect follows the list):
- "--clear-domain": enabling this flag (default False) deletes the domain attribute from each node in the onnx model before importing. Shape inference does not seem to work for onnx ops in custom domains; in the rare case where these ops have a corresponding counterpart in base onnx, enabling this flag might allow shape inference to work properly.
- "--opset-version": allows setting the opset version manually. This causes the importer to attempt to update the opset_version of the onnx model before importing. Newer opset versions sometimes have more robust shape inference patterns.
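A rough sketch of what both flags amount to, expressed with the standard onnx Python API (the file path is hypothetical):

```python
import onnx
from onnx import shape_inference, version_converter

model = onnx.load("model.onnx")  # hypothetical input model

# Approximately what --clear-domain does: drop custom domains so that
# nodes with a base-onnx counterpart become visible to shape inference.
for node in model.graph.node:
    node.domain = ""

# Approximately what --opset-version does: upgrade the model's opset
# before importing, since newer opsets infer shapes more robustly.
model = version_converter.convert_version(model, 21)
model = shape_inference.infer_shapes(model)
```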
Handles onnx exporters emitting default-valued attributes. Signed-off-by: Suraj Sudhir <[email protected]>
mgehre-amd changed the title from [AutoBump] Merge with 41d04a89 (Jun 12) (2) to [AutoBump] Merge with 41d04a89 (Jun 12) (64) on Sep 9, 2024.
Base automatically changed from bump_to_f794582b to feature/backport_ea1_ops on September 11, 2024 12:08.