[AutoBump] Merge with 41d04a89 (Jun 12) (64) #301

Merged
merged 13 commits into feature/backport_ea1_ops from bump_to_41d04a89 on Sep 11, 2024

Conversation

mgehre-amd
Collaborator

No description provided.

rsuderman and others added 13 commits on June 7, 2024 at 13:58
Add missing types for tracing float8 types.
Linalg conversion requires a mapping for f8 types.
* Let `toBuiltinTensor()` reflect the original dtype of `!torch.vtensor`.
* Backends handle dtype conversion themselves.
This commit adds the lowerings for the SequenceAt, SequenceEmpty,
SequenceInsert, and SequenceErase ops (a small illustrative example follows below).

Signed-off-by: Vivek Khandelwal <[email protected]>
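For reference, a small, hedged example (written with the onnx Python helpers, not taken from this commit's tests) of a graph using the sequence ops whose lowerings are added; the shapes and names are placeholders.

```python
import onnx
from onnx import TensorProto, helper

# Scalar int64 position used by SequenceAt.
pos = helper.make_tensor("pos", TensorProto.INT64, [], [0])

nodes = [
    # Create an empty sequence of float tensors.
    helper.make_node("SequenceEmpty", [], ["seq"], dtype=TensorProto.FLOAT),
    # Append the graph input to the sequence.
    helper.make_node("SequenceInsert", ["seq", "x"], ["seq1"]),
    # Read the element at position 0 back out as a plain tensor.
    helper.make_node("SequenceAt", ["seq1", "pos"], ["y"]),
]

graph = helper.make_graph(
    nodes, "sequence_demo",
    [helper.make_tensor_value_info("x", TensorProto.FLOAT, [2, 3])],
    [helper.make_tensor_value_info("y", TensorProto.FLOAT, [2, 3])],
    initializer=[pos],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 17)])
```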
Tests the basic constructs of registering a custom op and its abstract
implementations (with FakeTensors) in Python, going through TorchDynamo
export, followed by importing the shape expressions into the Torch
dialect.

Also fixes the importer, where previously the symbolic bind op insertion
was not gated in one place.
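A minimal sketch of the pattern this test exercises, assuming recent torch.library/torch.export APIs; the library name mylib and the op scale are placeholders, and this is not the exact test added by the commit.

```python
import torch

# Define a custom op and a concrete eager implementation.
lib = torch.library.Library("mylib", "DEF")
lib.define("scale(Tensor x, float factor) -> Tensor")

@torch.library.impl(lib, "scale", "CompositeExplicitAutograd")
def scale_impl(x, factor):
    return x * factor

# Abstract (FakeTensor) implementation: only propagates shape/dtype, so
# TorchDynamo export can trace the op without running it.
@torch.library.impl_abstract("mylib::scale")  # register_fake in newer releases
def scale_abstract(x, factor):
    return torch.empty_like(x)

class M(torch.nn.Module):
    def forward(self, x):
        return torch.ops.mylib.scale(x, 2.0)

# Export with a symbolic batch dimension; the resulting shape expressions are
# what the FX importer turns into symbolic bind ops in the Torch dialect.
batch = torch.export.Dim("batch")
ep = torch.export.export(M(), (torch.randn(3, 4),),
                         dynamic_shapes={"x": {0: batch}})
```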
…lvm#3443)

This patch fixes several failing tests in our [external test
suite](https://github.com/nod-ai/SHARK-TestSuite/tree/main/iree_tests/onnx/node/generated),
and addresses some of the issues discussed in llvm#3420.
There is currently no int16 quantization support in torch. This patch
adds a new mlir type to correspond to the missing "torch.qint16" type,
and enables lowering of quantization-related onnx ops using int16 types.

In follow-up patches, custom quantization logic for ops like
aten.matmul/aten.mm/aten.convolution may need to be revisited to allow
support for qint16. The passes in FuseQuantizedOps.cpp may also need
slight modifications.
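For context, a hedged illustration of the kind of graph this enables importing: a QuantizeLinear/DequantizeLinear pair with an int16 zero point (valid from opset 21), which now maps onto the new torch.qint16 type during import. The model below is illustrative, not taken from the test suite.

```python
import onnx
from onnx import TensorProto, helper

# Quantization parameters: a float scale and an int16 zero point.
scale = helper.make_tensor("scale", TensorProto.FLOAT, [], [0.02])
zero_point = helper.make_tensor("zp", TensorProto.INT16, [], [0])

nodes = [
    helper.make_node("QuantizeLinear", ["x", "scale", "zp"], ["xq"]),
    helper.make_node("DequantizeLinear", ["xq", "scale", "zp"], ["xdq"]),
]

graph = helper.make_graph(
    nodes, "int16_qdq",
    [helper.make_tensor_value_info("x", TensorProto.FLOAT, [4])],
    [helper.make_tensor_value_info("xdq", TensorProto.FLOAT, [4])],
    initializer=[scale, zero_point],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 21)])
```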
…lvm#3445)

Adds the following arguments (a rough sketch of their effect follows the list):
- "--clear-domain": enabling this flag (default False) will delete the
domain attribute from each node in the onnx model before importing.
Shape inference does not seem to work for onnx ops in custom domains. In
the rare cases where these ops have a corresponding counterpart in base
onnx, enabling this flag might allow shape inference to work properly.
- "--opset-version": allows setting the opset version manually. This
will cause the importer to attempt to update the opset_version of the
onnx model before importing. Newer opset versions sometimes have more
robust shape inference patterns.
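A rough sketch, in terms of the onnx Python API, of what these options effectively amount to before import; it approximates the importer's behavior rather than reproducing its actual code, and the path and opset value are placeholders.

```python
import onnx
from onnx import version_converter

model = onnx.load("model.onnx")

# "--clear-domain": delete the domain attribute from every node so the ops
# resolve against the default onnx domain (where shape inference is available).
for node in model.graph.node:
    node.ClearField("domain")

# "--opset-version 21": attempt to convert the model to the requested opset,
# whose shape inference patterns may be more robust.
model = version_converter.convert_version(model, 21)

# Re-run shape inference on the adjusted model before handing it to the importer.
model = onnx.shape_inference.infer_shapes(model)
onnx.save(model, "model.prepared.onnx")
```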
Handles onnx exporters emitting default-valued attributes.

Signed-off-by: Suraj Sudhir <[email protected]>
@mgehre-amd mgehre-amd changed the title [AutoBump] Merge with 41d04a89 (Jun 12) (2) [AutoBump] Merge with 41d04a89 (Jun 12) (64) Sep 9, 2024
Base automatically changed from bump_to_f794582b to feature/backport_ea1_ops September 11, 2024 12:08
@mgehre-amd mgehre-amd merged commit 5f167e7 into feature/backport_ea1_ops Sep 11, 2024
4 checks passed
@mgehre-amd mgehre-amd deleted the bump_to_41d04a89 branch September 11, 2024 12:08