forked from llvm/torch-mlir
Bump torch_mlir to Jan 4th #154
Closed
Conversation
This is part 1 of 2; part 2 will also include upstreaming the FX importer. I started with ONNX because it forces some project layout updates and is more self-contained/easier as a first step. Deviating somewhat from the RFCs on project layout, I made the following decisions:

* Located `onnx_importer.py` in `torch_mlir.extras`, as Maks had already opened up that namespace and it seemed to fit. Better to have fewer things at that level.
* Set up the build so that the root project only contains MLIR Python and pure Python deps (like the importers), but this can be augmented by `projects/` adding more, depending on which features are enabled.
* The default build continues to build everything, whereas in `TORCH_MLIR_ENABLE_ONLY_MLIR_PYTHON_BINDINGS=1` mode it builds a `torch-mlir-core` wheel with the pure contents only.

`onnx_importer.py` and `importer_smoke_test.py` are almost verbatim copies from SHARK-Turbine. I made some minor local alterations to adapt to paths and generalize the way they interact with the outer project. I expect I can copy these back to Turbine verbatim from here. I also updated the license boilerplate (they have the same license but slightly different project norms for the headers) but retained the correct copyright.

Other updates:

* Added the ONNX importer unit test (which can also generate test data) in lit, conditioned on the availability of the Python `onnx` package. In a followup, once I know everything is stable, I'll add another env var that the CI can set to always enable this so we know conclusively whether tests pass.
* Moved the ONNX conversion readme to `docs/`.
* Renamed the CMake option `TORCH_MLIR_ENABLE_ONLY_MLIR_PYTHON_BINDINGS` -> `TORCH_MLIR_ENABLE_PYTORCH_EXTENSIONS` and inverted its sense. Made the JitIR importer and LTC options `cmake_dependent_option`s for robustness.
…mOp (llvm#2591) Co-authored-by: LiuYuanqiang <[email protected]>
Simple Python console script to import an ONNX protobuf to the torch dialect for additional processing. For installed wheels, this can be used with something like:

```
torch-mlir-import-onnx test/python/onnx_importer/LeakyReLU.onnx
```

Or from a dev setup:

```
python -m torch_mlir.tools.import_onnx ...
```
This commit adds the OnnxToTorch support for Matmul.
Add `aten.selu` operation to `torch` dialect.
Added the less-or-equal operation (`onnx.LessOrEqual`) to OnnxToTorch.

Co-authored-by: root <[email protected]>
…cription) (llvm#2601) This commit adds the OnnxToTorch support for Reciprocal, Round, ScatterElements, Sigmoid, Sin, Tanh, Sqrt, Sub, Sum, Where, Xor, Squeeze, Unsqueeze ops. For reviewers, the ops that weren't trivial and probably require extra review are Sum, Squeeze, and Unsqueeze.
This commit adds support for the `LeakyRelu` and `GatherElements` ops in the onnx pipeline. Signed-off-by: Gaurav Shukla <[email protected]>
Includes the lowering from the `aten` equivalent to `tensor` operations.
This replaces the lowering of aten.cat with tensor.concat, allowing more efficient handling of concatenations in downstream flows. The refbackend populates concat decomposition patterns that can be used to recover the previous lowering.
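As a rough illustration of the decomposition the refbackend can recover, a concat is rewritten as an empty destination plus one slice insertion per input at a running offset. The following is a hypothetical pure-Python sketch of that idea for flat 1-D buffers, not the actual MLIR pattern code:

```python
# Hypothetical sketch: model concat as "allocate destination, then copy each
# input at a running offset" -- the spirit of decomposing tensor.concat into
# an empty tensor plus insert_slice-style copies. Shown for 1-D lists only.

def decomposed_concat(inputs):
    total = sum(len(x) for x in inputs)
    dest = [None] * total            # stand-in for the empty destination buffer
    offset = 0
    for x in inputs:                 # one slice insertion per input
        dest[offset:offset + len(x)] = x
        offset += len(x)
    return dest
```

The running-offset structure is what lets each copy become an independent slice insertion in the decomposed form.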
Lowerings for `transpose` from ONNX to `aten`. The implementation decomposes the permutation into a sequence of `aten.transpose` operations, each of which swaps a single pair of dimensions; since `onnx.Transpose` can permute dimensions arbitrarily, several `aten.transpose` ops may be required.
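The pairwise-swap decomposition can be sketched in plain Python. This is a hypothetical helper (`permutation_to_swaps` is not a name from the PR), showing how an arbitrary permutation reduces to the dimension-pair swaps that a single `aten.transpose` can express:

```python
# Hypothetical sketch (not the actual lowering code): decompose an arbitrary
# permutation, as accepted by onnx.Transpose, into a sequence of pairwise
# dimension swaps, each realizable by one aten.transpose.

def permutation_to_swaps(perm):
    """Return a list of (dim_a, dim_b) swaps that realizes `perm`."""
    perm = list(perm)
    current = list(range(len(perm)))   # identity layout, mutated as we swap
    swaps = []
    for i, wanted in enumerate(perm):
        j = current.index(wanted)      # where the wanted dim currently lives
        if j != i:
            swaps.append((i, j))
            current[i], current[j] = current[j], current[i]
    return swaps
```

Each returned pair corresponds to one `aten.transpose`; the identity permutation needs none, and a full reversal of n dimensions needs at most n - 1 swaps.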
…ch (llvm#2649) The three remaining compare operations, onnx.Greater, onnx.Less, and onnx.GreaterOrEqual, are also added with this pull request. This concludes a set of basic tensor compare functions.
This commit adds the OnnxToTorch support for Gelu op. --------- Co-authored-by: Rob Suderman <[email protected]>
This commit adds the OnnxToTorch support for ReduceSum, ReduceMean, and ReduceMin ops.
This commit adds the OnnxToTorch support for AveragePool op. Signed-Off By: [email protected]
Update PYTHONPATH in the docs to point to the new directory layout.
Add TorchOnnxToTorch support for the Erf function.
llvm@be3e74b breaks bazel in post-submit. The revision bumps it to include the bazel fix.
Adding the `--progress` flag shows the same output as what `git clone` would show. This is very nice for slow connections. Without it, the command may run for many minutes without providing any indication that it is still doing something. For `--depth=1`, I think it should be safe as most people have new enough git versions nowadays, but let's be safe and make it an optional suggestion. I ran all the tests fine with `--depth=1`, but I don't know whether things will keep working when the submodules get updated for systems with old git versions.
This commit adds the OnnxToTorch support for Conv and ConvTranspose op. Signed-Off By: [email protected]
…2683) Going through the `#torch-mlir` channel on the `llvm` Discord, I realized that there are some useful commands that would be extremely helpful in creating ONNX lowerings to Torch-MLIR. A lot of people seem to be contributing to this, so I thought it would be good to add this information to the docs. These tools helped streamline the development of this PR: llvm#2682
The expression for HardSigmoid in ONNX (https://onnx.ai/onnx/operators/onnx__HardSigmoid.html):

```
max(0, min(1, alpha * x + beta))
```

is inherently different from HardSigmoid in Torch (https://pytorch.org/docs/stable/generated/torch.nn.Hardsigmoid.html), which is:

```
0            if x < -3
1            if x > 3
x/6 + 1/2    otherwise
```

Given that, it was better to compute out the entire ONNX expression when translating it to Torch MLIR, which is what this PR does. Some of the logic is shared with `DecomposeComplexOps`; the shared logic between `DecomposeComplexOps` and `DefaultDomainGToP` was therefore refactored into a `Utils` file.
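A small numeric check makes the mismatch concrete. This is a hypothetical plain-Python sketch (the function names are illustrative, not from the PR): the two formulas agree only when alpha = 1/6 and beta = 1/2, and ONNX's default alpha is 0.2, which is why the lowering computes the full ONNX expression rather than mapping to Torch's HardSigmoid directly:

```python
# Illustrative sketch, not lowering code: compare the ONNX HardSigmoid
# formula against Torch's piecewise definition.

def onnx_hardsigmoid(x, alpha, beta):
    return max(0.0, min(1.0, alpha * x + beta))

def torch_hardsigmoid(x):
    if x <= -3.0:
        return 0.0
    if x >= 3.0:
        return 1.0
    return x / 6.0 + 0.5
```

With alpha = 1/6 and beta = 1/2 the two agree everywhere (the clamp at +/-1 coincides with the cutoffs at +/-3), but with ONNX's default alpha = 0.2 they diverge for nonzero inputs.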
Changes made during upstreaming:

* Removed comments attributing some copied code back to torch-mlir (since it is now repatriated).
* Re-organized imports.
* Inlined RefMapping/RefTracker and TypeSubclassMap from an external utility module.
* Added FxImporter class comments.
* Updated stack trace extraction to be fail-safe.
* Added an entry-point for `import_frozen_exported_program` which uses the shiny new upstream `torch.export.export()` API (versus the lower-level/older API that Turbine is presently using). This necessitated a small FX rewrite to line external state management up with current conventions.
* Adapted one of Turbine's importer tests to go with this initial submission. Turbine unfortunately has a lot of more-integration-ey tests, and I would like to extract those as more of unit tests of the importer features and upstream them that way vs trying to copy directly. For now, one overall test with the initial submission gets us moving.

I acknowledge that there are some code quality things that could be improved in this submission: it was authored over the course of many months (and often via some trial and error). I would like to keep it relatively converged with the downstream for the next few steps while getting the test suite upstreamed; then it will be easier to take a hygienic pass through the code.

Including co-authors for contributors in the git log of the original repository.

Co-authored-by: Ean Garvey <[email protected]>
Co-authored-by: Avinash Sharma <[email protected]>
Co-authored-by: Arham Khan <[email protected]>
Co-authored-by: brucekimrokcmu <[email protected]>
Co-authored-by: saienduri <[email protected]>
…t op. This commit adds the OnnxToTorch support for BatchNormalization and Concat op. Signed-Off By: [email protected]
Signed-Off By: Vivek Khandelwal <[email protected]>
This commit adds the OnnxToTorch support for Reshape op.
…2692) This commit adds the OnnxToTorch support for GlobalAveragePool op. Signed-Off By: [email protected]
lower onnx min op to torch aten minimum op
…rary. (llvm#2694) As noted in the plan when this work started, we need to produce an ORT EP plugin for a downstream project, and this will necessitate a C-based ONNX importer (as opposed to the existing Python one). Because this comes with dependencies that we do not want to impart on various projects, it is optional in torch-mlir. It is also factored so that it can be used as standalone sources in downstreams that need it. Since it only depends on public C APIs on the MLIR side, this will make build coupling a lot better (a C++ dep is not needed on the compiler, and it is trivial to dynamically load).

Our original plan was just to maintain this fork off to the side in our ORT plugin, but once work started, it seemed better to write it clean and contribute it upstream for anyone to use. We expect that for non-ORT use, the Python importer will have better ergonomics for most folks. I will follow up with a test suite refactor so that we can drive either the Python or the C importer.

This is a relatively mechanical port from Python to C, borrowing some scaffolding from the old JitIR importer. It does attempt to lay some groundwork for external data, which will need to be implemented on the Python side as well.
llvm#2646 Decompose `aten.exponential()` via inverse transform sampling into: -ln(1 - x)/lambda, where x is drawn uniformly from [0, 1).
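The math behind that decomposition can be sketched in a few lines of plain Python. This is an illustrative model of the formula, not the MLIR rewrite pattern; the helper names are hypothetical:

```python
import math
import random

# Inverse transform sampling: for U ~ Uniform[0, 1),
# X = -ln(1 - U) / lambda is distributed as Exponential(lambda).

def exponential_sample(lambd, u):
    return -math.log(1.0 - u) / lambd

def sample_mean(lambd, n=100_000, seed=0):
    """Empirical mean of n samples; should approach 1/lambda."""
    rng = random.Random(seed)
    return sum(exponential_sample(lambd, rng.random()) for _ in range(n)) / n
```

Since an Exponential(lambda) variable has mean 1/lambda, the empirical mean of many samples is an easy sanity check on the formula.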
Co-authored-by: Rob Suderman <[email protected]>
Adds a lowering to Linalg for reflection_pad1d. Based on ideas/code from draft PR llvm#2693. --------- Co-authored-by: Kumar Deepak <[email protected]>
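The semantics being lowered can be modeled compactly in plain Python. This is a hypothetical 1-D sketch of reflection padding itself (not the Linalg lowering): each side is padded with a reversed copy of the elements adjacent to the boundary, excluding the boundary element:

```python
# Illustrative model of reflection_pad1d semantics on a flat list.
# Reflection padding mirrors around the first/last element without
# repeating it, so each pad amount must be smaller than the input size.

def reflection_pad1d(xs, pad_left, pad_right):
    assert pad_left < len(xs) and pad_right < len(xs), "pad must be < input size"
    left = xs[1:pad_left + 1][::-1]        # mirror around xs[0], excluding it
    right = xs[-pad_right - 1:-1][::-1]    # mirror around xs[-1], excluding it
    return left + xs + right
```

For example, padding `[1, 2, 3, 4]` by 2 on each side yields `[3, 2, 1, 2, 3, 4, 3, 2]`, matching PyTorch's `ReflectionPad1d`.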
Need this for LayerNorm
Set PyTorch and TorchVision version to nightly release 2024-01-02. Signed-Off By: Vivek Khandelwal <[email protected]>
The doc appears to have been copy-and-pasted from the linalg-on-tensors class.
Requires LLVM Dec 15:

```
commit aa165ed
Author: Rob Suderman <[email protected]>
Date:   Fri Dec 15 11:35:40 2023 -0800

    [mlir][math] Added `math.sinh` with expansions to `math.exp` (#75517)
```
Base automatically changed from `matthias.bump_torch_mlir3` to `feature/backport_ea1_ops` on March 13, 2024 09:44.
Requires LLVM Dec 15 (Xilinx/llvm-project#128)