Bump torch_mlir to Jan 4th #154

Closed
wants to merge 61 commits
Commits
74f7a0c
Upstream the ONNX importer. (#2636)
stellaraccident Dec 13, 2023
7cf52ae
[Torch Dialect]Add Support for AtenGroupNormOp and AtenNativeGroupNor…
JianzheXiao Dec 13, 2023
ed4df38
[onnx] Add torch-mlir-import-onnx tool. (#2637)
stellaraccident Dec 13, 2023
42392bc
[MLIR][ONNX] Add OnnxToTorch support for matmul ops (#2629)
wu-s-john Dec 13, 2023
6ddeb1a
[torch] Add support for aten.selu (#2640)
JianzheXiao Dec 14, 2023
4857606
[onnx] Lowerings from `onnx.selu` (#2634)
rsuderman Dec 14, 2023
65f517b
Bump LLVM version to 762964e97fd66ab7728ecc92aa153a61266fa9df. (#2645)
godot73 Dec 14, 2023
4ec8b9f
[onnx] add support for onnx.LessOrEqual (#2639)
afalkenberg1 Dec 15, 2023
f59c01f
[MLIR][ONNX] Add OnnxToTorch support for q-z ops (specific ops in des…
saienduri Dec 15, 2023
d9f4a80
Bump LLVM version to fcd54b368e6713acd236dc47401b5292755900d7 (#2654)
qedawkins Dec 15, 2023
eb9249e
[ONNX][MLIR] Add support for LeakyRelu and GatherElements op (#2655)
Shukla-Gaurav Dec 15, 2023
55e9401
Implement lowering of aten.cosh op. (#2635)
godot73 Dec 15, 2023
061af69
[onnx] Lowering for `onnx.shape` to `torch` and `tensor` (#2648)
rsuderman Dec 15, 2023
030b014
[TorchToLinalg] Lower aten.cat to tensor.concat (#2650)
qedawkins Dec 15, 2023
705ea95
[onnx] Lowerings from `onnx.transpose` (#2641)
rsuderman Dec 15, 2023
b3e9420
Bump LLVM version to aa165edca8545b212de084d5b18c3d30347f774a (#2658)
rsuderman Dec 16, 2023
6188869
[onnx] Add support for `onnx.sinh` (#2643)
rsuderman Dec 16, 2023
cee8563
[onnx] Support of onnx.Greater, onnx.Less, onnx.GreaterOrEqual to Tor…
afalkenberg1 Dec 16, 2023
ae1a6e4
[onnx] Lower `onnx.Gemm` to `torch` (#2663)
rsuderman Dec 16, 2023
9c655d0
[Bazel] Add conversion targets for `TorchToTensor` (#2666)
sjain-stanford Dec 17, 2023
791c666
[torch] Lower `torch.aten.sinh` to `linalg` (#2662)
rsuderman Dec 18, 2023
deacb8e
[MLIR][ONNX] Add OnnxToTorch support for Gelu (#2647)
wu-s-john Dec 18, 2023
698ff3a
[MLIR][ONNX] Add OnnxToTorch support for Reduction Ops (#2657)
saienduri Dec 18, 2023
8649b84
[MLIR][ONNX] Add OnnxToTorch support for AveragePool op. (#2672)
vivekkhandelwal1 Dec 19, 2023
89cfbe8
Update PYTHONPATH in development.md (#2644)
yinrun Dec 19, 2023
ebaab42
[ONNX] ONNX -> TORCH for Erf (#2673)
afalkenberg1 Dec 19, 2023
be3e74b
Integrate llvm/llvm-project@282d50147628 (2023-12-19) (#2675)
hanhanW Dec 19, 2023
869c258
Integrate llvm/llvm-project@99045b60b575 to fix bazel build. (#2677)
hanhanW Dec 20, 2023
20ab882
Fix typo in DecomposeBernoulli() match failure messages. (#2676)
godot73 Dec 20, 2023
8fa81d1
Tweak development.md for more speed (#2667)
rikhuijzer Dec 20, 2023
a24aadb
[aten] Make `torch.aten.matmul` to `linalg` work for non-broadcasting…
rsuderman Dec 20, 2023
11cc92d
[onnx] Lowerings from `onnx.tan` (#2642)
rsuderman Dec 20, 2023
8328998
Allow printing all IR in `torch_mlir.compile` (#2669)
rikhuijzer Dec 20, 2023
d75cff6
NFC: Remove unused variable causing a warning.
stellaraccident Dec 21, 2023
3226241
[MLIR][ONNX] Add OnnxToTorch support for Conv and ConvTranspose op.
vivekkhandelwal1 Dec 19, 2023
779a141
Mentioned helpful tooling to convert Onnx models to Torch MLIR (#2683)
wu-s-john Dec 21, 2023
46f2cb5
[onnx] Lower onnx.HardSigmoid to torch (#2682)
wu-s-john Dec 21, 2023
ccd469c
[fx] Upstream the turbine FxImporter to torch-mlir. (#2681)
stellaraccident Dec 21, 2023
85b86b3
[onnx] Fix importer variable names to make `mlir` legal (#2690)
rsuderman Dec 22, 2023
9a72c65
[MLIR][ONNX] Add OnnxToTorch support for BatchNormalization and Conca…
vivekkhandelwal1 Dec 21, 2023
0849fd0
[MLIR][ONNX] Fix onnx.conv lowering to handle bias tensor
vivekkhandelwal1 Dec 22, 2023
ee75e8d
[MLIR][ONNX] Add OnnxToTorch support for Reshape Op (#2698)
saienduri Dec 26, 2023
4f252c8
[MLIR][ONNX] Add OnnxToTorch support for GlobalAveragePool op. (#2692)
vivekkhandelwal1 Dec 26, 2023
abc6b0a
onnx to torch pow support (#2656)
aldesilv Dec 27, 2023
6847fc1
Fix since-opset too high (#2701)
renxida Dec 27, 2023
336cfb6
OnnxToTorch support for onnx.Mul op (#2699)
aldesilv Dec 27, 2023
2d796b7
lower onnx max op to torch aten maximum op (#2618)
aldesilv Dec 27, 2023
1b40b63
[onnx] Add torch-mlir-import-onnx native port as an optional tool/lib…
stellaraccident Dec 27, 2023
d560698
Lower `onnx.split` to `torch.aten` (#2686)
renxida Dec 28, 2023
8e389ff
Implement lowering of torch.aten.exponential (#2680)
godot73 Dec 28, 2023
9fc212e
support Onnx opset 1-13 ReduceMean where axes is supplied as an attr …
renxida Dec 28, 2023
6660a26
lower torch.aten.isinf to linalg (#2638)
renxida Dec 29, 2023
9adad9b
Add support for reflection_pad1d (#2706)
kumardeepakamd Jan 2, 2024
80bd093
Added tensorResultTypeAtIndex to Patterns.h
afalkenberg1 Dec 29, 2023
690827f
build: manually update PyTorch version
vivekkhandelwal1 Jan 2, 2024
1778314
add basic cumsum. this doesn't support the exclusive and reverse attr…
renxida Jan 3, 2024
3e9bacd
[torch-mlir] update e2e test class documentation (#2722)
aartbik Jan 4, 2024
4e5e34d
[MLIR][ONNX] Add OnnxToTorch support for Slice Op (#2696)
wu-s-john Jan 4, 2024
aa7e95f
[torch-mlir] remove trailing whitespace from e2e test files (#2727)
aartbik Jan 4, 2024
271e9bd
Merge commit 'aa7e95f' (Jan 4th) into matthias.bump_torch_mlir4
mgehre-amd Mar 11, 2024
3c483c0
Merge branch 'matthias.bump_torch_mlir3' into matthias.bump_torch_mlir4
mgehre-amd Mar 11, 2024
39 changes: 21 additions & 18 deletions CMakeLists.txt
@@ -25,6 +25,8 @@ project(torch-mlir LANGUAGES CXX C)
set(CMAKE_C_STANDARD 11)
set(CMAKE_CXX_STANDARD 17)

include(CMakeDependentOption)

#-------------------------------------------------------------------------------
# Project options
#-------------------------------------------------------------------------------
@@ -43,24 +45,13 @@ endif()

option(TORCH_MLIR_OUT_OF_TREE_BUILD "Specifies an out of tree build" OFF)

# PT1 options.
option(TORCH_MLIR_ENABLE_PROJECT_PT1 "Enables the PyTorch1 project under projects/pt1" OFF)
# TODO: Rename/scope these. They use historic names for now to ease migration
# burden.
option(TORCH_MLIR_ENABLE_JIT_IR_IMPORTER "Enables JIT IR Importer" ON)
option(TORCH_MLIR_ENABLE_LTC "Enables LTC backend" OFF)
option(TORCH_MLIR_ENABLE_ONLY_MLIR_PYTHON_BINDINGS "Build Torch dialect MLIR Python bindings but neither JIT IR Importer nor LTC backend" OFF)
if(TORCH_MLIR_ENABLE_ONLY_MLIR_PYTHON_BINDINGS)
set(TORCH_MLIR_ENABLE_JIT_IR_IMPORTER OFF)
set(TORCH_MLIR_ENABLE_LTC OFF)
endif()
# Force enable the PT1 project if either the JIT_IR_IMPORTER or LTC is enabled.
if(NOT TORCH_MLIR_ENABLE_PROJECT_PT1)
if(TORCH_MLIR_ENABLE_JIT_IR_IMPORTER OR TORCH_MLIR_ENABLE_LTC)
message(STATUS "Enabling projects/pt1 because features requiring it are enabled")
set(TORCH_MLIR_ENABLE_PROJECT_PT1 ON)
endif()
endif()
# PyTorch native extension gate. If OFF, then no features which depend on
# native extensions will be built.
option(TORCH_MLIR_ENABLE_PYTORCH_EXTENSIONS "Enables PyTorch native extension features" ON)
cmake_dependent_option(TORCH_MLIR_ENABLE_JIT_IR_IMPORTER "Enables JIT IR Importer" ON TORCH_MLIR_ENABLE_PYTORCH_EXTENSIONS OFF)
cmake_dependent_option(TORCH_MLIR_ENABLE_LTC "Enables LTC backend" OFF TORCH_MLIR_ENABLE_PYTORCH_EXTENSIONS OFF)

option(TORCH_MLIR_ENABLE_ONNX_C_IMPORTER "Enables the ONNX C importer" OFF)

#-------------------------------------------------------------------------------
# Configure out-of-tree vs in-tree build
@@ -235,4 +226,16 @@ endif()
# Sub-projects
#-------------------------------------------------------------------------------

# Sub-projects can bundle additional PyTorch extensions by adding them to this
# source target. It is typically empty unless features are enabled.
if(MLIR_ENABLE_BINDINGS_PYTHON)
declare_mlir_python_sources(TorchMLIRPythonTorchExtensionsSources)
endif()

# Build projects first as it may populate additional Python deps.
add_subdirectory(projects)

# Finish with top-level Python bindings so it can handle additional deps.
if(MLIR_ENABLE_BINDINGS_PYTHON)
add_subdirectory(python)
endif()
2 changes: 1 addition & 1 deletion build_tools/python_deploy/build_linux_packages.sh
@@ -351,14 +351,14 @@ function setup_venv() {
echo ":::: Using stable dependencies"
python3 -m pip install --no-cache-dir -r /main_checkout/torch-mlir/stable-requirements.txt
python3 -m pip install --no-cache-dir -r /main_checkout/torch-mlir/build-requirements.txt
python3 -m pip install --no-cache-dir -r /main_checkout/torch-mlir/test-requirements.txt
;;
*)
echo "Unrecognized torch version '$torch_version'"
exit 1
;;
esac

python3 -m pip install --no-cache-dir -r /main_checkout/torch-mlir/test-requirements.txt
}

function build_out_of_tree() {
2 changes: 1 addition & 1 deletion build_tools/write_env_file.sh
@@ -13,7 +13,7 @@ portable_realpath() {

td="$(portable_realpath "$(dirname "$0")"/..)"
build_dir="$(portable_realpath "${TORCH_MLIR_BUILD_DIR:-$td/build}")"
python_packages_dir="$build_dir/python_packages"
python_packages_dir="$build_dir/tools/torch-mlir/python_packages"

write_env_file() {
echo "Updating $build_dir/.env file"
11 changes: 7 additions & 4 deletions docs/development.md
@@ -5,9 +5,12 @@
```shell
git clone https://github.com/llvm/torch-mlir
cd torch-mlir
git submodule update --init
git submodule update --init --progress
```

Optionally, use `--depth=1` to make a shallow clone of the submodules.
While this is running, you can already set up the Python venv and dependencies in the next step.

## Setup your Python VirtualEnvironment and Dependencies

Also, ensure that you have the appropriate `python-dev` package installed
@@ -109,13 +112,13 @@ cmake --build build
### Linux and macOS

```shell
export PYTHONPATH=`pwd`/build/python_packages/torch_mlir:`pwd`/projects/pt1/examples
export PYTHONPATH=`pwd`/build/tools/torch-mlir/python_packages/torch_mlir:`pwd`/projects/pt1/examples
```

### Windows PowerShell

```shell
$env:PYTHONPATH = "$PWD/build/python_packages/torch_mlir;$PWD/projects/pt1/examples"
$env:PYTHONPATH = "$PWD/build/tools/torch-mlir/python_packages/torch_mlir;$PWD/projects/pt1/examples"
```

## Testing MLIR output in various dialects
@@ -126,7 +129,7 @@ Make sure you have activated the virtualenv and set the `PYTHONPATH` above
(if running on Windows, modify the environment variable as shown above):
```shell
source mlir_venv/bin/activate
export PYTHONPATH=`pwd`/build/tpython_packages/torch_mlir:`pwd`/projects/pt1/examples
export PYTHONPATH=`pwd`/build/tools/torch-mlir/python_packages/torch_mlir:`pwd`/projects/pt1/examples
python projects/pt1/examples/torchscript_resnet18_all_output_types.py
```

@@ -3,11 +3,8 @@
We enable the direct representation of many ONNX features directly in
the `torch` dialect as `torch.operator` custom ops with names like
`onnx.{OperatorName}`. The majority of ONNX operators are represented
with a systematic transformation. See
[onnx_importer.py](https://github.com/nod-ai/SHARK-Turbine/blob/main/python/shark_turbine/importers/onnx_importer.py)
for the reference importer which complies with the rules below
(this is planned to be upstreamed to torch-mlir proper in the near
future).
with a systematic transformation. See `torch_mlir.extras.onnx_importer`
for the reference importer which complies with the rules below.
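As a rough illustration of the naming scheme (a hypothetical sketch, not the importer's actual code), the transformation from an ONNX operator name to its `torch.operator` custom-op name looks like:

```python
def to_torch_operator_name(onnx_op_name: str, opset_domain: str = "") -> str:
    """Sketch of the systematic ONNX -> torch.operator naming convention:
    operators from the default (empty) domain become custom ops named
    onnx.{OperatorName}. The handling of non-default domains here is an
    assumption for illustration only."""
    prefix = "onnx." if not opset_domain else f"{opset_domain}."
    return prefix + onnx_op_name

print(to_torch_operator_name("Gemm"))  # onnx.Gemm
```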

## Adding new ONNX operators

@@ -26,17 +23,30 @@ are relatively straight-forward to map, following this general procedure:
* Open the corresponding implementation file `DefaultDomainXtoY.cpp`
corresponding with the alphabetic sort of the op and add a conversion.
* Generate successful test cases:
* Either run the Turbine importer to produce MLIR output for all
ops/models in the ONNX test suite or use a dump that someone has
generated:
* [2023-Nov-21](https://drive.google.com/file/d/1P6QaRXGnCeApjdjNmykLxWa-yqMmIO-d/view?usp=sharing)
* All `onnx_importer.py` tests are dumped to the test temp dir (success
or failure). This is typically located under
`tools/torch-mlir/test/python/onnx_importer/Output`. The `.mlir` files
under there should provide good variants to drive lit test coverage of
conversion.
* (Optionally) If there is an ONNX file that uses the op of interest,
convert that file to ONNX MLIR form using the following Python command:
`python -m torch_mlir.tools.import_onnx my_model.onnx`.
* There are often many variants of tests for checking conformance of
different historic ONNX encodings, but these are often not load bearing
at the MLIR level.
* Pick a handful of test cases and add them to
`test/Conversion/TorchOnnxToTorch/simple_ops_x_to_y.mlir` corresponding to
an alphabetic breakdown. At this time, ignore tests that are not exercising
useful differences in the pattern implementations.
* (Optionally) Use `torch-mlir-opt` to validate the outputs of the new op.
First, build the project using
`cmake --build build --target tools/torch-mlir/all`. This will generate
the conversion binary, `torch-mlir-opt`. Then call `torch-mlir-opt` with
the MLIR pass `convert-torch-onnx-to-torch`:
```
build/bin/torch-mlir-opt -convert-torch-onnx-to-torch \
-split-input-file [DESIRED_ONNX_FILE].mlir
```
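The alphabetic bucketing used above for `DefaultDomainXtoY.cpp` and `simple_ops_x_to_y.mlir` can be sketched as follows; the exact letter ranges (a-f, g-p, q-z) are an assumption for illustration, so check the actual filenames in the tree:

```python
def domain_bucket(op_name: str,
                  ranges=(("a", "f"), ("g", "p"), ("q", "z"))) -> str:
    """Pick the DefaultDomainXtoY.cpp bucket for an ONNX op by the first
    letter of its name. The three ranges are assumed, not authoritative."""
    first = op_name[0].lower()
    for lo, hi in ranges:
        if lo <= first <= hi:
            return f"DefaultDomain{lo.upper()}to{hi.upper()}.cpp"
    raise ValueError(f"no bucket for {op_name!r}")

print(domain_bucket("Gemm"))  # DefaultDomainGtoP.cpp
```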
* Generate failure test cases:
* Some ops have forms that do not (easily) map to torch-mlir. If you leave
an op under-implemented, add a failing test case to
9 changes: 9 additions & 0 deletions include/torch-mlir/Conversion/Passes.td
@@ -105,6 +105,15 @@ def ConvertTorchToLinalg : Pass<"convert-torch-to-linalg", "func::FuncOp"> {
let constructor = "mlir::torch::createConvertTorchToLinalgPass()";
}

def ConvertTorchToTensor : Pass<"convert-torch-to-tensor", "func::FuncOp"> {
let summary = "Convert Torch ops to the Tensor dialect";
let description = [{
Converts any `Torch` operators that are expressible as `Tensor` dialect
operations.
}];
let constructor = "mlir::torch::createConvertTorchToTensorPass()";
}

def ConvertTorchToTosa : Pass<"convert-torch-to-tosa", "func::FuncOp"> {
let summary = "Convert Torch ops to TOSA ops";
let description = [{
79 changes: 78 additions & 1 deletion include/torch-mlir/Conversion/TorchOnnxToTorch/Patterns.h
@@ -33,6 +33,8 @@ struct OpBinder {

Location getLoc() { return op->getLoc(); }

int getNumOperands() { return op->getNumOperands(); }

// Operand matches of different arities.
ParseResult tensorOperand(Value &value0) {
if (op->getNumOperands() != 1)
@@ -54,6 +56,20 @@
return success();
}

ParseResult tensorOperands(SmallVector<Value> &valueList,
int64_t numOperands) {
if (op->getNumOperands() != numOperands)
return failure();
for (int i = 0; i < numOperands; i++) {
Value curr = op->getOperand(i);
if (!toValidTensorType(curr.getType())) {
return failure();
}
valueList.push_back(curr);
}
return success();
}

ParseResult tensorOperandAtIndex(Value &valueIdx, int64_t idx) {
if (idx >= op->getNumOperands())
return failure();
@@ -62,6 +78,13 @@
return failure();
return success();
}

ParseResult tensorOperandsList(llvm::SmallVectorImpl<Value> &values) {
for (int i = 0; i < op->getNumOperands(); i++) {
values.push_back(op->getOperand(i));
}
return success();
}

// Result type matchers of different arities.
ParseResult tensorResultType(Torch::ValueTensorType &type0) {
@@ -74,6 +97,16 @@
return success();
}

ParseResult tensorResultTypeAtIndex(Torch::ValueTensorType &typeIdx, int64_t idx) {
if (idx >= op->getNumResults())
return failure();
auto t = toValidTensorType(op->getResult(idx).getType());
if (!t)
return failure();
typeIdx = t;
return success();
}

// Attribute accessors.
ParseResult s64BoolAttr(bool &value, StringRef nameSuffix,
bool defaultValue = false) {
@@ -113,8 +146,52 @@
return failure();
}

ParseResult f32FloatAttr(float &value, StringRef nameSuffix,
float defaultValue = 0.0f) {
SmallString<64> name("torch.onnx.");
name.append(nameSuffix);
auto attr = op->getAttr(name);
if (!attr) {
value = defaultValue;
return success();
}
if (auto floatAttr = dyn_cast<FloatAttr>(attr)) {
FloatType t = cast<FloatType>(floatAttr.getType());
if (t.getWidth() != 32)
return failure();
value = floatAttr.getValue().convertToFloat();
return success();
}
return failure();
}

ParseResult s64IntegerArrayAttr(llvm::SmallVector<int64_t> &values,
StringRef nameSuffix,
ArrayRef<int64_t> defaults) {
SmallString<64> name("torch.onnx.");
name.append(nameSuffix);
auto attr = op->getAttr(name);
if (!attr) {
values.append(defaults.begin(), defaults.end());
return success();
}
if (auto arrayAttr = dyn_cast<ArrayAttr>(attr)) {
for (auto element : arrayAttr) {
auto integerAttr = element.dyn_cast<IntegerAttr>();
if (!integerAttr)
return failure();
IntegerType t = cast<IntegerType>(integerAttr.getType());
if (!t.isSigned() || t.getWidth() != 64)
return failure();
values.push_back(integerAttr.getSInt());
}
return success();
}
return failure();
}

ParseResult customOpNameStringAttr(std::string &value, StringRef nameSuffix,
std::string defaultValue = "") {
SmallString<64> name("torch.onnx.");
name.append(nameSuffix);
auto attr = op->getAttr(name);
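The attribute accessors above (`s64BoolAttr`, `f32FloatAttr`, `s64IntegerArrayAttr`) share one convention: look up `torch.onnx.<name>` on the op and fall back to a caller-supplied default when the attribute is absent. In pseudo-Python, a sketch of just that lookup logic (not the MLIR API):

```python
def get_onnx_attr(op_attrs: dict, name_suffix: str, default):
    """Sketch of OpBinder's attribute lookup: the ONNX attribute name is
    prefixed with 'torch.onnx.' and the default is returned when absent."""
    name = "torch.onnx." + name_suffix
    if name not in op_attrs:
        return default   # mirrors the `if (!attr) { value = defaultValue; }` path
    # The real accessors additionally type-check the attribute (e.g. that a
    # FloatAttr is exactly 32 bits wide) before accepting it.
    return op_attrs[name]

print(get_onnx_attr({"torch.onnx.alpha": 0.2}, "alpha", 0.0))  # 0.2
```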
23 changes: 23 additions & 0 deletions include/torch-mlir/Conversion/TorchToTensor/TorchToTensor.h
@@ -0,0 +1,23 @@
//===------------------------------------------------------------*- C++ -*-===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
// Also available under a BSD-style license. See LICENSE.
//
//===----------------------------------------------------------------------===//

#ifndef TORCHMLIR_CONVERSION_TORCHTOTENSOR_TORCHTOTENSOR_H
#define TORCHMLIR_CONVERSION_TORCHTOTENSOR_TORCHTOTENSOR_H

#include "mlir/Dialect/Func/IR/FuncOps.h"
#include "mlir/Pass/Pass.h"
#include <memory>

namespace mlir {
namespace torch {
std::unique_ptr<OperationPass<func::FuncOp>> createConvertTorchToTensorPass();
} // namespace torch
} // namespace mlir

#endif // TORCHMLIR_CONVERSION_TORCHTOTENSOR_TORCHTOTENSOR_H