Commit

Merge commit '087fea06' into bump_to_087fea06
mgehre-amd committed Aug 20, 2024
2 parents 981c977 + 087fea0 commit 5bc4220
Showing 287 changed files with 20,899 additions and 12,455 deletions.
26 changes: 0 additions & 26 deletions .github/workflows/lint.yml

This file was deleted.

15 changes: 15 additions & 0 deletions .github/workflows/pre-commit-all.yml
@@ -0,0 +1,15 @@
name: pre-commit (all files on push)

on:
push:
branches: [main, post-commit-test]

jobs:
pre-commit:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v3
- uses: pre-commit/[email protected]
with:
extra_args: --color=always --all-files
17 changes: 17 additions & 0 deletions .github/workflows/pre-commit.yml
@@ -0,0 +1,17 @@
name: pre-commit

on:
pull_request:

jobs:
pre-commit:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
# required to grab the history of the PR
fetch-depth: 0
- uses: actions/setup-python@v3
- uses: pre-commit/[email protected]
with:
extra_args: --color=always --from-ref ${{ github.event.pull_request.base.sha }} --to-ref ${{ github.event.pull_request.head.sha }}
2 changes: 1 addition & 1 deletion .github/workflows/releaseSnapshotPackage.yml
@@ -44,7 +44,7 @@ jobs:
uses: ad-m/[email protected]
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
branch: ${{ env.BRANCH_NAME }}
tags: true

- name: Create Release
Expand Down
21 changes: 21 additions & 0 deletions .pre-commit-config.yaml
@@ -0,0 +1,21 @@
# See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks
exclude: "GeneratedTorchOps\\.td|abstract_interp_lib_gen\\.py|\\.excalidraw|\\.ipynb"
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v3.2.0
hooks:
- id: trailing-whitespace
- id: end-of-file-fixer
- id: check-ast
- id: check-yaml
- id: check-added-large-files
- repo: https://github.com/psf/black
rev: 24.4.2
hooks:
- id: black

- repo: https://github.com/pre-commit/mirrors-clang-format
rev: 'v18.1.4'
hooks:
- id: clang-format
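The `exclude:` entry at the top of this config is a single Python regular expression that pre-commit tests against each candidate file path, skipping any path it matches. A minimal sketch of that filtering behavior, using the pattern from the config above (the sample paths are invented for illustration, not taken from the repository):

```python
import re

# The exclude pattern from .pre-commit-config.yaml; pre-commit treats it
# as a Python regex and skips any file whose path matches it.
exclude = re.compile(
    r"GeneratedTorchOps\.td|abstract_interp_lib_gen\.py|\.excalidraw|\.ipynb"
)

# Hypothetical paths, chosen to show both outcomes.
paths = [
    "include/torch-mlir/Dialect/Torch/IR/GeneratedTorchOps.td",
    "docs/architecture.excalidraw",
    "lib/Conversion/TorchToLinalg/Linear.cpp",
]

for p in paths:
    status = "skipped" if exclude.search(p) else "checked"
    print(p, "->", status)
```

Note that the pattern is unanchored, so it matches anywhere in the path; this is why the generated and notebook files stay untouched by `trailing-whitespace`, `black`, and friends.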
2 changes: 0 additions & 2 deletions .style.yapf

This file was deleted.

2 changes: 1 addition & 1 deletion CMakeLists.txt
@@ -247,4 +247,4 @@ add_subdirectory(projects)
# Finish with top-level Python bindings so it can handle additional deps.
if(MLIR_ENABLE_BINDINGS_PYTHON)
add_subdirectory(python)
endif()
8 changes: 4 additions & 4 deletions README.md
@@ -1,4 +1,4 @@
# The Torch-MLIR Project

The Torch-MLIR project aims to provide first class compiler support from the [PyTorch](https://pytorch.org) ecosystem to the MLIR ecosystem.

@@ -8,15 +8,15 @@ necessarily a reflection of the completeness or stability of the code, it
does indicate that the project is not yet endorsed as a component of LLVM.

[PyTorch](https://pytorch.org)
PyTorch is an open source machine learning framework that facilitates the seamless transition from research and prototyping to production-level deployment.

[MLIR](https://mlir.llvm.org)
The MLIR project offers a novel approach for building extensible and reusable compiler architectures, which address the issue of software fragmentation, reduce the cost of developing domain-specific compilers, improve compilation for heterogeneous hardware, and promote compatibility between existing compilers.

[Torch-MLIR](https://github.com/llvm/torch-mlir)
Several vendors have adopted MLIR as the middle layer in their systems, enabling them to map frameworks such as PyTorch, JAX, and TensorFlow into MLIR and subsequently lower them to their target hardware. We have observed half a dozen custom lowerings from PyTorch to MLIR, making it easier for hardware vendors to focus on their unique value, rather than needing to implement yet another PyTorch frontend for MLIR. The ultimate aim is to be similar to the current hardware vendors adding LLVM target support, rather than each one implementing Clang or a C++ frontend.

[![Release Build](https://github.com/llvm/torch-mlir/actions/workflows/buildRelease.yml/badge.svg)](https://github.com/llvm/torch-mlir/actions/workflows/buildRelease.yml)
[![pre-commit](https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit)](https://github.com/pre-commit/pre-commit)

## All the roads from PyTorch to Torch MLIR Dialect

@@ -76,7 +76,7 @@ pip install torch-mlir -f https://github.com/llvm/torch-mlir-release/releases/ex

## Demos

### TorchScript ResNet18

Standalone script to convert a PyTorch ResNet18 model to MLIR and run it on the CPU Backend:

20 changes: 15 additions & 5 deletions build_tools/autogen_ltc_backend.py
@@ -30,6 +30,7 @@
TORCHGEN_DIR = Path(torchgen.__path__[0]).resolve()
TORCH_MLIR_DIR = Path(__file__).resolve().parent.parent


def reindent(text, prefix=""):
return indent(dedent(text), prefix)
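The `reindent` helper above is used by the generator to normalize triple-quoted code templates before emitting them. A small self-contained sketch of what it does, using only the standard library (the sample snippet is invented for illustration):

```python
from textwrap import dedent, indent

def reindent(text, prefix=""):
    # Strip the common leading whitespace from the template,
    # then prepend `prefix` to every non-blank line.
    return indent(dedent(text), prefix)

body = """
    auto a = loctx->GetOutputOp(x);
    auto b = loctx->GetOutputOp(y);
"""
print(reindent(body, "        "))
```

Because `textwrap.indent` skips whitespace-only lines by default, blank lines inside a generated block stay empty rather than picking up trailing indentation, which keeps the emitted C++ tidy.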

@@ -75,7 +76,11 @@ def lowering_function(self, schema: LazyIrSchema):
)

# Only create this variable if it's used to avoid Wunused-variable
operand_idx_counter = "size_t i = 0;" if "i++" in (emplace_arguments_str + emplace_kwarguments) else ""
operand_idx_counter = (
"size_t i = 0;"
if "i++" in (emplace_arguments_str + emplace_kwarguments)
else ""
)

return reindent(
f"""
@@ -111,12 +116,16 @@ def __init__(self, binary_dir):
)
assert self.torch_ops_file.exists()
self.binary_dir = Path(binary_dir)
assert self.binary_dir.is_dir(), f"Binary directory not found: {self.binary_dir}"
assert (
self.binary_dir.is_dir()
), f"Binary directory not found: {self.binary_dir}"
self.source_yaml = self.binary_dir.joinpath("generated_native_functions.yaml")
self.backend_path = TORCH_MLIR_DIR.joinpath(
"projects", "ltc", "csrc", "base_lazy_backend"
)
assert self.backend_path.is_dir(), f"Backend path not found: {self.backend_path}"
assert (
self.backend_path.is_dir()
), f"Backend path not found: {self.backend_path}"
self.generated_path = self.binary_dir.joinpath(
"projects", "ltc", "csrc", "base_lazy_backend", "generated"
)
@@ -168,8 +177,9 @@ def generate_native_functions(self):
if ts_native_yaml_path.exists():
ts_native_yaml = yaml.load(ts_native_yaml_path.read_text(), yaml.CLoader)
else:
logging.warning(f"Could not find `ts_native_functions.yaml` at {ts_native_yaml_path}")

logging.warning(
f"Could not find `ts_native_functions.yaml` at {ts_native_yaml_path}"
)

parsed_yaml = parse_native_yaml(native_yaml_path, tags_yaml_path)
self.native_functions = parsed_yaml.native_functions
9 changes: 4 additions & 5 deletions build_tools/ci/test_posix.sh
@@ -26,17 +26,16 @@ echo "::endgroup::"

case $torch_version in
nightly)
# Failing with: NotImplementedError:
# Could not run 'aten::empty.memory_format' with arguments from the 'Lazy' backend.
# As of 2024-01-07
# echo "::group::Run Lazy Tensor Core e2e integration tests"
# python -m e2e_testing.main --config=lazy_tensor_core -v
# echo "::endgroup::"

# TODO: There is one failing test in this group on stable. It could
# be xfailed vs excluding entirely.
echo "::group::Run TorchDynamo e2e integration tests"
python -m e2e_testing.main --config=torchdynamo -v
# TODO: Need to verify in the stable version
echo "::group::Run FxImporter e2e integration tests"
python -m e2e_testing.main --config=fx_importer -v
echo "::endgroup::"
;;
stable)
8 changes: 3 additions & 5 deletions build_tools/python_deploy/build_linux_packages.sh
@@ -282,7 +282,7 @@ function _check_file_not_changed_by() {

function test_in_tree() {
local torch_version="$1"

echo ":::: Test in-tree"
cmake --build /main_checkout/torch-mlir/build --target check-torch-mlir-all

@@ -308,10 +308,8 @@ function test_in_tree() {
echo ":::: Run Onnx e2e integration tests"
python -m e2e_testing.main --config=onnx -v

# Dynamo is changing a lot in nightly versions, and thus the implementation
# tends to become incompatible to the stable version.
echo ":::: Run TorchDynamo e2e integration tests"
python -m e2e_testing.main --config=torchdynamo -v
echo ":::: Run FxImporter e2e integration tests"
python -m e2e_testing.main --config=fx_importer -v
;;
stable)
echo ":::: Test with stable torch"
12 changes: 7 additions & 5 deletions build_tools/scrape_releases.py
@@ -2,26 +2,28 @@
See https://github.com/llvm/torch-mlir/issues/1374
"""

import argparse
import json

import requests

# Parse arguments
parser = argparse.ArgumentParser()
parser.add_argument('owner', type=str)
parser.add_argument('repo', type=str)
parser.add_argument("owner", type=str)
parser.add_argument("repo", type=str)
args = parser.parse_args()

# Get releases
response = requests.get(
f"https://api.github.com/repos/{args.owner}/{args.repo}/releases")
f"https://api.github.com/repos/{args.owner}/{args.repo}/releases"
)
body = json.loads(response.content)

# Parse releases
releases = []
for row in body:
for asset in row['assets']:
for asset in row["assets"]:
releases.append((asset["name"], asset["browser_download_url"]))

# Output HTML
@@ -33,4 +35,4 @@
html += f" <a href='{url}'>{name}</a><br />\n"
html += """ </body>
</html>"""
print(html)
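The reformatted `scrape_releases.py` above flattens each release's assets into `(name, url)` pairs and then renders a minimal HTML index. A self-contained sketch of that transformation on a mocked API response (the field names follow the GitHub releases API that the script queries; the asset values are invented):

```python
import json

# Mocked /repos/{owner}/{repo}/releases response body. Real responses
# carry many more fields, but the script only reads `assets[*].name`
# and `assets[*].browser_download_url`.
body = json.loads("""
[
  {"assets": [
    {"name": "torch_mlir-snapshot.whl",
     "browser_download_url": "https://example.com/torch_mlir-snapshot.whl"}
  ]}
]
""")

# Flatten every asset of every release into (name, url) pairs.
releases = []
for row in body:
    for asset in row["assets"]:
        releases.append((asset["name"], asset["browser_download_url"]))

# Render the same one-link-per-line HTML body the script emits.
html = ""
for name, url in releases:
    html += f" <a href='{url}'>{name}</a><br />\n"
print(html)
```

Keeping the parsing and rendering as two separate passes, as the script does, makes it easy to reorder or filter assets before the HTML is produced.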
2 changes: 1 addition & 1 deletion docs/abstract_interp_lib.md
@@ -104,7 +104,7 @@ that this is minimal.

## Adding to the abstract interpretation library

See [Adding a Shape and Dtype Function](adding_a_shape_and_dtype_function.md)
See [Adding Abstract Interpretation Functions](adding_abstract_interpretation_functions.md)
for details on how to add a shape and dtype function for an operator.

## Rationale
2 changes: 1 addition & 1 deletion docs/add_ops.md
@@ -2,7 +2,6 @@

Collected links and contacts for how to add ops to torch-mlir.


<details>
<summary>Turbine Camp: Start Here</summary>
This document was previously known as `turbine-camp.md` at Nod.ai. "Turbine Camp" is part of Nod.ai's onboarding process, where new nod-ai folks learn about the architecture of our work by adding support for 2 ops to torch-mlir. Welcome to turbine camp. I decided to put this into torch-mlir because a lot of it is about torch-mlir.
@@ -27,6 +26,7 @@ The details of how we do it and helpful commands to help you set up each repo is
PS: IREE is pronounced Eerie, and hence the ghost icon.

## How to begin
0. Set up torch-mlir according to the instructions here: https://github.com/llvm/torch-mlir/blob/main/docs/development.md
1. You will start by adding support for 2 ops in torch-mlir, to get you familiar with the center of our pipeline. Begin by reading [torch-mlir's documentation on how to implement a new torch op](https://github.com/llvm/torch-mlir/blob/main/docs/Torch-ops-E2E-implementation.md), and set up `llvm/torch_mlir` using https://github.com/llvm/torch-mlir/blob/main/docs/development.md
2. Pick 1 of the yet-unimplemented ops from the following. You should choose one that looks easy to you. **Make sure you create an issue by clicking the little "target" icon to the right of the op, thereby marking the op as yours**
- [TorchToLinalg ops tracking issue](https://github.com/nod-ai/SHARK-Turbine/issues/347)
13 changes: 13 additions & 0 deletions docs/development.md
@@ -26,6 +26,19 @@ python -m pip install -r requirements.txt
python -m pip install -r torchvision-requirements.txt
```

## (Optional) Set up pre-commit

This project uses [pre-commit](https://pre-commit.com/) in its CI. You can
install it locally too in order to lint and fix your code prior to the CI
complaining about it.

```shell
pip install pre-commit
# You can run interactively with `pre-commit run`
# or install hooks so it runs automatically:
pre-commit install
```

## CMake Build

Two setups are possible to build: in-tree and out-of-tree. The in-tree setup is the most straightforward, as it will build LLVM dependencies as well.
1 change: 0 additions & 1 deletion docs/importers/onnx_importer.md
@@ -140,4 +140,3 @@ torch-mlir's representation:
* `ConstantOfShape`: Mapped to `torch.vtensor.literal` with
a corresponding `value` attribute.
1 change: 0 additions & 1 deletion docs/roadmap.md
@@ -277,4 +277,3 @@ directly provided a way to plug into this.

Additionally, we can leverage the [`pytorch-jit-paritybench`](https://github.com/jansel/pytorch-jit-paritybench)
to verify our end-to-end correctness on real models.

2 changes: 1 addition & 1 deletion include/CMakeLists.txt
@@ -1,2 +1,2 @@
add_subdirectory(torch-mlir)
add_subdirectory(torch-mlir-dialects)